From patchwork Tue Oct 8 18:32:09 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13826785
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Tariq Toukan
Subject: [PATCH net-next 01/14] net/mlx5: qos: Flesh out element_attributes in mlx5_ifc.h
Date: Tue, 8 Oct 2024 21:32:09 +0300
Message-ID: <20241008183222.137702-2-tariqt@nvidia.com>
In-Reply-To: <20241008183222.137702-1-tariqt@nvidia.com>
References: <20241008183222.137702-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Cosmin Ratiu

element_attributes is used for multiple purposes, depending on the
scheduling element created. There are a few helper structs, defined a
long time ago, but they are not easy to find in the file and they are
about to get new members. This commit cleans up the area a bit by:
- moving the helper structs closer to where they are relevant.
- defining a helper union that includes all of them, to aid
  discoverability.
- making use of it everywhere element_attributes is used.
- using a consistent 'attr' name.
Signed-off-by: Cosmin Ratiu
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 18 +++--
 include/linux/mlx5/mlx5_ifc.h                 | 67 ++++++++++---------
 2 files changed, 45 insertions(+), 40 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index 02a3563f51ad..7154eeff4fd4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -339,7 +339,7 @@ static int esw_qos_vport_create_sched_element(struct mlx5_eswitch *esw,
 	struct mlx5_esw_rate_group *group = vport->qos.group;
 	struct mlx5_core_dev *dev = esw->dev;
 	u32 parent_tsar_ix;
-	void *vport_elem;
+	void *attr;
 	int err;
 
 	if (!esw_qos_element_type_supported(dev, SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT))
@@ -348,8 +348,8 @@ static int esw_qos_vport_create_sched_element(struct mlx5_eswitch *esw,
 	parent_tsar_ix = group ? group->tsar_ix : esw->qos.root_tsar_ix;
 	MLX5_SET(scheduling_context, sched_ctx, element_type,
 		 SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT);
-	vport_elem = MLX5_ADDR_OF(scheduling_context, sched_ctx, element_attributes);
-	MLX5_SET(vport_element, vport_elem, vport_number, vport->vport);
+	attr = MLX5_ADDR_OF(scheduling_context, sched_ctx, element_attributes);
+	MLX5_SET(vport_element, attr, vport_number, vport->vport);
 	MLX5_SET(scheduling_context, sched_ctx, parent_element_id, parent_tsar_ix);
 	MLX5_SET(scheduling_context, sched_ctx, max_average_bw, max_rate);
 	MLX5_SET(scheduling_context, sched_ctx, bw_share, bw_share);
@@ -443,8 +443,8 @@ __esw_qos_create_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *ex
 {
 	u32 tsar_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {};
 	struct mlx5_esw_rate_group *group;
-	__be32 *attr;
 	u32 divider;
+	void *attr;
 	int err;
 
 	group = kzalloc(sizeof(*group), GFP_KERNEL);
@@ -453,12 +453,10 @@ __esw_qos_create_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *ex
 
 	MLX5_SET(scheduling_context, tsar_ctx, element_type,
 		 SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR);
-
-	attr = MLX5_ADDR_OF(scheduling_context, tsar_ctx, element_attributes);
-	*attr = cpu_to_be32(TSAR_ELEMENT_TSAR_TYPE_DWRR << 16);
-
 	MLX5_SET(scheduling_context, tsar_ctx, parent_element_id, esw->qos.root_tsar_ix);
+	attr = MLX5_ADDR_OF(scheduling_context, tsar_ctx, element_attributes);
+	MLX5_SET(tsar_element, attr, tsar_type, TSAR_ELEMENT_TSAR_TYPE_DWRR);
 	err = mlx5_create_scheduling_element_cmd(esw->dev,
 						 SCHEDULING_HIERARCHY_E_SWITCH,
 						 tsar_ctx,
@@ -559,7 +557,7 @@ static int esw_qos_create(struct mlx5_eswitch *esw, struct netlink_ext_ack *exta
 {
 	u32 tsar_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {};
 	struct mlx5_core_dev *dev = esw->dev;
-	__be32 *attr;
+	void *attr;
 	int err;
 
 	if (!MLX5_CAP_GEN(dev, qos) || !MLX5_CAP_QOS(dev, esw_scheduling))
@@ -573,7 +571,7 @@ static int esw_qos_create(struct mlx5_eswitch *esw, struct netlink_ext_ack *exta
 		 SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR);
 
 	attr = MLX5_ADDR_OF(scheduling_context, tsar_ctx, element_attributes);
-	*attr = cpu_to_be32(TSAR_ELEMENT_TSAR_TYPE_DWRR << 16);
+	MLX5_SET(tsar_element, attr, tsar_type, TSAR_ELEMENT_TSAR_TYPE_DWRR);
 
 	err = mlx5_create_scheduling_element_cmd(dev,
 						 SCHEDULING_HIERARCHY_E_SWITCH,
diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 96d369112bfa..c79ba6197673 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -4105,11 +4105,47 @@ enum {
 	ELEMENT_TYPE_CAP_MASK_QUEUE_GROUP = 1 << 4,
 };
 
+enum {
+	TSAR_ELEMENT_TSAR_TYPE_DWRR = 0x0,
+	TSAR_ELEMENT_TSAR_TYPE_ROUND_ROBIN = 0x1,
+	TSAR_ELEMENT_TSAR_TYPE_ETS = 0x2,
+};
+
+enum {
+	TSAR_TYPE_CAP_MASK_DWRR = 1 << 0,
+	TSAR_TYPE_CAP_MASK_ROUND_ROBIN = 1 << 1,
+	TSAR_TYPE_CAP_MASK_ETS = 1 << 2,
+};
+
+struct mlx5_ifc_tsar_element_bits {
+	u8 reserved_at_0[0x8];
+	u8 tsar_type[0x8];
+	u8 reserved_at_10[0x10];
+};
+
+struct mlx5_ifc_vport_element_bits {
+	u8 reserved_at_0[0x10];
+	u8 vport_number[0x10];
+};
+
+struct mlx5_ifc_vport_tc_element_bits {
+	u8 traffic_class[0x4];
+	u8 reserved_at_4[0xc];
+	u8 vport_number[0x10];
+};
+
+union mlx5_ifc_element_attributes_bits {
+	struct mlx5_ifc_tsar_element_bits tsar;
+	struct mlx5_ifc_vport_element_bits vport;
+	struct mlx5_ifc_vport_tc_element_bits vport_tc;
+	u8 reserved_at_0[0x20];
+};
+
 struct mlx5_ifc_scheduling_context_bits {
 	u8 element_type[0x8];
 	u8 reserved_at_8[0x18];
 
-	u8 element_attributes[0x20];
+	union mlx5_ifc_element_attributes_bits element_attributes;
 
 	u8 parent_element_id[0x20];
@@ -4798,35 +4834,6 @@ struct mlx5_ifc_register_loopback_control_bits {
 	u8 reserved_at_20[0x60];
 };
 
-struct mlx5_ifc_vport_tc_element_bits {
-	u8 traffic_class[0x4];
-	u8 reserved_at_4[0xc];
-	u8 vport_number[0x10];
-};
-
-struct mlx5_ifc_vport_element_bits {
-	u8 reserved_at_0[0x10];
-	u8 vport_number[0x10];
-};
-
-enum {
-	TSAR_ELEMENT_TSAR_TYPE_DWRR = 0x0,
-	TSAR_ELEMENT_TSAR_TYPE_ROUND_ROBIN = 0x1,
-	TSAR_ELEMENT_TSAR_TYPE_ETS = 0x2,
-};
-
-enum {
-	TSAR_TYPE_CAP_MASK_DWRR = 1 << 0,
-	TSAR_TYPE_CAP_MASK_ROUND_ROBIN = 1 << 1,
-	TSAR_TYPE_CAP_MASK_ETS = 1 << 2,
-};
-
-struct mlx5_ifc_tsar_element_bits {
-	u8 reserved_at_0[0x8];
-	u8 tsar_type[0x8];
-	u8 reserved_at_10[0x10];
-};
-
 enum {
 	MLX5_TEARDOWN_HCA_OUT_FORCE_STATE_SUCCESS = 0x0,
 	MLX5_TEARDOWN_HCA_OUT_FORCE_STATE_FAIL = 0x1,

From patchwork Tue Oct 8 18:32:10 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13826787
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Tariq Toukan
Subject: [PATCH net-next 02/14] net/mlx5: qos: Rename vport 'tsar' into 'sched_elem'.
Date: Tue, 8 Oct 2024 21:32:10 +0300
Message-ID: <20241008183222.137702-3-tariqt@nvidia.com>
In-Reply-To: <20241008183222.137702-1-tariqt@nvidia.com>
References: <20241008183222.137702-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Cosmin Ratiu

Vports do not use TSARs (Transmit Scheduling ARbiters), which are used
for grouping multiple entities together. Use the correct name in
variables and functions for clarity.

Also, move the scheduling context into a local variable of the
esw_qos_sched_elem_config function instead of taking it as a parameter
that every caller must supply.

There is no functional change here.
Signed-off-by: Cosmin Ratiu
Signed-off-by: Tariq Toukan
---
 .../mlx5/core/esw/diag/qos_tracepoint.h       | 16 ++++-----
 .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 35 +++++++++----------
 .../net/ethernet/mellanox/mlx5/core/eswitch.h |  6 ++--
 3 files changed, 27 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/qos_tracepoint.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/qos_tracepoint.h
index 1ce332f21ebe..0ebbd699903d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/qos_tracepoint.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/qos_tracepoint.h
@@ -15,14 +15,14 @@ TRACE_EVENT(mlx5_esw_vport_qos_destroy,
 	    TP_ARGS(vport),
 	    TP_STRUCT__entry(__string(devname, dev_name(vport->dev->device))
 			     __field(unsigned short, vport_id)
-			     __field(unsigned int, tsar_ix)
+			     __field(unsigned int, sched_elem_ix)
 			     ),
 	    TP_fast_assign(__assign_str(devname);
 			   __entry->vport_id = vport->vport;
-			   __entry->tsar_ix = vport->qos.esw_tsar_ix;
+			   __entry->sched_elem_ix = vport->qos.esw_sched_elem_ix;
 			   ),
-	    TP_printk("(%s) vport=%hu tsar_ix=%u\n",
-		      __get_str(devname), __entry->vport_id, __entry->tsar_ix
+	    TP_printk("(%s) vport=%hu sched_elem_ix=%u\n",
+		      __get_str(devname), __entry->vport_id, __entry->sched_elem_ix
 		      )
 );
 
@@ -31,20 +31,20 @@ DECLARE_EVENT_CLASS(mlx5_esw_vport_qos_template,
 	    TP_ARGS(vport, bw_share, max_rate),
 	    TP_STRUCT__entry(__string(devname, dev_name(vport->dev->device))
 			     __field(unsigned short, vport_id)
-			     __field(unsigned int, tsar_ix)
+			     __field(unsigned int, sched_elem_ix)
 			     __field(unsigned int, bw_share)
 			     __field(unsigned int, max_rate)
 			     __field(void *, group)
 			     ),
 	    TP_fast_assign(__assign_str(devname);
 			   __entry->vport_id = vport->vport;
-			   __entry->tsar_ix = vport->qos.esw_tsar_ix;
+			   __entry->sched_elem_ix = vport->qos.esw_sched_elem_ix;
 			   __entry->bw_share = bw_share;
 			   __entry->max_rate = max_rate;
 			   __entry->group = vport->qos.group;
 			   ),
-	    TP_printk("(%s) vport=%hu tsar_ix=%u bw_share=%u, max_rate=%u group=%p\n",
-		      __get_str(devname), __entry->vport_id, __entry->tsar_ix,
+	    TP_printk("(%s) vport=%hu sched_elem_ix=%u bw_share=%u, max_rate=%u group=%p\n",
+		      __get_str(devname), __entry->vport_id, __entry->sched_elem_ix,
 		      __entry->bw_share, __entry->max_rate, __entry->group
 		      )
 );
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index 7154eeff4fd4..73127f1dbf6e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -22,9 +22,10 @@ struct mlx5_esw_rate_group {
 	struct list_head list;
 };
 
-static int esw_qos_tsar_config(struct mlx5_core_dev *dev, u32 *sched_ctx,
-			       u32 tsar_ix, u32 max_rate, u32 bw_share)
+static int esw_qos_sched_elem_config(struct mlx5_core_dev *dev, u32 sched_elem_ix,
+				     u32 max_rate, u32 bw_share)
 {
+	u32 sched_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {};
 	u32 bitmask = 0;
 
 	if (!MLX5_CAP_GEN(dev, qos) || !MLX5_CAP_QOS(dev, esw_scheduling))
@@ -38,20 +39,17 @@
 	return mlx5_modify_scheduling_element_cmd(dev,
 						  SCHEDULING_HIERARCHY_E_SWITCH,
 						  sched_ctx,
-						  tsar_ix,
+						  sched_elem_ix,
 						  bitmask);
 }
 
 static int esw_qos_group_config(struct mlx5_eswitch *esw, struct mlx5_esw_rate_group *group,
 				u32 max_rate, u32 bw_share, struct netlink_ext_ack *extack)
 {
-	u32 sched_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {};
 	struct mlx5_core_dev *dev = esw->dev;
 	int err;
 
-	err = esw_qos_tsar_config(dev, sched_ctx,
-				  group->tsar_ix,
-				  max_rate, bw_share);
+	err = esw_qos_sched_elem_config(dev, group->tsar_ix, max_rate, bw_share);
 	if (err)
 		NL_SET_ERR_MSG_MOD(extack, "E-Switch modify group TSAR element failed");
 
@@ -65,20 +63,18 @@ static int esw_qos_vport_config(struct mlx5_eswitch *esw,
 				u32 max_rate, u32 bw_share,
 				struct netlink_ext_ack *extack)
 {
-	u32 sched_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {};
 	struct mlx5_core_dev *dev = esw->dev;
 	int err;
 
 	if (!vport->qos.enabled)
 		return -EIO;
 
-	err = esw_qos_tsar_config(dev, sched_ctx, vport->qos.esw_tsar_ix,
-				  max_rate, bw_share);
+	err = esw_qos_sched_elem_config(dev, vport->qos.esw_sched_elem_ix, max_rate, bw_share);
 	if (err) {
 		esw_warn(esw->dev,
-			 "E-Switch modify TSAR vport element failed (vport=%d,err=%d)\n",
+			 "E-Switch modify vport scheduling element failed (vport=%d,err=%d)\n",
 			 vport->vport, err);
-		NL_SET_ERR_MSG_MOD(extack, "E-Switch modify TSAR vport element failed");
+		NL_SET_ERR_MSG_MOD(extack, "E-Switch modify vport scheduling element failed");
 		return err;
 	}
 
@@ -357,9 +353,10 @@
 	err = mlx5_create_scheduling_element_cmd(dev,
 						 SCHEDULING_HIERARCHY_E_SWITCH,
 						 sched_ctx,
-						 &vport->qos.esw_tsar_ix);
+						 &vport->qos.esw_sched_elem_ix);
 	if (err) {
-		esw_warn(esw->dev, "E-Switch create TSAR vport element failed (vport=%d,err=%d)\n",
+		esw_warn(vport->dev,
+			 "E-Switch create vport scheduling element failed (vport=%d,err=%d)\n",
 			 vport->vport, err);
 		return err;
 	}
@@ -378,9 +375,9 @@ static int esw_qos_update_group_scheduling_element(struct mlx5_eswitch *esw,
 
 	err = mlx5_destroy_scheduling_element_cmd(esw->dev,
 						  SCHEDULING_HIERARCHY_E_SWITCH,
-						  vport->qos.esw_tsar_ix);
+						  vport->qos.esw_sched_elem_ix);
 	if (err) {
-		NL_SET_ERR_MSG_MOD(extack, "E-Switch destroy TSAR vport element failed");
+		NL_SET_ERR_MSG_MOD(extack, "E-Switch destroy vport scheduling element failed");
 		return err;
 	}
 
@@ -683,9 +680,9 @@ void mlx5_esw_qos_vport_disable(struct mlx5_eswitch *esw, struct mlx5_vport *vpo
 
 	err = mlx5_destroy_scheduling_element_cmd(esw->dev,
 						  SCHEDULING_HIERARCHY_E_SWITCH,
-						  vport->qos.esw_tsar_ix);
+						  vport->qos.esw_sched_elem_ix);
 	if (err)
-		esw_warn(esw->dev, "E-Switch destroy TSAR vport element failed (vport=%d,err=%d)\n",
+		esw_warn(esw->dev, "E-Switch destroy vport scheduling element failed (vport=%d,err=%d)\n",
 			 vport->vport, err);
 
 	memset(&vport->qos, 0, sizeof(vport->qos));
@@ -809,7 +806,7 @@ int mlx5_esw_qos_modify_vport_rate(struct mlx5_eswitch *esw, u16 vport_num, u32
 		err = mlx5_modify_scheduling_element_cmd(esw->dev,
 							 SCHEDULING_HIERARCHY_E_SWITCH,
 							 ctx,
-							 vport->qos.esw_tsar_ix,
+							 vport->qos.esw_sched_elem_ix,
 							 bitmask);
 	}
 	mutex_unlock(&esw->state_lock);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index f44b4c7ebcfd..9bf05ae58af0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -213,9 +213,9 @@ struct mlx5_vport {
 	struct mlx5_vport_info info;
 
 	struct {
-		bool enabled;
-		u32 esw_tsar_ix;
-		u32 bw_share;
+		bool enabled;
+		u32 esw_sched_elem_ix;
+		u32 bw_share;
 		u32 min_rate;
 		u32 max_rate;
 		struct mlx5_esw_rate_group *group;

From patchwork Tue Oct 8 18:32:11 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13826786
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Tariq Toukan
Subject: [PATCH net-next 03/14] net/mlx5: qos: Consistently name vport vars as 'vport'
Date: Tue, 8 Oct 2024 21:32:11 +0300
Message-ID: <20241008183222.137702-4-tariqt@nvidia.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20241008183222.137702-1-tariqt@nvidia.com>
References: <20241008183222.137702-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
From: Cosmin Ratiu

The current mixture of 'vport' and 'evport' can be improved.
There is no functional change.

Signed-off-by: Cosmin Ratiu
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 48 +++++++++----------
 1 file changed, 24 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index 73127f1dbf6e..8be4980fcc61 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -88,7 +88,7 @@ static u32 esw_qos_calculate_min_rate_divider(struct mlx5_eswitch *esw,
 					      bool group_level)
 {
 	u32 fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share);
-	struct mlx5_vport *evport;
+	struct mlx5_vport *vport;
 	u32 max_guarantee = 0;
 	unsigned long i;
@@ -101,11 +101,11 @@ static u32 esw_qos_calculate_min_rate_divider(struct mlx5_eswitch *esw,
 			max_guarantee = group->min_rate;
 		}
 	} else {
-		mlx5_esw_for_each_vport(esw, i, evport) {
-			if (!evport->enabled || !evport->qos.enabled ||
-			    evport->qos.group != group || evport->qos.min_rate < max_guarantee)
+		mlx5_esw_for_each_vport(esw, i, vport) {
+			if (!vport->enabled || !vport->qos.enabled ||
+			    vport->qos.group != group || vport->qos.min_rate < max_guarantee)
 				continue;
-			max_guarantee = evport->qos.min_rate;
+			max_guarantee = vport->qos.min_rate;
 		}
 	}
@@ -134,24 +134,24 @@ static int esw_qos_normalize_vports_min_rate(struct mlx5_eswitch *esw,
 {
 	u32 fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share);
 	u32 divider = esw_qos_calculate_min_rate_divider(esw, group, false);
-	struct mlx5_vport *evport;
+	struct mlx5_vport *vport;
 	unsigned long i;
 	u32 bw_share;
 	int err;
-	mlx5_esw_for_each_vport(esw, i, evport) {
-		if (!evport->enabled || !evport->qos.enabled || evport->qos.group != group)
+	mlx5_esw_for_each_vport(esw, i, vport) {
+		if (!vport->enabled || !vport->qos.enabled || vport->qos.group != group)
 			continue;
-		bw_share = esw_qos_calc_bw_share(evport->qos.min_rate, divider, fw_max_bw_share);
+		bw_share = esw_qos_calc_bw_share(vport->qos.min_rate, divider, fw_max_bw_share);
-		if (bw_share == evport->qos.bw_share)
+		if (bw_share == vport->qos.bw_share)
 			continue;
-		err = esw_qos_vport_config(esw, evport, evport->qos.max_rate, bw_share, extack);
+		err = esw_qos_vport_config(esw, vport, vport->qos.max_rate, bw_share, extack);
 		if (err)
 			return err;
-		evport->qos.bw_share = bw_share;
+		vport->qos.bw_share = bw_share;
 	}
 	return 0;
@@ -189,7 +189,7 @@ static int esw_qos_normalize_groups_min_rate(struct mlx5_eswitch *esw, u32 divid
 	return 0;
 }
-static int esw_qos_set_vport_min_rate(struct mlx5_eswitch *esw, struct mlx5_vport *evport,
+static int esw_qos_set_vport_min_rate(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
 				      u32 min_rate, struct netlink_ext_ack *extack)
 {
 	u32 fw_max_bw_share, previous_min_rate;
@@ -202,19 +202,19 @@ static int esw_qos_set_vport_min_rate(struct mlx5_eswitch *esw, struct mlx5_vpor
 		fw_max_bw_share >= MLX5_MIN_BW_SHARE;
 	if (min_rate && !min_rate_supported)
 		return -EOPNOTSUPP;
-	if (min_rate == evport->qos.min_rate)
+	if (min_rate == vport->qos.min_rate)
 		return 0;
-	previous_min_rate = evport->qos.min_rate;
-	evport->qos.min_rate = min_rate;
-	err = esw_qos_normalize_vports_min_rate(esw, evport->qos.group, extack);
+	previous_min_rate = vport->qos.min_rate;
+	vport->qos.min_rate = min_rate;
+	err = esw_qos_normalize_vports_min_rate(esw, vport->qos.group, extack);
 	if (err)
-		evport->qos.min_rate = previous_min_rate;
+		vport->qos.min_rate = previous_min_rate;
 	return err;
 }
-static int esw_qos_set_vport_max_rate(struct mlx5_eswitch *esw, struct mlx5_vport *evport,
+static int esw_qos_set_vport_max_rate(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
 				      u32 max_rate, struct netlink_ext_ack *extack)
 {
 	u32 act_max_rate = max_rate;
@@ -226,19 +226,19 @@ static int esw_qos_set_vport_max_rate(struct mlx5_eswitch *esw, struct mlx5_vpor
 	if (max_rate && !max_rate_supported)
 		return -EOPNOTSUPP;
-	if (max_rate == evport->qos.max_rate)
+	if (max_rate == vport->qos.max_rate)
 		return 0;
 	/* If parent group has rate limit need to set to group
 	 * value when new max rate is 0.
 	 */
-	if (evport->qos.group && !max_rate)
-		act_max_rate = evport->qos.group->max_rate;
+	if (vport->qos.group && !max_rate)
+		act_max_rate = vport->qos.group->max_rate;
-	err = esw_qos_vport_config(esw, evport, act_max_rate, evport->qos.bw_share, extack);
+	err = esw_qos_vport_config(esw, vport, act_max_rate, vport->qos.bw_share, extack);
 	if (!err)
-		evport->qos.max_rate = max_rate;
+		vport->qos.max_rate = max_rate;
 	return err;
 }

From patchwork Tue Oct 8 18:32:12 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13826789
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Tariq Toukan
Subject: [PATCH net-next 04/14] net/mlx5: qos: Refactor and document bw_share calculation
Date: Tue, 8 Oct 2024 21:32:12 +0300
Message-ID: <20241008183222.137702-5-tariqt@nvidia.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20241008183222.137702-1-tariqt@nvidia.com>
References: <20241008183222.137702-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
From: Cosmin Ratiu

The previous function (esw_qos_calculate_min_rate_divider) had two
completely different modes of execution, depending on the 'group_level'
parameter. Split it into two separate functions:
- esw_qos_calculate_min_rate_divider - computes min across groups.
- esw_qos_calculate_group_min_rate_divider - computes min in a group.

Fold the divider calculation into the corresponding normalize functions
to avoid having the caller compute the corresponding divider. Also
rename the normalize functions to better indicate what level they're
operating on. Finally, document everything so that this topic can more
easily be understood by future maintainers.
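The divider/bw_share arithmetic this patch documents can be sketched outside the kernel. The following is an illustrative Python model of the calculation only, not the driver's API; function names mirror the patch, MLX5_MIN_BW_SHARE is 1 as in qos.c, and fw_max_bw_share (a device capability) is assumed to be 100 for the example:

```python
import math

MLX5_MIN_BW_SHARE = 1  # minimum bw_share value supported by the HW


def calculate_min_rate_divider(min_rates, fw_max_bw_share):
    # The divider scales the largest configured min_rate down to
    # fw_max_bw_share; a divider of 0 disables min-rate guarantees.
    max_guarantee = max(min_rates, default=0)
    if max_guarantee:
        return max(max_guarantee // fw_max_bw_share, 1)
    return 0


def calc_bw_share(min_rate, divider, fw_max):
    # DIV_ROUND_UP(min_rate, divider), clamped to [MLX5_MIN_BW_SHARE, fw_max].
    if not divider:
        return 0
    return min(max(math.ceil(min_rate / divider), MLX5_MIN_BW_SHARE), fw_max)


# Example: three vports in one group, min_rates in Mbit/sec.
rates = [10000, 5000, 2500]
divider = calculate_min_rate_divider(rates, 100)
shares = [calc_bw_share(r, divider, 100) for r in rates]
print(divider, shares)  # the top vport maps to fw_max_bw_share, others scale down
```

The key property, as the added comments in the patch note, is that the vport with the largest min_rate ends up with bw_share equal to fw_max_bw_share, and every other member gets a proportionally smaller share, never below the HW minimum of 1.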
Signed-off-by: Cosmin Ratiu
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 134 +++++++++---------
 .../net/ethernet/mellanox/mlx5/core/eswitch.h |   3 +-
 2 files changed, 71 insertions(+), 66 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index 8be4980fcc61..a8231a498ed6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -11,13 +11,13 @@
 /* Minimum supported BW share value by the HW is 1 Mbit/sec */
 #define MLX5_MIN_BW_SHARE 1
-#define MLX5_RATE_TO_BW_SHARE(rate, divider, limit) \
-	min_t(u32, max_t(u32, DIV_ROUND_UP(rate, divider), MLX5_MIN_BW_SHARE), limit)
 struct mlx5_esw_rate_group {
 	u32 tsar_ix;
+	/* Bandwidth parameters. */
 	u32 max_rate;
 	u32 min_rate;
+	/* A computed value indicating relative min_rate between group members. */
 	u32 bw_share;
 	struct list_head list;
 };
@@ -83,57 +83,77 @@ static int esw_qos_vport_config(struct mlx5_eswitch *esw,
 	return 0;
 }
-static u32 esw_qos_calculate_min_rate_divider(struct mlx5_eswitch *esw,
-					      struct mlx5_esw_rate_group *group,
-					      bool group_level)
+static u32 esw_qos_calculate_group_min_rate_divider(struct mlx5_eswitch *esw,
+						    struct mlx5_esw_rate_group *group)
 {
 	u32 fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share);
 	struct mlx5_vport *vport;
 	u32 max_guarantee = 0;
 	unsigned long i;
-	if (group_level) {
-		struct mlx5_esw_rate_group *group;
-		list_for_each_entry(group, &esw->qos.groups, list) {
-			if (group->min_rate < max_guarantee)
-				continue;
-			max_guarantee = group->min_rate;
-		}
-	} else {
-		mlx5_esw_for_each_vport(esw, i, vport) {
-			if (!vport->enabled || !vport->qos.enabled ||
-			    vport->qos.group != group || vport->qos.min_rate < max_guarantee)
-				continue;
-			max_guarantee = vport->qos.min_rate;
-		}
+	/* Find max min_rate across all vports in this group.
+	 * This will correspond to fw_max_bw_share in the final bw_share calculation.
+	 */
+	mlx5_esw_for_each_vport(esw, i, vport) {
+		if (!vport->enabled || !vport->qos.enabled ||
+		    vport->qos.group != group || vport->qos.min_rate < max_guarantee)
+			continue;
+		max_guarantee = vport->qos.min_rate;
 	}
 	if (max_guarantee)
 		return max_t(u32, max_guarantee / fw_max_bw_share, 1);
-	/* If vports min rate divider is 0 but their group has bw_share configured, then
-	 * need to set bw_share for vports to minimal value.
+	/* If vports max min_rate divider is 0 but their group has bw_share
+	 * configured, then set bw_share for vports to minimal value.
 	 */
-	if (!group_level && !max_guarantee && group && group->bw_share)
+	if (group && group->bw_share)
 		return 1;
+
+	/* A divider of 0 sets bw_share for all group vports to 0,
+	 * effectively disabling min guarantees.
+	 */
 	return 0;
 }
-static u32 esw_qos_calc_bw_share(u32 min_rate, u32 divider, u32 fw_max)
+static u32 esw_qos_calculate_min_rate_divider(struct mlx5_eswitch *esw)
 {
-	if (divider)
-		return MLX5_RATE_TO_BW_SHARE(min_rate, divider, fw_max);
+	u32 fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share);
+	struct mlx5_esw_rate_group *group;
+	u32 max_guarantee = 0;
+
+	/* Find max min_rate across all esw groups.
+	 * This will correspond to fw_max_bw_share in the final bw_share calculation.
+	 */
+	list_for_each_entry(group, &esw->qos.groups, list) {
+		if (group->min_rate < max_guarantee)
+			continue;
+		max_guarantee = group->min_rate;
+	}
+	if (max_guarantee)
+		return max_t(u32, max_guarantee / fw_max_bw_share, 1);
+
+	/* If no group has min_rate configured, a divider of 0 sets all
+	 * groups' bw_share to 0, effectively disabling min guarantees.
+	 */
 	return 0;
 }
-static int esw_qos_normalize_vports_min_rate(struct mlx5_eswitch *esw,
-					     struct mlx5_esw_rate_group *group,
-					     struct netlink_ext_ack *extack)
+static u32 esw_qos_calc_bw_share(u32 min_rate, u32 divider, u32 fw_max)
+{
+	if (!divider)
+		return 0;
+	return min_t(u32, max_t(u32, DIV_ROUND_UP(min_rate, divider), MLX5_MIN_BW_SHARE), fw_max);
+}
+
+static int esw_qos_normalize_group_min_rate(struct mlx5_eswitch *esw,
+					    struct mlx5_esw_rate_group *group,
+					    struct netlink_ext_ack *extack)
 {
 	u32 fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share);
-	u32 divider = esw_qos_calculate_min_rate_divider(esw, group, false);
+	u32 divider = esw_qos_calculate_group_min_rate_divider(esw, group);
 	struct mlx5_vport *vport;
 	unsigned long i;
 	u32 bw_share;
@@ -157,10 +177,10 @@ static int esw_qos_normalize_vports_min_rate(struct mlx5_eswitch *esw,
 	return 0;
 }
-static int esw_qos_normalize_groups_min_rate(struct mlx5_eswitch *esw, u32 divider,
-					     struct netlink_ext_ack *extack)
+static int esw_qos_normalize_min_rate(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack)
 {
 	u32 fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share);
+	u32 divider = esw_qos_calculate_min_rate_divider(esw);
 	struct mlx5_esw_rate_group *group;
 	u32 bw_share;
 	int err;
@@ -180,7 +200,7 @@ static int esw_qos_normalize_groups_min_rate(struct mlx5_eswitch *esw, u32 divid
 	/* All the group's vports need to be set with default bw_share
 	 * to enable them with QOS
 	 */
-	err = esw_qos_normalize_vports_min_rate(esw, group, extack);
+	err = esw_qos_normalize_group_min_rate(esw, group, extack);
 	if (err)
 		return err;
@@ -207,7 +227,7 @@ static int esw_qos_set_vport_min_rate(struct mlx5_eswitch *esw, struct mlx5_vpor
 	previous_min_rate = vport->qos.min_rate;
 	vport->qos.min_rate = min_rate;
-	err = esw_qos_normalize_vports_min_rate(esw, vport->qos.group, extack);
+	err = esw_qos_normalize_group_min_rate(esw, vport->qos.group, extack);
 	if (err)
 		vport->qos.min_rate = previous_min_rate;
@@ -229,9 +249,7 @@ static int esw_qos_set_vport_max_rate(struct mlx5_eswitch *esw, struct mlx5_vpor
 	if (max_rate == vport->qos.max_rate)
 		return 0;
-	/* If parent group has rate limit need to set to group
-	 * value when new max rate is 0.
-	 */
+	/* Use parent group limit if new max rate is 0. */
 	if (vport->qos.group && !max_rate)
 		act_max_rate = vport->qos.group->max_rate;
@@ -248,10 +266,10 @@ static int esw_qos_set_group_min_rate(struct mlx5_eswitch *esw, struct mlx5_esw_
 {
 	u32 fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share);
 	struct mlx5_core_dev *dev = esw->dev;
-	u32 previous_min_rate, divider;
+	u32 previous_min_rate;
 	int err;
-	if (!(MLX5_CAP_QOS(dev, esw_bw_share) && fw_max_bw_share >= MLX5_MIN_BW_SHARE))
+	if (!MLX5_CAP_QOS(dev, esw_bw_share) || fw_max_bw_share < MLX5_MIN_BW_SHARE)
 		return -EOPNOTSUPP;
 	if (min_rate == group->min_rate)
@@ -259,15 +277,13 @@ static int esw_qos_set_group_min_rate(struct mlx5_eswitch *esw, struct mlx5_esw_
 	previous_min_rate = group->min_rate;
 	group->min_rate = min_rate;
-	divider = esw_qos_calculate_min_rate_divider(esw, group, true);
-	err = esw_qos_normalize_groups_min_rate(esw, divider, extack);
+	err = esw_qos_normalize_min_rate(esw, extack);
 	if (err) {
-		group->min_rate = previous_min_rate;
 		NL_SET_ERR_MSG_MOD(extack, "E-Switch group min rate setting failed");
 		/* Attempt restoring previous configuration */
-		divider = esw_qos_calculate_min_rate_divider(esw, group, true);
-		if (esw_qos_normalize_groups_min_rate(esw, divider, extack))
+		group->min_rate = previous_min_rate;
+		if (esw_qos_normalize_min_rate(esw, extack))
 			NL_SET_ERR_MSG_MOD(extack, "E-Switch BW share restore failed");
 	}
@@ -291,9 +307,7 @@ static int esw_qos_set_group_max_rate(struct mlx5_eswitch *esw,
 	group->max_rate = max_rate;
-	/* Any unlimited vports in the group should be set
-	 * with the value of the group.
-	 */
+	/* Any unlimited vports in the group should be set with the value of the group. */
 	mlx5_esw_for_each_vport(esw, i, vport) {
 		if (!vport->enabled || !vport->qos.enabled ||
 		    vport->qos.group != group || vport->qos.max_rate)
@@ -382,12 +396,8 @@ static int esw_qos_update_group_scheduling_element(struct mlx5_eswitch *esw,
 	}
 	vport->qos.group = new_group;
+	/* Use new group max rate if vport max rate is unlimited. */
 	max_rate = vport->qos.max_rate ? vport->qos.max_rate : new_group->max_rate;
-
-	/* If vport is unlimited, we set the group's value.
-	 * Therefore, if the group is limited it will apply to
-	 * the vport as well and if not, vport will remain unlimited.
-	 */
 	err = esw_qos_vport_create_sched_element(esw, vport, max_rate, vport->qos.bw_share);
 	if (err) {
 		NL_SET_ERR_MSG_MOD(extack, "E-Switch vport group set failed.");
@@ -428,8 +438,8 @@ static int esw_qos_vport_update_group(struct mlx5_eswitch *esw,
 	/* Recalculate bw share weights of old and new groups */
 	if (vport->qos.bw_share || new_group->bw_share) {
-		esw_qos_normalize_vports_min_rate(esw, curr_group, extack);
-		esw_qos_normalize_vports_min_rate(esw, new_group, extack);
+		esw_qos_normalize_group_min_rate(esw, curr_group, extack);
+		esw_qos_normalize_group_min_rate(esw, new_group, extack);
 	}
 	return 0;
@@ -440,7 +450,6 @@ __esw_qos_create_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *ex
 {
 	u32 tsar_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {};
 	struct mlx5_esw_rate_group *group;
-	u32 divider;
 	void *attr;
 	int err;
@@ -465,13 +474,10 @@ __esw_qos_create_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *ex
 	list_add_tail(&group->list, &esw->qos.groups);
-	divider = esw_qos_calculate_min_rate_divider(esw, group, true);
-	if (divider) {
-		err = esw_qos_normalize_groups_min_rate(esw, divider, extack);
-		if (err) {
-			NL_SET_ERR_MSG_MOD(extack, "E-Switch groups normalization failed");
-			goto err_min_rate;
-		}
+	err = esw_qos_normalize_min_rate(esw, extack);
+	if (err) {
+		NL_SET_ERR_MSG_MOD(extack, "E-Switch groups normalization failed");
+		goto err_min_rate;
 	}
 	trace_mlx5_esw_group_qos_create(esw->dev, group, group->tsar_ix);
@@ -515,15 +521,13 @@ static int __esw_qos_destroy_rate_group(struct mlx5_eswitch *esw,
 					struct mlx5_esw_rate_group *group,
 					struct netlink_ext_ack *extack)
 {
-	u32 divider;
 	int err;
 	list_del(&group->list);
-	divider = esw_qos_calculate_min_rate_divider(esw, NULL, true);
-	err = esw_qos_normalize_groups_min_rate(esw, divider, extack);
+	err = esw_qos_normalize_min_rate(esw, extack);
 	if (err)
-		NL_SET_ERR_MSG_MOD(extack, "E-Switch groups' normalization failed");
+		NL_SET_ERR_MSG_MOD(extack, "E-Switch groups normalization failed");
 	err = mlx5_destroy_scheduling_element_cmd(esw->dev,
 						  SCHEDULING_HIERARCHY_E_SWITCH,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 9bf05ae58af0..ce857eae6898 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -215,9 +215,10 @@ struct mlx5_vport {
 	struct {
 		bool enabled;
 		u32 esw_sched_elem_ix;
-		u32 bw_share;
 		u32 min_rate;
 		u32 max_rate;
+		/* A computed value indicating relative min_rate between vports in a group. */
+		u32 bw_share;
 		struct mlx5_esw_rate_group *group;
 	} qos;

From patchwork Tue Oct 8 18:32:13 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13826788
X-Patchwork-Delegate: kuba@kernel.org
header.b="Y4V2IgEC" ARC-Seal: i=1; a=rsa-sha256; s=arcselector10001; d=microsoft.com; cv=none; b=R3+euzyWER8qHMVQGF9y0As6I+PxmLx+9Lcg/Z9ZRQlqf4yNvLJG3FCSEZf+HI5ZA3wfZNtG8f+xbPOnkifIgQweKmdUvFE3xPiXhcu8hhQ42JHvljD615Ok9+DeIstkLg1QhR2yDHIGyYjXMjYyYHQtbzYu5uvMvZLclqNcJnHkODMhAP/M2NIbx/H1Nry1NBF5LWY4y+7cFcHwSSWiHaXeroTApRiwv4G+XcR9lMrmVTxhh92W9WIQBnh4YwC2gC9oZXDEFiJLApFLS3IOiTbBS0pwtrhJN/Lfk886JxaBfdGGG9hlzc1Z7mOasc1DWlnvdjDrQqQaez3DODUXaA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector10001; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=fHm7vGy76Chz5x4fA8zmunWKwrTt27pwupjcMRXfayw=; b=XDudidoCTo7GJKljK5WEAtLIADqiNv+IQjI6H08L9Dzs4tg51RynOvjOZ+tNq0i5HMyZAWu+H13pHdzuQtG2D6dHPf6GPC+3lbf194CqpPomnSKFKdOzm4rhBNSTM6bCv2WzB6N+7s6natp4U1dj1xK1V8PaNZ+/ZZo9oA0vg06VBHqumuqcbiSj+GIOs7bJD+TEVOlMmVhZAiBDHqwaQmymqTdCvDR+UgpoWygmKh8k8VpW6zngG0MW7bl/MwiM1pw+S7M7Xmc++42U5dznYMH58oEIFveBwWdJaxDLvKmlQIeSrCYg91JXsfmzWX6njFYFjEh/s9JB1lDr2MahOA== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.117.161) smtp.rcpttodomain=davemloft.net smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none (0) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=fHm7vGy76Chz5x4fA8zmunWKwrTt27pwupjcMRXfayw=; b=Y4V2IgEC+qyzVI5krBbTPWKs53kf/jvEZt74jkBK10tpffDzyhiogmhrkOWMKFmVaD87U5c5q/WF8FxfSBconk6VHRrhTJhudu2sSpv9yXcTHJ4t4YYP1+g25PGBK46HS6v9TrJ2T4ZNAJHfOKrGC1tCIW0NtOs1g5rdZu38ZtCMbiTFD4H0K5yUvdAnt3pbCUeFFJxUc4E085w1LWvSkwlFYN7FA3zx8TAwJ/6sDkK/noVyH+p0CUW7n/umvFA9mhQ8Yt3NCzeqpbOJpTZKU2Xsy3IscI/+8YOmQ8iMfsopY1ESf/u7Qkihc7tB0T7WCzgNteDqhTTPJ/FO9Uo58g== Received: from 
PH1PEPF000132FA.NAMP220.PROD.OUTLOOK.COM (2603:10b6:518:1::2b) by MN2PR12MB4128.namprd12.prod.outlook.com (2603:10b6:208:1dd::15) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.8026.23; Tue, 8 Oct 2024 18:33:26 +0000 Received: from CY4PEPF0000E9D9.namprd05.prod.outlook.com (2a01:111:f403:f912::5) by PH1PEPF000132FA.outlook.office365.com (2603:1036:903:47::3) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.8026.23 via Frontend Transport; Tue, 8 Oct 2024 18:33:26 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 216.228.117.161) smtp.mailfrom=nvidia.com; dkim=none (message not signed) header.d=none;dmarc=pass action=none header.from=nvidia.com; Received-SPF: Pass (protection.outlook.com: domain of nvidia.com designates 216.228.117.161 as permitted sender) receiver=protection.outlook.com; client-ip=216.228.117.161; helo=mail.nvidia.com; pr=C Received: from mail.nvidia.com (216.228.117.161) by CY4PEPF0000E9D9.mail.protection.outlook.com (10.167.241.72) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.8048.13 via Frontend Transport; Tue, 8 Oct 2024 18:33:25 +0000 Received: from rnnvmail205.nvidia.com (10.129.68.10) by mail.nvidia.com (10.129.200.67) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Tue, 8 Oct 2024 11:33:13 -0700 Received: from rnnvmail204.nvidia.com (10.129.68.6) by rnnvmail205.nvidia.com (10.129.68.10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Tue, 8 Oct 2024 11:33:13 -0700 Received: from vdi.nvidia.com (10.127.8.10) by mail.nvidia.com (10.129.68.6) with Microsoft SMTP Server id 15.2.1544.4 via Frontend Transport; Tue, 8 Oct 2024 11:33:10 -0700 From: Tariq Toukan To: "David S. 
Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , , , Tariq Toukan Subject: [PATCH net-next 05/14] net/mlx5: qos: Maintain rate group vport members in a list Date: Tue, 8 Oct 2024 21:32:13 +0300 Message-ID: <20241008183222.137702-6-tariqt@nvidia.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20241008183222.137702-1-tariqt@nvidia.com> References: <20241008183222.137702-1-tariqt@nvidia.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-NV-OnPremToCloud: ExternallySecured X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: CY4PEPF0000E9D9:EE_|MN2PR12MB4128:EE_ X-MS-Office365-Filtering-Correlation-Id: 41688264-263f-4ffd-7c9a-08dce7c7b241 X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0;ARA:13230040|376014|82310400026|1800799024|36860700013; X-Microsoft-Antispam-Message-Info: YF/WG0T06uffcdbQfDJoPqx3e6j4wPWArnLKJPOk9AeicQyPWrW0qvrVl+ahcJW4z4LHAtjN23q0iNWfCmgYEub0XVk1pTTv4zJoJDBIXXE6JBBPcbEFpqshrAx32SwOoQO58ZU5dnS7qFprvu91A0FDz7uIsY+xMuNRy3vfTn7XSFLbpQQsUXEzHOGFjTtc3s3VX+rGflEr2Nv9uiNnEB5IxxbeUAN/qoIvsJ2fR+6WGhOVnuaqwRj4fzDm9UAiqwPvWdHh1KZNPCmzZQX7RVlzR8YYIJbNXTn1tv1c1dmfV/zxRPBJfOytvhkb/lWMTSWkrLzRZ3p+yrw2+Ow4+LrDNHmBeIlhLJcNARM7aoN4iwvJ5UauA7SikKp6RTEQf1KiMkLT0HYWoVoohmqnHfn2Vp8SCzhKJkwiLtScUcMa/x1Hr5UyaeqNhNtuSXJr2dbusehfiUFjeN6u91xoABO7o47PuAYEGkEnUluXPNM+7ME0muMvW5Y+UnPKqwA42Ut3kToZUSaWXxFDz0KmuBzdy/snWIB5uEw9YyEZzDji5dGLIna/4O2pThVEJtww3DeEVi75ytLoeEdvgj5msGHYWUKhUStLoICm3zo5www2JxE6ko19SIbq/VoYiKm7O0Np/FnWVvCFtLsyUWoeDRy4J/PnUFPXGL/ST50uJzef0aKuJ8Epg2EWNMT3naUtC2KRBqyRTFSURR0qBJMaMT/Xc3LXNqUpJosrkLKtgCJHzyaMtj/dySiUjXG098fJajGyCkAnv/9uXVAmdkVGo/4nKG4w3dSxc71W5zHYHVEsqisQ54kdRPhYiFWH9SPIbZhKSdTrbyMQm07ZXIpOGmQNf1j9Adk/6OPz1xSsZG6peCsYG+jNWrByUgpbF8lvcHauooIxVNDaOkugKUhC3qR0XjlVN+AaWBNNv7tEhkNhxd81uM2lBhPzIeG4koxZ2UsAL0ZUvZAXx/WyjL1/2UL4O61PTc4uJBI1PPjZdZ9TDeN
o1y+SflvDgx3RYDBmi2ouUmyqa3Zyz1TfmhOgXcgNUF27hgbnuD4OCcI9zNg6IcaWF6pnHU960ouWeRieK3PxJVoOAjMdcfHS/afGHvgH9fpKLjNEx6/aVmfdKyJ183IpJzz4jYkfCN3mNzUvM2v2dJrqzFQagC6u6cHaHcPys5UzUfjOvcXqubYzJF5X2nIzJV93TDyl0006ZDXaq4n+Ad8VXu10yki+HDjQXiJ8ZyGFXoT8jeK7Pbya6ImH5bquUcx59q+Rwljk/mQ9Iwtp29V3RG/GnoguN6CY3wV/zvB5UFYxZy172Yv7hBzMAlFcTeDSKGm70wOuP3ieH7cUt7GYADTIrGIcT/X/vbDQ4blG2G+AbXwv4d99xBKCwVGZs/xreA3zmfBbpc9c X-Forefront-Antispam-Report: CIP:216.228.117.161;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:mail.nvidia.com;PTR:dc6edge2.nvidia.com;CAT:NONE;SFS:(13230040)(376014)(82310400026)(1800799024)(36860700013);DIR:OUT;SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 Oct 2024 18:33:25.3974 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 41688264-263f-4ffd-7c9a-08dce7c7b241 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a;Ip=[216.228.117.161];Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: CY4PEPF0000E9D9.namprd05.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR12MB4128 X-Patchwork-Delegate: kuba@kernel.org From: Cosmin Ratiu Previously, finding group members was done by iterating over all vports of an eswitch and comparing their group with the required one, but that approach will break down when a group can contain vports from multiple eswitches. Solve that by maintaining a list of vport members. Instead of iterating over esw vports, loop over the members list. Use this opportunity to provide two new functions to allocate and free a group, so that the number of state transitions is smaller. This will also be used in a future patch. 
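[Reviewer note: the regrouping scheme described above boils down to the kernel's intrusive-list pattern. Below is a minimal userspace sketch of it; the list helpers stand in for `<linux/list.h>` primitives, and `rate_group`/`vport` are illustrative stand-ins for `mlx5_esw_rate_group`/`mlx5_vport`, not the driver's actual code.]

```c
#include <assert.h>

/* Minimal stand-ins for the kernel's intrusive list primitives
 * (INIT_LIST_HEAD / list_add_tail / list_del_init). */
struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

static void list_del_init(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	list_init(n);
}

/* Illustrative structs mirroring the shape of the patched driver types. */
struct rate_group {
	struct list_head members;	/* vport members of this group */
};

struct vport {
	struct rate_group *group;
	struct list_head group_entry;	/* linkage in group->members */
};

/* Mirrors the esw_qos_vport_set_group() idea: unlink from the current
 * group's member list, then append to the new group's list. Because the
 * entry is always self-initialized, unlinking is safe even on first use. */
static void vport_set_group(struct vport *v, struct rate_group *g)
{
	list_del_init(&v->group_entry);
	v->group = g;
	list_add_tail(&v->group_entry, &g->members);
}

/* Walk only the group's members, as the reworked normalization loops do,
 * instead of scanning every vport of the eswitch. */
static int group_size(const struct rate_group *g)
{
	const struct list_head *p;
	int n = 0;

	for (p = g->members.next; p != &g->members; p = p->next)
		n++;
	return n;
}
```

Because the linkage is embedded in the vport itself, moving a vport between groups is O(1), needs no allocation, and member iteration no longer depends on which eswitch a vport belongs to.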
Signed-off-by: Cosmin Ratiu
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 94 +++++++++++--------
 .../net/ethernet/mellanox/mlx5/core/eswitch.h |  1 +
 2 files changed, 58 insertions(+), 37 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index a8231a498ed6..cfff1413dcfc 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -20,8 +20,17 @@ struct mlx5_esw_rate_group {
 	/* A computed value indicating relative min_rate between group members. */
 	u32 bw_share;
 	struct list_head list;
+	/* Vport members of this group.*/
+	struct list_head members;
 };
 
+static void esw_qos_vport_set_group(struct mlx5_vport *vport, struct mlx5_esw_rate_group *group)
+{
+	list_del_init(&vport->qos.group_entry);
+	vport->qos.group = group;
+	list_add_tail(&vport->qos.group_entry, &group->members);
+}
+
 static int esw_qos_sched_elem_config(struct mlx5_core_dev *dev, u32 sched_elem_ix,
 				     u32 max_rate, u32 bw_share)
 {
@@ -89,17 +98,13 @@ static u32 esw_qos_calculate_group_min_rate_divider(struct mlx5_eswitch *esw,
 	u32 fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share);
 	struct mlx5_vport *vport;
 	u32 max_guarantee = 0;
-	unsigned long i;
-
 	/* Find max min_rate across all vports in this group.
 	 * This will correspond to fw_max_bw_share in the final bw_share calculation.
 	 */
-	mlx5_esw_for_each_vport(esw, i, vport) {
-		if (!vport->enabled || !vport->qos.enabled ||
-		    vport->qos.group != group || vport->qos.min_rate < max_guarantee)
-			continue;
-		max_guarantee = vport->qos.min_rate;
+	list_for_each_entry(vport, &group->members, qos.group_entry) {
+		if (vport->qos.min_rate > max_guarantee)
+			max_guarantee = vport->qos.min_rate;
 	}
 
 	if (max_guarantee)
@@ -155,13 +160,10 @@ static int esw_qos_normalize_group_min_rate(struct mlx5_eswitch *esw,
 	u32 fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share);
 	u32 divider = esw_qos_calculate_group_min_rate_divider(esw, group);
 	struct mlx5_vport *vport;
-	unsigned long i;
 	u32 bw_share;
 	int err;
 
-	mlx5_esw_for_each_vport(esw, i, vport) {
-		if (!vport->enabled || !vport->qos.enabled || vport->qos.group != group)
-			continue;
+	list_for_each_entry(vport, &group->members, qos.group_entry) {
 		bw_share = esw_qos_calc_bw_share(vport->qos.min_rate, divider, fw_max_bw_share);
 
 		if (bw_share == vport->qos.bw_share)
@@ -295,7 +297,6 @@ static int esw_qos_set_group_max_rate(struct mlx5_eswitch *esw,
 				      u32 max_rate, struct netlink_ext_ack *extack)
 {
 	struct mlx5_vport *vport;
-	unsigned long i;
 	int err;
 
 	if (group->max_rate == max_rate)
@@ -308,9 +309,8 @@ static int esw_qos_set_group_max_rate(struct mlx5_eswitch *esw,
 	group->max_rate = max_rate;
 
 	/* Any unlimited vports in the group should be set with the value of the group. */
-	mlx5_esw_for_each_vport(esw, i, vport) {
-		if (!vport->enabled || !vport->qos.enabled ||
-		    vport->qos.group != group || vport->qos.max_rate)
+	list_for_each_entry(vport, &group->members, qos.group_entry) {
+		if (vport->qos.max_rate)
 			continue;
 
 		err = esw_qos_vport_config(esw, vport, max_rate, vport->qos.bw_share, extack);
@@ -395,7 +395,7 @@ static int esw_qos_update_group_scheduling_element(struct mlx5_eswitch *esw,
 		return err;
 	}
 
-	vport->qos.group = new_group;
+	esw_qos_vport_set_group(vport, new_group);
 	/* Use new group max rate if vport max rate is unlimited. */
 	max_rate = vport->qos.max_rate ? vport->qos.max_rate : new_group->max_rate;
 	err = esw_qos_vport_create_sched_element(esw, vport, max_rate, vport->qos.bw_share);
@@ -407,7 +407,7 @@ static int esw_qos_update_group_scheduling_element(struct mlx5_eswitch *esw,
 	return 0;
 
 err_sched:
-	vport->qos.group = curr_group;
+	esw_qos_vport_set_group(vport, curr_group);
 	max_rate = vport->qos.max_rate ? vport->qos.max_rate : curr_group->max_rate;
 	if (esw_qos_vport_create_sched_element(esw, vport, max_rate, vport->qos.bw_share))
 		esw_warn(esw->dev, "E-Switch vport group restore failed (vport=%d)\n",
@@ -446,16 +446,33 @@ static int esw_qos_vport_update_group(struct mlx5_eswitch *esw,
 }
 
 static struct mlx5_esw_rate_group *
-__esw_qos_create_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack)
+__esw_qos_alloc_rate_group(struct mlx5_eswitch *esw, u32 tsar_ix)
 {
-	u32 tsar_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {};
 	struct mlx5_esw_rate_group *group;
-	void *attr;
-	int err;
 
 	group = kzalloc(sizeof(*group), GFP_KERNEL);
 	if (!group)
-		return ERR_PTR(-ENOMEM);
+		return NULL;
+
+	group->tsar_ix = tsar_ix;
+	INIT_LIST_HEAD(&group->members);
+	list_add_tail(&group->list, &esw->qos.groups);
+	return group;
+}
+
+static void __esw_qos_free_rate_group(struct mlx5_esw_rate_group *group)
+{
+	list_del(&group->list);
+	kfree(group);
+}
+
+static struct mlx5_esw_rate_group *
+__esw_qos_create_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack)
+{
+	u32 tsar_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {};
+	struct mlx5_esw_rate_group *group;
+	int tsar_ix, err;
+	void *attr;
 
 	MLX5_SET(scheduling_context, tsar_ctx, element_type,
 		 SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR);
@@ -466,13 +483,18 @@ __esw_qos_create_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *ex
 	err = mlx5_create_scheduling_element_cmd(esw->dev,
 						 SCHEDULING_HIERARCHY_E_SWITCH,
 						 tsar_ctx,
-						 &group->tsar_ix);
+						 &tsar_ix);
 	if (err) {
 		NL_SET_ERR_MSG_MOD(extack, "E-Switch create TSAR for group failed");
-		goto err_sched_elem;
+		return ERR_PTR(err);
 	}
 
-	list_add_tail(&group->list, &esw->qos.groups);
+	group = __esw_qos_alloc_rate_group(esw, tsar_ix);
+	if (!group) {
+		NL_SET_ERR_MSG_MOD(extack, "E-Switch alloc group failed");
+		err = -ENOMEM;
+		goto err_alloc_group;
+	}
 
 	err = esw_qos_normalize_min_rate(esw, extack);
 	if (err) {
@@ -484,13 +506,12 @@ __esw_qos_create_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *ex
 	return group;
 
 err_min_rate:
-	list_del(&group->list);
+	__esw_qos_free_rate_group(group);
+err_alloc_group:
 	if (mlx5_destroy_scheduling_element_cmd(esw->dev,
 						SCHEDULING_HIERARCHY_E_SWITCH,
-						group->tsar_ix))
+						tsar_ix))
 		NL_SET_ERR_MSG_MOD(extack, "E-Switch destroy TSAR for group failed");
-err_sched_elem:
-	kfree(group);
 	return ERR_PTR(err);
 }
 
@@ -523,21 +544,19 @@ static int __esw_qos_destroy_rate_group(struct mlx5_eswitch *esw,
 {
 	int err;
 
-	list_del(&group->list);
-
-	err = esw_qos_normalize_min_rate(esw, extack);
-	if (err)
-		NL_SET_ERR_MSG_MOD(extack, "E-Switch groups normalization failed");
+	trace_mlx5_esw_group_qos_destroy(esw->dev, group, group->tsar_ix);
 
 	err = mlx5_destroy_scheduling_element_cmd(esw->dev,
 						  SCHEDULING_HIERARCHY_E_SWITCH,
 						  group->tsar_ix);
 	if (err)
 		NL_SET_ERR_MSG_MOD(extack, "E-Switch destroy TSAR_ID failed");
+	__esw_qos_free_rate_group(group);
 
-	trace_mlx5_esw_group_qos_destroy(esw->dev, group, group->tsar_ix);
+	err = esw_qos_normalize_min_rate(esw, extack);
+	if (err)
+		NL_SET_ERR_MSG_MOD(extack, "E-Switch groups normalization failed");
 
-	kfree(group);
 	return err;
 }
 
@@ -655,7 +674,8 @@ static int esw_qos_vport_enable(struct mlx5_eswitch *esw, struct mlx5_vport *vpo
 	if (err)
 		return err;
 
-	vport->qos.group = esw->qos.group0;
+	INIT_LIST_HEAD(&vport->qos.group_entry);
+	esw_qos_vport_set_group(vport, esw->qos.group0);
 
 	err = esw_qos_vport_create_sched_element(esw, vport, max_rate, bw_share);
 	if (err)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index ce857eae6898..f208ae16bfd2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -220,6 +220,7 @@ struct mlx5_vport {
 		/* A computed value indicating relative min_rate between vports in a group. */
 		u32 bw_share;
 		struct mlx5_esw_rate_group *group;
+		struct list_head group_entry;
 	} qos;
 
 	u16 vport;

From patchwork Tue Oct 8 18:32:14 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13826790
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Tariq Toukan
Subject: [PATCH net-next 06/14] net/mlx5: qos: Always create group0
Date: Tue, 8 Oct 2024 21:32:14 +0300
Message-ID: <20241008183222.137702-7-tariqt@nvidia.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20241008183222.137702-1-tariqt@nvidia.com>
References: <20241008183222.137702-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Cosmin Ratiu

All vports not explicitly members of a group with QoS enabled are part
of the internal esw group0, except when the hw reports that groups
aren't supported (log_esw_max_sched_depth == 0). This creates corner
cases in the code, which has to make sure that this case is supported.

Additionally, the groups are about to be moved out of eswitches, and
group0 being NULL creates additional complications there.

This patch makes sure to always create group0, even if max sched depth
is 0. In that case, a software-only group0 is created referencing the
root TSAR. Vports can point to this group when their QoS is enabled and
they'll be attached to the root TSAR directly.

This eliminates corner cases in the code by offering the guarantee that
if qos is enabled, vport->qos.group is non-NULL.

Signed-off-by: Cosmin Ratiu
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 36 +++++++++++--------
 .../net/ethernet/mellanox/mlx5/core/eswitch.h | 12 ++++---
 2 files changed, 30 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index cfff1413dcfc..958b8894f5c0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -113,7 +113,7 @@ static u32 esw_qos_calculate_group_min_rate_divider(struct mlx5_eswitch *esw,
 	/* If vports max min_rate divider is 0 but their group has bw_share
 	 * configured, then set bw_share for vports to minimal value.
 	 */
-	if (group && group->bw_share)
+	if (group->bw_share)
 		return 1;
 
 	/* A divider of 0 sets bw_share for all group vports to 0,
@@ -132,7 +132,7 @@ static u32 esw_qos_calculate_min_rate_divider(struct mlx5_eswitch *esw)
 	 * This will correspond to fw_max_bw_share in the final bw_share calculation.
 	 */
 	list_for_each_entry(group, &esw->qos.groups, list) {
-		if (group->min_rate < max_guarantee)
+		if (group->min_rate < max_guarantee || group->tsar_ix == esw->qos.root_tsar_ix)
 			continue;
 		max_guarantee = group->min_rate;
 	}
@@ -188,6 +188,8 @@ static int esw_qos_normalize_min_rate(struct mlx5_eswitch *esw, struct netlink_e
 	int err;
 
 	list_for_each_entry(group, &esw->qos.groups, list) {
+		if (group->tsar_ix == esw->qos.root_tsar_ix)
+			continue;
 		bw_share = esw_qos_calc_bw_share(group->min_rate, divider, fw_max_bw_share);
 
 		if (bw_share == group->bw_share)
@@ -252,7 +254,7 @@ static int esw_qos_set_vport_max_rate(struct mlx5_eswitch *esw, struct mlx5_vpor
 		return 0;
 
 	/* Use parent group limit if new max rate is 0. */
-	if (vport->qos.group && !max_rate)
+	if (!max_rate)
 		act_max_rate = vport->qos.group->max_rate;
 
 	err = esw_qos_vport_config(esw, vport, act_max_rate, vport->qos.bw_share, extack);
@@ -348,19 +350,17 @@ static int esw_qos_vport_create_sched_element(struct mlx5_eswitch *esw,
 	u32 sched_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {};
 	struct mlx5_esw_rate_group *group = vport->qos.group;
 	struct mlx5_core_dev *dev = esw->dev;
-	u32 parent_tsar_ix;
 	void *attr;
 	int err;
 
 	if (!esw_qos_element_type_supported(dev, SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT))
 		return -EOPNOTSUPP;
 
-	parent_tsar_ix = group ? group->tsar_ix : esw->qos.root_tsar_ix;
 	MLX5_SET(scheduling_context, sched_ctx, element_type,
 		 SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT);
 	attr = MLX5_ADDR_OF(scheduling_context, sched_ctx, element_attributes);
 	MLX5_SET(vport_element, attr, vport_number, vport->vport);
-	MLX5_SET(scheduling_context, sched_ctx, parent_element_id, parent_tsar_ix);
+	MLX5_SET(scheduling_context, sched_ctx, parent_element_id, group->tsar_ix);
 	MLX5_SET(scheduling_context, sched_ctx, max_average_bw, max_rate);
 	MLX5_SET(scheduling_context, sched_ctx, bw_share, bw_share);
@@ -605,12 +605,17 @@ static int esw_qos_create(struct mlx5_eswitch *esw, struct netlink_ext_ack *exta
 	INIT_LIST_HEAD(&esw->qos.groups);
 	if (MLX5_CAP_QOS(dev, log_esw_max_sched_depth)) {
 		esw->qos.group0 = __esw_qos_create_rate_group(esw, extack);
-		if (IS_ERR(esw->qos.group0)) {
-			esw_warn(dev, "E-Switch create rate group 0 failed (%ld)\n",
-				 PTR_ERR(esw->qos.group0));
-			err = PTR_ERR(esw->qos.group0);
-			goto err_group0;
-		}
+	} else {
+		/* The eswitch doesn't support scheduling groups.
+		 * Create a software-only group0 using the root TSAR to attach vport QoS to.
+		 */
+		if (!__esw_qos_alloc_rate_group(esw, esw->qos.root_tsar_ix))
+			esw->qos.group0 = ERR_PTR(-ENOMEM);
+	}
+	if (IS_ERR(esw->qos.group0)) {
+		err = PTR_ERR(esw->qos.group0);
+		esw_warn(dev, "E-Switch create rate group 0 failed (%d)\n", err);
+		goto err_group0;
 	}
 
 	refcount_set(&esw->qos.refcnt, 1);
@@ -628,8 +633,11 @@ static void esw_qos_destroy(struct mlx5_eswitch *esw)
 {
 	int err;
 
-	if (esw->qos.group0)
+	if (esw->qos.group0->tsar_ix != esw->qos.root_tsar_ix)
 		__esw_qos_destroy_rate_group(esw, esw->qos.group0, NULL);
+	else
+		__esw_qos_free_rate_group(esw->qos.group0);
+	esw->qos.group0 = NULL;
 
 	err = mlx5_destroy_scheduling_element_cmd(esw->dev,
 						  SCHEDULING_HIERARCHY_E_SWITCH,
@@ -699,7 +707,7 @@ void mlx5_esw_qos_vport_disable(struct mlx5_eswitch *esw, struct mlx5_vport *vpo
 	lockdep_assert_held(&esw->state_lock);
 	if (!vport->qos.enabled)
 		return;
-	WARN(vport->qos.group && vport->qos.group != esw->qos.group0,
+	WARN(vport->qos.group != esw->qos.group0,
 	     "Disabling QoS on port before detaching it from group");
 
 	err = mlx5_destroy_scheduling_element_cmd(esw->dev,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index f208ae16bfd2..fec9e843f673 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -213,6 +213,7 @@ struct mlx5_vport {
 	struct mlx5_vport_info info;
 
 	struct {
+		/* Initially false, set to true whenever any QoS features are used. */
 		bool enabled;
 		u32 esw_sched_elem_ix;
 		u32 min_rate;
@@ -362,14 +363,17 @@ struct mlx5_eswitch {
 	atomic64_t user_count;
 
 	struct {
-		u32 root_tsar_ix;
-		struct mlx5_esw_rate_group *group0;
-		struct list_head groups; /* Protected by esw->state_lock */
-
 		/* Protected by esw->state_lock.
 		 * Initially 0, meaning no QoS users and QoS is disabled.
 		 */
 		refcount_t refcnt;
+		u32 root_tsar_ix;
+		/* Contains all vports with QoS enabled but no explicit group.
+ * Cannot be NULL if QoS is enabled, but may be a fake group + * referencing the root TSAR if the esw doesn't support groups. + */ + struct mlx5_esw_rate_group *group0; + struct list_head groups; /* Protected by esw->state_lock */ } qos; struct mlx5_esw_bridge_offloads *br_offloads; From patchwork Tue Oct 8 18:32:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 13826792 X-Patchwork-Delegate: kuba@kernel.org Received: from NAM12-BN8-obe.outbound.protection.outlook.com (mail-bn8nam12on2041.outbound.protection.outlook.com [40.107.237.41]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id EC6AC212D1B for ; Tue, 8 Oct 2024 18:33:43 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=fail smtp.client-ip=40.107.237.41 ARC-Seal: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1728412426; cv=fail; b=P+ZLReMD93GlJRBq4c9zo0Xs7Pa+nmTxOYde33elq6VzZ5BNb6iYh3Ra3RLgv9iiCS2ISmvrp93O7ZMoqfcCwnUExMDmNhNJqch7OPzArnEmJ/P4A1H2xsC1RT9sYH7MRqi9P75BS3nLPBYxLGHAqKOJ61ChNz0dUWmHNyaMntQ= ARC-Message-Signature: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1728412426; c=relaxed/simple; bh=krpfjcBoUR4bnslxtdDQ9r9NDqpcU1stij6QrK05gQ4=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=qGNclg1r9vuN+hvZ1wN4UR9bp2HcSPZsmekEASn6XHd01wixH0yjuXcaiFnK+r6aLfsqGE5ONZ1q+TtNMoj7yluVEPzz1CFUYCzpfoqCUvahlWUdD/hupQCKZMkgVSnx+wgqS1PJGWY5dP8RcgNsj4y6qAO68CVx0ln0pNN396M= ARC-Authentication-Results: i=2; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=nvidia.com; spf=fail smtp.mailfrom=nvidia.com; dkim=pass (2048-bit key) header.d=Nvidia.com header.i=@Nvidia.com header.b=uCQxt8+2; arc=fail smtp.client-ip=40.107.237.41 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) 
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Tariq Toukan
Subject: [PATCH net-next 07/14] net/mlx5: qos: Drop 'esw' param from vport qos functions
Date: Tue, 8 Oct 2024 21:32:15 +0300
Message-ID: <20241008183222.137702-8-tariqt@nvidia.com>
In-Reply-To: <20241008183222.137702-1-tariqt@nvidia.com>
References: <20241008183222.137702-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
From: Cosmin Ratiu

The vport has a pointer to its own eswitch in vport->dev->priv.eswitch,
so passing the same eswitch as a parameter to the various functions
manipulating vport qos is superfluous at best and prone to errors at
worst.

More importantly, with the upcoming cross-esw scheduling changes, the
eswitch that should receive the various scheduling element commands is
NOT the same as the vport's eswitch, so the current code's assumptions
will break.

To avoid confusion and bugs, this commit drops the 'esw' parameter from
all vport qos functions and uses the vport's own eswitch pointer
instead.

Signed-off-by: Cosmin Ratiu
Reviewed-by: Carolina Jubran
Signed-off-by: Tariq Toukan
---
 .../mellanox/mlx5/core/esw/devlink_port.c     |  4 +-
 .../ethernet/mellanox/mlx5/core/esw/legacy.c  |  2 +-
 .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 95 +++++++++----------
 .../net/ethernet/mellanox/mlx5/core/esw/qos.h |  5 +-
 .../net/ethernet/mellanox/mlx5/core/eswitch.c |  2 +-
 .../net/ethernet/mellanox/mlx5/core/eswitch.h |  5 +-
 .../mellanox/mlx5/core/eswitch_offloads.c     |  4 +-
 7 files changed, 57 insertions(+), 60 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c
index f8869c9b6802..86af1891395f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c
@@ -187,7 +187,7 @@ int mlx5_esw_offloads_devlink_port_register(struct mlx5_eswitch *esw, struct mlx
 	return err;
 }
 
-void mlx5_esw_offloads_devlink_port_unregister(struct mlx5_eswitch *esw, struct mlx5_vport *vport)
+void mlx5_esw_offloads_devlink_port_unregister(struct mlx5_vport *vport)
 {
 	struct mlx5_devlink_port *dl_port;
 
@@ -195,7 +195,7 @@ void mlx5_esw_offloads_devlink_port_unregister(struct mlx5_eswitch *esw, struct
 		return;
 	dl_port = vport->dl_port;
 
-	mlx5_esw_qos_vport_update_group(esw, vport, NULL, NULL);
+	mlx5_esw_qos_vport_update_group(vport, NULL, NULL);
 	devl_rate_leaf_destroy(&dl_port->dl_port);
 
 	devl_port_unregister(&dl_port->dl_port);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
index 8587cd572da5..3c8388706e15 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
@@ -521,7 +521,7 @@ int mlx5_eswitch_set_vport_rate(struct mlx5_eswitch *esw, u16 vport,
 		return PTR_ERR(evport);
 
 	mutex_lock(&esw->state_lock);
-	err = mlx5_esw_qos_set_vport_rate(esw, evport, max_rate, min_rate);
+	err = mlx5_esw_qos_set_vport_rate(evport, max_rate, min_rate);
 	mutex_unlock(&esw->state_lock);
 	return err;
 }
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index 958b8894f5c0..baf68ffb07cc 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -67,20 +67,19 @@ static int esw_qos_group_config(struct mlx5_eswitch *esw, struct mlx5_esw_rate_g
 	return err;
 }
 
-static int esw_qos_vport_config(struct mlx5_eswitch *esw,
-				struct mlx5_vport *vport,
+static int esw_qos_vport_config(struct mlx5_vport *vport,
 				u32 max_rate, u32 bw_share,
 				struct netlink_ext_ack *extack)
 {
-	struct mlx5_core_dev *dev = esw->dev;
 	int err;
 
 	if (!vport->qos.enabled)
 		return -EIO;
 
-	err = esw_qos_sched_elem_config(dev, vport->qos.esw_sched_elem_ix, max_rate, bw_share);
+	err = esw_qos_sched_elem_config(vport->dev, vport->qos.esw_sched_elem_ix, max_rate,
+					bw_share);
 	if (err) {
-		esw_warn(esw->dev,
+		esw_warn(vport->dev,
 			 "E-Switch modify vport scheduling element failed (vport=%d,err=%d)\n",
 			 vport->vport, err);
 		NL_SET_ERR_MSG_MOD(extack, "E-Switch modify vport scheduling element failed");
@@ -169,7 +168,7 @@ static int esw_qos_normalize_group_min_rate(struct mlx5_eswitch *esw,
 		if (bw_share == vport->qos.bw_share)
 			continue;
 
-		err = esw_qos_vport_config(esw, vport, vport->qos.max_rate, bw_share, extack);
+		err = esw_qos_vport_config(vport, vport->qos.max_rate, bw_share, extack);
 		if (err)
 			return err;
 
@@ -213,16 +212,17 @@ static int esw_qos_normalize_min_rate(struct mlx5_eswitch *esw, struct netlink_e
 	return 0;
 }
 
-static int esw_qos_set_vport_min_rate(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
+static int esw_qos_set_vport_min_rate(struct mlx5_vport *vport,
 				      u32 min_rate, struct netlink_ext_ack *extack)
 {
+	struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
 	u32 fw_max_bw_share, previous_min_rate;
 	bool min_rate_supported;
 	int err;
 
 	lockdep_assert_held(&esw->state_lock);
-	fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share);
-	min_rate_supported = MLX5_CAP_QOS(esw->dev, esw_bw_share) &&
+	fw_max_bw_share = MLX5_CAP_QOS(vport->dev, max_tsar_bw_share);
+	min_rate_supported = MLX5_CAP_QOS(vport->dev, esw_bw_share) &&
 			     fw_max_bw_share >= MLX5_MIN_BW_SHARE;
 	if (min_rate && !min_rate_supported)
 		return -EOPNOTSUPP;
@@ -238,15 +238,16 @@ static int esw_qos_set_vport_min_rate(struct mlx5_eswitch *esw, struct mlx5_vpor
 	return err;
 }
 
-static int esw_qos_set_vport_max_rate(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
+static int esw_qos_set_vport_max_rate(struct mlx5_vport *vport,
 				      u32 max_rate, struct netlink_ext_ack *extack)
 {
+	struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
 	u32 act_max_rate = max_rate;
 	bool max_rate_supported;
 	int err;
 
 	lockdep_assert_held(&esw->state_lock);
-	max_rate_supported = MLX5_CAP_QOS(esw->dev, esw_rate_limit);
+	max_rate_supported = MLX5_CAP_QOS(vport->dev, esw_rate_limit);
 
 	if (max_rate && !max_rate_supported)
 		return -EOPNOTSUPP;
@@ -257,7 +258,7 @@ static int esw_qos_set_vport_max_rate(struct mlx5_eswitch *esw, struct mlx5_vpor
 	if (!max_rate)
 		act_max_rate = vport->qos.group->max_rate;
 
-	err = esw_qos_vport_config(esw, vport, act_max_rate, vport->qos.bw_share, extack);
+	err = esw_qos_vport_config(vport, act_max_rate, vport->qos.bw_share, extack);
 	if (!err)
 		vport->qos.max_rate = max_rate;
 
@@ -315,7 +316,7 @@ static int esw_qos_set_group_max_rate(struct mlx5_eswitch *esw,
 		if (vport->qos.max_rate)
 			continue;
 
-		err = esw_qos_vport_config(esw, vport, max_rate, vport->qos.bw_share, extack);
+		err = esw_qos_vport_config(vport, max_rate, vport->qos.bw_share, extack);
 		if (err)
 			NL_SET_ERR_MSG_MOD(extack,
 					   "E-Switch vport implicit rate limit setting failed");
@@ -343,13 +344,12 @@ static bool esw_qos_element_type_supported(struct mlx5_core_dev *dev, int type)
 	return false;
 }
 
-static int esw_qos_vport_create_sched_element(struct mlx5_eswitch *esw,
-					      struct mlx5_vport *vport,
+static int esw_qos_vport_create_sched_element(struct mlx5_vport *vport,
 					      u32 max_rate, u32 bw_share)
 {
 	u32 sched_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {};
 	struct mlx5_esw_rate_group *group = vport->qos.group;
-	struct mlx5_core_dev *dev = esw->dev;
+	struct mlx5_core_dev *dev = vport->dev;
 	void *attr;
 	int err;
 
@@ -369,7 +369,7 @@ static int esw_qos_vport_create_sched_element(struct mlx5_eswitch *esw,
 						 sched_ctx,
 						 &vport->qos.esw_sched_elem_ix);
 	if (err) {
-		esw_warn(vport->dev,
+		esw_warn(dev,
 			 "E-Switch create vport scheduling element failed (vport=%d,err=%d)\n",
 			 vport->vport, err);
 		return err;
@@ -378,8 +378,7 @@ static int esw_qos_vport_create_sched_element(struct mlx5_eswitch *esw,
 	return 0;
 }
 
-static int esw_qos_update_group_scheduling_element(struct mlx5_eswitch *esw,
-						   struct mlx5_vport *vport,
+static int esw_qos_update_group_scheduling_element(struct mlx5_vport *vport,
 						   struct mlx5_esw_rate_group *curr_group,
 						   struct mlx5_esw_rate_group *new_group,
 						   struct netlink_ext_ack *extack)
@@ -387,7 +386,7 @@ static int esw_qos_update_group_scheduling_element(struct mlx5_eswitch *esw,
 	u32 max_rate;
 	int err;
 
-	err = mlx5_destroy_scheduling_element_cmd(esw->dev,
+	err = mlx5_destroy_scheduling_element_cmd(vport->dev,
 						  SCHEDULING_HIERARCHY_E_SWITCH,
 						  vport->qos.esw_sched_elem_ix);
 	if (err) {
@@ -398,7 +397,7 @@ static int esw_qos_update_group_scheduling_element(struct mlx5_eswitch *esw,
 	esw_qos_vport_set_group(vport, new_group);
 	/* Use new group max rate if vport max rate is unlimited. */
 	max_rate = vport->qos.max_rate ? vport->qos.max_rate : new_group->max_rate;
-	err = esw_qos_vport_create_sched_element(esw, vport, max_rate, vport->qos.bw_share);
+	err = esw_qos_vport_create_sched_element(vport, max_rate, vport->qos.bw_share);
 	if (err) {
 		NL_SET_ERR_MSG_MOD(extack, "E-Switch vport group set failed.");
 		goto err_sched;
@@ -409,18 +408,18 @@ static int esw_qos_update_group_scheduling_element(struct mlx5_eswitch *esw,
 err_sched:
 	esw_qos_vport_set_group(vport, curr_group);
 	max_rate = vport->qos.max_rate ? vport->qos.max_rate : curr_group->max_rate;
-	if (esw_qos_vport_create_sched_element(esw, vport, max_rate, vport->qos.bw_share))
-		esw_warn(esw->dev, "E-Switch vport group restore failed (vport=%d)\n",
+	if (esw_qos_vport_create_sched_element(vport, max_rate, vport->qos.bw_share))
+		esw_warn(vport->dev, "E-Switch vport group restore failed (vport=%d)\n",
 			 vport->vport);
 
 	return err;
 }
 
-static int esw_qos_vport_update_group(struct mlx5_eswitch *esw,
-				      struct mlx5_vport *vport,
+static int esw_qos_vport_update_group(struct mlx5_vport *vport,
 				      struct mlx5_esw_rate_group *group,
 				      struct netlink_ext_ack *extack)
 {
+	struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
 	struct mlx5_esw_rate_group *new_group, *curr_group;
 	int err;
 
@@ -432,7 +431,7 @@ static int esw_qos_vport_update_group(struct mlx5_eswitch *esw,
 	if (curr_group == new_group)
 		return 0;
 
-	err = esw_qos_update_group_scheduling_element(esw, vport, curr_group, new_group, extack);
+	err = esw_qos_update_group_scheduling_element(vport, curr_group, new_group, extack);
 	if (err)
 		return err;
 
@@ -669,9 +668,10 @@ static void esw_qos_put(struct mlx5_eswitch *esw)
 		esw_qos_destroy(esw);
 }
 
-static int esw_qos_vport_enable(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
+static int esw_qos_vport_enable(struct mlx5_vport *vport,
 				u32 max_rate, u32 bw_share, struct netlink_ext_ack *extack)
 {
+	struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
 	int err;
 
 	lockdep_assert_held(&esw->state_lock);
@@ -685,7 +685,7 @@ static int esw_qos_vport_enable(struct mlx5_eswitch *esw, struct mlx5_vport *vpo
 	INIT_LIST_HEAD(&vport->qos.group_entry);
 	esw_qos_vport_set_group(vport, esw->qos.group0);
 
-	err = esw_qos_vport_create_sched_element(esw, vport, max_rate, bw_share);
+	err = esw_qos_vport_create_sched_element(vport, max_rate, bw_share);
 	if (err)
 		goto err_out;
 
@@ -700,8 +700,9 @@ static int esw_qos_vport_enable(struct mlx5_eswitch *esw, struct mlx5_vport *vpo
 	return err;
 }
 
-void mlx5_esw_qos_vport_disable(struct mlx5_eswitch *esw, struct mlx5_vport *vport)
+void mlx5_esw_qos_vport_disable(struct mlx5_vport *vport)
 {
+	struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
 	int err;
 
 	lockdep_assert_held(&esw->state_lock);
@@ -723,20 +724,19 @@ void mlx5_esw_qos_vport_disable(struct mlx5_eswitch *esw, struct mlx5_vpo
 	esw_qos_put(esw);
 }
 
-int mlx5_esw_qos_set_vport_rate(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
-				u32 max_rate, u32 min_rate)
+int mlx5_esw_qos_set_vport_rate(struct mlx5_vport *vport, u32 max_rate, u32 min_rate)
 {
+	struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
 	int err;
 
 	lockdep_assert_held(&esw->state_lock);
-	err = esw_qos_vport_enable(esw, vport, 0, 0, NULL);
+	err = esw_qos_vport_enable(vport, 0, 0, NULL);
 	if (err)
 		return err;
 
-	err = esw_qos_set_vport_min_rate(esw, vport, min_rate, NULL);
+	err = esw_qos_set_vport_min_rate(vport, min_rate, NULL);
 	if (!err)
-		err = esw_qos_set_vport_max_rate(esw, vport, max_rate, NULL);
-
+		err = esw_qos_set_vport_max_rate(vport, max_rate, NULL);
 	return err;
 }
 
@@ -830,12 +830,12 @@ int mlx5_esw_qos_modify_vport_rate(struct mlx5_eswitch *esw, u16 vport_num, u32
 	mutex_lock(&esw->state_lock);
 	if (!vport->qos.enabled) {
 		/* Eswitch QoS wasn't enabled yet. Enable it and vport QoS. */
-		err = esw_qos_vport_enable(esw, vport, rate_mbps, vport->qos.bw_share, NULL);
+		err = esw_qos_vport_enable(vport, rate_mbps, vport->qos.bw_share, NULL);
 	} else {
 		MLX5_SET(scheduling_context, ctx, max_average_bw, rate_mbps);
 
 		bitmask = MODIFY_SCHEDULING_ELEMENT_IN_MODIFY_BITMASK_MAX_AVERAGE_BW;
-		err = mlx5_modify_scheduling_element_cmd(esw->dev,
+		err = mlx5_modify_scheduling_element_cmd(vport->dev,
 							 SCHEDULING_HIERARCHY_E_SWITCH,
 							 ctx,
 							 vport->qos.esw_sched_elem_ix,
@@ -897,11 +897,11 @@ int mlx5_esw_devlink_rate_leaf_tx_share_set(struct devlink_rate *rate_leaf, void
 		return err;
 
 	mutex_lock(&esw->state_lock);
-	err = esw_qos_vport_enable(esw, vport, 0, 0, extack);
+	err = esw_qos_vport_enable(vport, 0, 0, extack);
 	if (err)
 		goto unlock;
 
-	err = esw_qos_set_vport_min_rate(esw, vport, tx_share, extack);
+	err = esw_qos_set_vport_min_rate(vport, tx_share, extack);
 unlock:
 	mutex_unlock(&esw->state_lock);
 	return err;
@@ -923,11 +923,11 @@ int mlx5_esw_devlink_rate_leaf_tx_max_set(struct devlink_rate *rate_leaf, void *
 		return err;
 
 	mutex_lock(&esw->state_lock);
-	err = esw_qos_vport_enable(esw, vport, 0, 0, extack);
+	err = esw_qos_vport_enable(vport, 0, 0, extack);
 	if (err)
 		goto unlock;
 
-	err = esw_qos_set_vport_max_rate(esw, vport, tx_max, extack);
+	err = esw_qos_set_vport_max_rate(vport, tx_max, extack);
unlock:
 	mutex_unlock(&esw->state_lock);
 	return err;
@@ -1017,20 +1017,20 @@ int mlx5_esw_devlink_rate_node_del(struct devlink_rate *rate_node, void *priv,
 	return err;
 }
 
-int mlx5_esw_qos_vport_update_group(struct mlx5_eswitch *esw,
-				    struct mlx5_vport *vport,
+int mlx5_esw_qos_vport_update_group(struct mlx5_vport *vport,
 				    struct mlx5_esw_rate_group *group,
 				    struct netlink_ext_ack *extack)
 {
+	struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
 	int err = 0;
 
 	mutex_lock(&esw->state_lock);
 	if (!vport->qos.enabled && !group)
 		goto unlock;
 
-	err = esw_qos_vport_enable(esw, vport, 0, 0, extack);
+	err = esw_qos_vport_enable(vport, 0, 0, extack);
 	if (!err)
-		err = esw_qos_vport_update_group(esw, vport, group, extack);
+		err = esw_qos_vport_update_group(vport, group, extack);
unlock:
 	mutex_unlock(&esw->state_lock);
 	return err;
@@ -1045,9 +1045,8 @@ int mlx5_esw_devlink_rate_parent_set(struct devlink_rate *devlink_rate,
 	struct mlx5_vport *vport = priv;
 
 	if (!parent)
-		return mlx5_esw_qos_vport_update_group(vport->dev->priv.eswitch,
-						       vport, NULL, extack);
+		return mlx5_esw_qos_vport_update_group(vport, NULL, extack);
 
 	group = parent_priv;
-	return mlx5_esw_qos_vport_update_group(vport->dev->priv.eswitch, vport, group, extack);
+	return mlx5_esw_qos_vport_update_group(vport, group, extack);
 }
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h
index 0141e9d52037..c4f04c3e6a59 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h
@@ -6,9 +6,8 @@
 
 #ifdef CONFIG_MLX5_ESWITCH
 
-int mlx5_esw_qos_set_vport_rate(struct mlx5_eswitch *esw, struct mlx5_vport *evport,
-				u32 max_rate, u32 min_rate);
-void mlx5_esw_qos_vport_disable(struct mlx5_eswitch *esw, struct mlx5_vport *vport);
+int mlx5_esw_qos_set_vport_rate(struct mlx5_vport *evport, u32 max_rate, u32 min_rate);
+void mlx5_esw_qos_vport_disable(struct mlx5_vport *vport);
 
 int mlx5_esw_devlink_rate_leaf_tx_share_set(struct devlink_rate *rate_leaf, void *priv,
 					    u64 tx_share, struct netlink_ext_ack *extack);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 17f78091ad30..4a187f39daba 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -894,7 +894,7 @@ static void esw_vport_cleanup(struct mlx5_eswitch *esw, struct mlx5_vport *vport
 					      vport_num, 1,
 					      MLX5_VPORT_ADMIN_STATE_DOWN);
 
-	mlx5_esw_qos_vport_disable(esw, vport);
+	mlx5_esw_qos_vport_disable(vport);
 	esw_vport_cleanup_acl(esw, vport);
 }
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index fec9e843f673..567276900a37 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -433,8 +433,7 @@ int mlx5_eswitch_set_vport_trust(struct mlx5_eswitch *esw,
 				 u16 vport_num, bool setting);
 int mlx5_eswitch_set_vport_rate(struct mlx5_eswitch *esw, u16 vport,
 				u32 max_rate, u32 min_rate);
-int mlx5_esw_qos_vport_update_group(struct mlx5_eswitch *esw,
-				    struct mlx5_vport *vport,
+int mlx5_esw_qos_vport_update_group(struct mlx5_vport *vport,
 				    struct mlx5_esw_rate_group *group,
 				    struct netlink_ext_ack *extack);
 int mlx5_eswitch_set_vepa(struct mlx5_eswitch *esw, u8 setting);
@@ -812,7 +811,7 @@ int mlx5_esw_offloads_sf_devlink_port_init(struct mlx5_eswitch *esw, struct mlx5
 void mlx5_esw_offloads_sf_devlink_port_cleanup(struct mlx5_eswitch *esw, struct mlx5_vport *vport);
 
 int mlx5_esw_offloads_devlink_port_register(struct mlx5_eswitch *esw, struct mlx5_vport *vport);
-void mlx5_esw_offloads_devlink_port_unregister(struct mlx5_eswitch *esw, struct mlx5_vport *vport);
+void mlx5_esw_offloads_devlink_port_unregister(struct mlx5_vport *vport);
 struct devlink_port *mlx5_esw_offloads_devlink_port(struct mlx5_eswitch *esw, u16 vport_num);
 
 int mlx5_esw_sf_max_hpf_functions(struct mlx5_core_dev *dev, u16 *max_sfs, u16 *sf_base_id);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index f24f91d213f2..fd34f43d18d5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -2617,7 +2617,7 @@ int mlx5_esw_offloads_load_rep(struct mlx5_eswitch *esw, struct mlx5_vport *vpor
 	return err;
 
load_err:
-	mlx5_esw_offloads_devlink_port_unregister(esw, vport);
+	mlx5_esw_offloads_devlink_port_unregister(vport);
 	return err;
 }
 
@@ -2628,7 +2628,7 @@ void mlx5_esw_offloads_unload_rep(struct mlx5_eswitch *esw, struct mlx5_vport *v
 
 	mlx5_esw_offloads_rep_unload(esw, vport->vport);
 
-	mlx5_esw_offloads_devlink_port_unregister(esw, vport);
+	mlx5_esw_offloads_devlink_port_unregister(vport);
 }
 
 static int esw_set_slave_root_fdb(struct mlx5_core_dev *master,

From patchwork Tue Oct 8 18:32:16 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13826791
X-Patchwork-Delegate: kuba@kernel.org
rnnvmail204.nvidia.com (10.129.68.6) by rnnvmail205.nvidia.com (10.129.68.10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Tue, 8 Oct 2024 11:33:23 -0700 Received: from vdi.nvidia.com (10.127.8.10) by mail.nvidia.com (10.129.68.6) with Microsoft SMTP Server id 15.2.1544.4 via Frontend Transport; Tue, 8 Oct 2024 11:33:20 -0700 From: Tariq Toukan To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , , , Tariq Toukan Subject: [PATCH net-next 08/14] net/mlx5: qos: Store the eswitch in a mlx5_esw_rate_group Date: Tue, 8 Oct 2024 21:32:16 +0300 Message-ID: <20241008183222.137702-9-tariqt@nvidia.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20241008183222.137702-1-tariqt@nvidia.com> References: <20241008183222.137702-1-tariqt@nvidia.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-NV-OnPremToCloud: ExternallySecured X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: CY4PEPF0000E9D8:EE_|IA1PR12MB6457:EE_ X-MS-Office365-Filtering-Correlation-Id: fd89e80b-5333-4ea5-9497-08dce7c7b904 X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0;ARA:13230040|36860700013|1800799024|376014|82310400026; X-Microsoft-Antispam-Message-Info: 
From: Cosmin Ratiu

The rate groups are about to be moved out of eswitches, so store a
reference to the eswitch they belong to so things can still work later.
This allows dropping the esw parameter from a couple of functions and
simplifying some of the code.

Use this opportunity to make sure that vport scheduling element commands
are always sent to the group eswitch, because that will be relevant for
cross-esw scheduling. For now though, the eswitches are not different.

There is no functionality change here.

Signed-off-by: Cosmin Ratiu
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 115 ++++++++----
 1 file changed, 52 insertions(+), 63 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index baf68ffb07cc..3de3460ec8cd 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -20,6 +20,8 @@ struct mlx5_esw_rate_group {
 	/* A computed value indicating relative min_rate between group members. */
 	u32 bw_share;
 	struct list_head list;
+	/* The eswitch this group belongs to. */
+	struct mlx5_eswitch *esw;
 	/* Vport members of this group.*/
 	struct list_head members;
 };
@@ -52,10 +54,10 @@ static int esw_qos_sched_elem_config(struct mlx5_core_dev *dev, u32 sched_elem_i
 					bitmask);
 }
 
-static int esw_qos_group_config(struct mlx5_eswitch *esw, struct mlx5_esw_rate_group *group,
+static int esw_qos_group_config(struct mlx5_esw_rate_group *group,
 				u32 max_rate, u32 bw_share, struct netlink_ext_ack *extack)
 {
-	struct mlx5_core_dev *dev = esw->dev;
+	struct mlx5_core_dev *dev = group->esw->dev;
 	int err;
 
 	err = esw_qos_sched_elem_config(dev, group->tsar_ix, max_rate, bw_share);
@@ -71,15 +73,12 @@ static int esw_qos_vport_config(struct mlx5_vport *vport,
 				u32 max_rate, u32 bw_share,
 				struct netlink_ext_ack *extack)
 {
+	struct mlx5_core_dev *dev = vport->qos.group->esw->dev;
 	int err;
 
-	if (!vport->qos.enabled)
-		return -EIO;
-
-	err = esw_qos_sched_elem_config(vport->dev, vport->qos.esw_sched_elem_ix, max_rate,
-					bw_share);
+	err = esw_qos_sched_elem_config(dev, vport->qos.esw_sched_elem_ix, max_rate, bw_share);
 	if (err) {
-		esw_warn(vport->dev,
+		esw_warn(dev,
 			 "E-Switch modify vport scheduling element failed (vport=%d,err=%d)\n",
 			 vport->vport, err);
 		NL_SET_ERR_MSG_MOD(extack, "E-Switch modify vport scheduling element failed");
@@ -91,10 +90,9 @@ static int esw_qos_vport_config(struct mlx5_vport *vport,
 	return 0;
 }
 
-static u32 esw_qos_calculate_group_min_rate_divider(struct mlx5_eswitch *esw,
-						    struct mlx5_esw_rate_group *group)
+static u32 esw_qos_calculate_group_min_rate_divider(struct mlx5_esw_rate_group *group)
 {
-	u32 fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share);
+	u32 fw_max_bw_share = MLX5_CAP_QOS(group->esw->dev, max_tsar_bw_share);
 	struct mlx5_vport *vport;
 	u32 max_guarantee = 0;
 
@@ -152,12 +150,11 @@ static u32 esw_qos_calc_bw_share(u32 min_rate, u32 divider, u32 fw_max)
 	return min_t(u32, max_t(u32, DIV_ROUND_UP(min_rate, divider), MLX5_MIN_BW_SHARE), fw_max);
 }
 
-static int esw_qos_normalize_group_min_rate(struct mlx5_eswitch *esw,
-					    struct mlx5_esw_rate_group *group,
+static int esw_qos_normalize_group_min_rate(struct mlx5_esw_rate_group *group,
 					    struct netlink_ext_ack *extack)
 {
-	u32 fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share);
-	u32 divider = esw_qos_calculate_group_min_rate_divider(esw, group);
+	u32 fw_max_bw_share = MLX5_CAP_QOS(group->esw->dev, max_tsar_bw_share);
+	u32 divider = esw_qos_calculate_group_min_rate_divider(group);
 	struct mlx5_vport *vport;
 	u32 bw_share;
 	int err;
@@ -194,7 +191,7 @@ static int esw_qos_normalize_min_rate(struct mlx5_eswitch *esw, struct netlink_e
 		if (bw_share == group->bw_share)
 			continue;
 
-		err = esw_qos_group_config(esw, group, group->max_rate, bw_share, extack);
+		err = esw_qos_group_config(group, group->max_rate, bw_share, extack);
 		if (err)
 			return err;
 
@@ -203,7 +200,7 @@ static int esw_qos_normalize_min_rate(struct mlx5_eswitch *esw, struct netlink_e
 		/* All the group's vports need to be set with default bw_share
 		 * to enable them with QOS
 		 */
-		err = esw_qos_normalize_group_min_rate(esw, group, extack);
+		err = esw_qos_normalize_group_min_rate(group, extack);
 		if (err)
 			return err;
 
@@ -231,7 +228,7 @@ static int esw_qos_set_vport_min_rate(struct mlx5_vport *vport,
 	previous_min_rate = vport->qos.min_rate;
 	vport->qos.min_rate = min_rate;
-	err = esw_qos_normalize_group_min_rate(esw, vport->qos.group, extack);
+	err = esw_qos_normalize_group_min_rate(vport->qos.group, extack);
 	if (err)
 		vport->qos.min_rate = previous_min_rate;
 
@@ -266,15 +263,15 @@ static int esw_qos_set_vport_max_rate(struct mlx5_vport *vport,
 	return err;
 }
 
-static int esw_qos_set_group_min_rate(struct mlx5_eswitch *esw, struct mlx5_esw_rate_group *group,
+static int esw_qos_set_group_min_rate(struct mlx5_esw_rate_group *group,
 				      u32 min_rate, struct netlink_ext_ack *extack)
 {
-	u32 fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share);
-	struct mlx5_core_dev *dev = esw->dev;
+	struct mlx5_eswitch *esw = group->esw;
 	u32 previous_min_rate;
 	int err;
 
-	if (!MLX5_CAP_QOS(dev, esw_bw_share) || fw_max_bw_share < MLX5_MIN_BW_SHARE)
+	if (!MLX5_CAP_QOS(esw->dev, esw_bw_share) ||
+	    MLX5_CAP_QOS(esw->dev, max_tsar_bw_share) < MLX5_MIN_BW_SHARE)
 		return -EOPNOTSUPP;
 
 	if (min_rate == group->min_rate)
@@ -295,8 +292,7 @@ static int esw_qos_set_group_min_rate(struct mlx5_eswitch *esw, struct mlx5_esw_
 	return err;
 }
 
-static int esw_qos_set_group_max_rate(struct mlx5_eswitch *esw,
-				      struct mlx5_esw_rate_group *group,
+static int esw_qos_set_group_max_rate(struct mlx5_esw_rate_group *group,
 				      u32 max_rate, struct netlink_ext_ack *extack)
 {
 	struct mlx5_vport *vport;
@@ -305,7 +301,7 @@ static int esw_qos_set_group_max_rate(struct mlx5_eswitch *esw,
 	if (group->max_rate == max_rate)
 		return 0;
 
-	err = esw_qos_group_config(esw, group, max_rate, group->bw_share, extack);
+	err = esw_qos_group_config(group, max_rate, group->bw_share, extack);
 	if (err)
 		return err;
 
@@ -349,7 +345,7 @@ static int esw_qos_vport_create_sched_element(struct mlx5_vport *vport,
 {
 	u32 sched_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {};
 	struct mlx5_esw_rate_group *group = vport->qos.group;
-	struct mlx5_core_dev *dev = vport->dev;
+	struct mlx5_core_dev *dev = group->esw->dev;
 	void *attr;
 	int err;
 
@@ -386,7 +382,7 @@ static int esw_qos_update_group_scheduling_element(struct mlx5_vport *vport,
 	u32 max_rate;
 	int err;
 
-	err = mlx5_destroy_scheduling_element_cmd(vport->dev,
+	err = mlx5_destroy_scheduling_element_cmd(curr_group->esw->dev,
 						  SCHEDULING_HIERARCHY_E_SWITCH,
 						  vport->qos.esw_sched_elem_ix);
 	if (err) {
@@ -409,7 +405,7 @@ static int esw_qos_update_group_scheduling_element(struct mlx5_vport *vport,
 	esw_qos_vport_set_group(vport, curr_group);
 	max_rate = vport->qos.max_rate ? vport->qos.max_rate : curr_group->max_rate;
 	if (esw_qos_vport_create_sched_element(vport, max_rate, vport->qos.bw_share))
-		esw_warn(vport->dev, "E-Switch vport group restore failed (vport=%d)\n",
+		esw_warn(curr_group->esw->dev, "E-Switch vport group restore failed (vport=%d)\n",
 			 vport->vport);
 
 	return err;
@@ -437,8 +433,8 @@ static int esw_qos_vport_update_group(struct mlx5_vport *vport,
 	/* Recalculate bw share weights of old and new groups */
 	if (vport->qos.bw_share || new_group->bw_share) {
-		esw_qos_normalize_group_min_rate(esw, curr_group, extack);
-		esw_qos_normalize_group_min_rate(esw, new_group, extack);
+		esw_qos_normalize_group_min_rate(curr_group, extack);
+		esw_qos_normalize_group_min_rate(new_group, extack);
 	}
 
 	return 0;
@@ -453,6 +449,7 @@ __esw_qos_alloc_rate_group(struct mlx5_eswitch *esw, u32 tsar_ix)
 	if (!group)
 		return NULL;
 
+	group->esw = esw;
 	group->tsar_ix = tsar_ix;
 	INIT_LIST_HEAD(&group->members);
 	list_add_tail(&group->list, &esw->qos.groups);
@@ -537,10 +534,10 @@ esw_qos_create_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *exta
 	return group;
 }
 
-static int __esw_qos_destroy_rate_group(struct mlx5_eswitch *esw,
-					struct mlx5_esw_rate_group *group,
+static int __esw_qos_destroy_rate_group(struct mlx5_esw_rate_group *group,
 					struct netlink_ext_ack *extack)
 {
+	struct mlx5_eswitch *esw = group->esw;
 	int err;
 
 	trace_mlx5_esw_group_qos_destroy(esw->dev, group, group->tsar_ix);
@@ -560,18 +557,6 @@ static int __esw_qos_destroy_rate_group(struct mlx5_eswitch *esw,
 	return err;
 }
 
-static int esw_qos_destroy_rate_group(struct mlx5_eswitch *esw,
-				      struct mlx5_esw_rate_group *group,
-				      struct netlink_ext_ack *extack)
-{
-	int err;
-
-	err = __esw_qos_destroy_rate_group(esw, group, extack);
-	esw_qos_put(esw);
-
-	return err;
-}
-
 static int esw_qos_create(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack)
 {
 	u32 tsar_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {};
@@ -633,7 +618,7 @@ static void esw_qos_destroy(struct mlx5_eswitch *esw)
 	int err;
 
 	if (esw->qos.group0->tsar_ix != esw->qos.root_tsar_ix)
-		__esw_qos_destroy_rate_group(esw, esw->qos.group0, NULL);
+		__esw_qos_destroy_rate_group(esw->qos.group0, NULL);
 	else
 		__esw_qos_free_rate_group(esw->qos.group0);
 	esw->qos.group0 = NULL;
@@ -703,6 +688,7 @@ static int esw_qos_vport_enable(struct mlx5_vport *vport,
 void mlx5_esw_qos_vport_disable(struct mlx5_vport *vport)
 {
 	struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
+	struct mlx5_core_dev *dev;
 	int err;
 
 	lockdep_assert_held(&esw->state_lock);
@@ -711,11 +697,13 @@ void mlx5_esw_qos_vport_disable(struct mlx5_vport *vport)
 	WARN(vport->qos.group != esw->qos.group0,
 	     "Disabling QoS on port before detaching it from group");
 
-	err = mlx5_destroy_scheduling_element_cmd(esw->dev,
+	dev = vport->qos.group->esw->dev;
+	err = mlx5_destroy_scheduling_element_cmd(dev,
 						  SCHEDULING_HIERARCHY_E_SWITCH,
 						  vport->qos.esw_sched_elem_ix);
 	if (err)
-		esw_warn(esw->dev, "E-Switch destroy vport scheduling element failed (vport=%d,err=%d)\n",
+		esw_warn(dev,
+			 "E-Switch destroy vport scheduling element failed (vport=%d,err=%d)\n",
 			 vport->vport, err);
 
 	memset(&vport->qos, 0, sizeof(vport->qos));
@@ -832,10 +820,11 @@ int mlx5_esw_qos_modify_vport_rate(struct mlx5_eswitch *esw, u16 vport_num, u32
 		/* Eswitch QoS wasn't enabled yet. Enable it and vport QoS. */
 		err = esw_qos_vport_enable(vport, rate_mbps, vport->qos.bw_share, NULL);
 	} else {
-		MLX5_SET(scheduling_context, ctx, max_average_bw, rate_mbps);
+		struct mlx5_core_dev *dev = vport->qos.group->esw->dev;
+		MLX5_SET(scheduling_context, ctx, max_average_bw, rate_mbps);
 		bitmask = MODIFY_SCHEDULING_ELEMENT_IN_MODIFY_BITMASK_MAX_AVERAGE_BW;
-		err = mlx5_modify_scheduling_element_cmd(vport->dev,
+		err = mlx5_modify_scheduling_element_cmd(dev,
 							 SCHEDULING_HIERARCHY_E_SWITCH,
 							 ctx,
 							 vport->qos.esw_sched_elem_ix,
@@ -936,17 +925,16 @@ int mlx5_esw_devlink_rate_leaf_tx_max_set(struct devlink_rate *rate_leaf, void *
 int mlx5_esw_devlink_rate_node_tx_share_set(struct devlink_rate *rate_node, void *priv,
 					    u64 tx_share, struct netlink_ext_ack *extack)
 {
-	struct mlx5_core_dev *dev = devlink_priv(rate_node->devlink);
-	struct mlx5_eswitch *esw = dev->priv.eswitch;
 	struct mlx5_esw_rate_group *group = priv;
+	struct mlx5_eswitch *esw = group->esw;
 	int err;
 
-	err = esw_qos_devlink_rate_to_mbps(dev, "tx_share", &tx_share, extack);
+	err = esw_qos_devlink_rate_to_mbps(esw->dev, "tx_share", &tx_share, extack);
 	if (err)
 		return err;
 
 	mutex_lock(&esw->state_lock);
-	err = esw_qos_set_group_min_rate(esw, group, tx_share, extack);
+	err = esw_qos_set_group_min_rate(group, tx_share, extack);
 	mutex_unlock(&esw->state_lock);
 	return err;
 }
@@ -954,17 +942,16 @@ int mlx5_esw_devlink_rate_node_tx_share_set(struct devlink_rate *rate_node, void
 int mlx5_esw_devlink_rate_node_tx_max_set(struct devlink_rate *rate_node, void *priv,
 					  u64 tx_max, struct netlink_ext_ack *extack)
 {
-	struct mlx5_core_dev *dev = devlink_priv(rate_node->devlink);
-	struct mlx5_eswitch *esw = dev->priv.eswitch;
 	struct mlx5_esw_rate_group *group = priv;
+	struct mlx5_eswitch *esw = group->esw;
 	int err;
 
-	err = esw_qos_devlink_rate_to_mbps(dev, "tx_max", &tx_max, extack);
+	err = esw_qos_devlink_rate_to_mbps(esw->dev, "tx_max", &tx_max, extack);
 	if (err)
 		return err;
 
 	mutex_lock(&esw->state_lock);
-	err = esw_qos_set_group_max_rate(esw, group, tx_max, extack);
+	err = esw_qos_set_group_max_rate(group, tx_max, extack);
 	mutex_unlock(&esw->state_lock);
 	return err;
 }
@@ -1004,15 +991,12 @@ int mlx5_esw_devlink_rate_node_del(struct devlink_rate *rate_node, void *priv,
 				   struct netlink_ext_ack *extack)
 {
 	struct mlx5_esw_rate_group *group = priv;
-	struct mlx5_eswitch *esw;
+	struct mlx5_eswitch *esw = group->esw;
 	int err;
 
-	esw = mlx5_devlink_eswitch_get(rate_node->devlink);
-	if (IS_ERR(esw))
-		return PTR_ERR(esw);
-
 	mutex_lock(&esw->state_lock);
-	err = esw_qos_destroy_rate_group(esw, group, extack);
+	err = __esw_qos_destroy_rate_group(group, extack);
+	esw_qos_put(esw);
 	mutex_unlock(&esw->state_lock);
 	return err;
 }
@@ -1024,6 +1008,11 @@ int mlx5_esw_qos_vport_update_group(struct mlx5_vport *vport,
 	struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
 	int err = 0;
 
+	if (group && group->esw != esw) {
+		NL_SET_ERR_MSG_MOD(extack, "Cross E-Switch scheduling is not supported");
+		return -EOPNOTSUPP;
+	}
+
 	mutex_lock(&esw->state_lock);
 	if (!vport->qos.enabled && !group)
 		goto unlock;

From patchwork Tue Oct 8 18:32:17 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13826793
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Tariq Toukan
Subject: [PATCH net-next 09/14] net/mlx5: qos: Add an explicit 'dev' to vport trace calls
Date: Tue, 8 Oct 2024 21:32:17 +0300
Message-ID: <20241008183222.137702-10-tariqt@nvidia.com>
In-Reply-To: <20241008183222.137702-1-tariqt@nvidia.com>
References: <20241008183222.137702-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Cosmin Ratiu

vport qos trace calls used vport->dev implicitly as the device to which
the command was sent (and thus the device logged in traces). But that
will no longer be the case for cross-esw scheduling, where the commands
have to be sent to the group esw device instead. This commit corrects
that.
Signed-off-by: Cosmin Ratiu
Signed-off-by: Tariq Toukan
---
 .../mlx5/core/esw/diag/qos_tracepoint.h       | 23 +++++++++++--------
 .../net/ethernet/mellanox/mlx5/core/esw/qos.c |  6 ++---
 2 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/qos_tracepoint.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/qos_tracepoint.h
index 0ebbd699903d..645bad0d625f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/qos_tracepoint.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/qos_tracepoint.h
@@ -11,9 +11,9 @@
 #include "eswitch.h"
 
 TRACE_EVENT(mlx5_esw_vport_qos_destroy,
-	    TP_PROTO(const struct mlx5_vport *vport),
-	    TP_ARGS(vport),
-	    TP_STRUCT__entry(__string(devname, dev_name(vport->dev->device))
+	    TP_PROTO(const struct mlx5_core_dev *dev, const struct mlx5_vport *vport),
+	    TP_ARGS(dev, vport),
+	    TP_STRUCT__entry(__string(devname, dev_name(dev->device))
			     __field(unsigned short, vport_id)
			     __field(unsigned int, sched_elem_ix)
			     ),
@@ -27,9 +27,10 @@ TRACE_EVENT(mlx5_esw_vport_qos_destroy,
	    );
 
 DECLARE_EVENT_CLASS(mlx5_esw_vport_qos_template,
-		    TP_PROTO(const struct mlx5_vport *vport, u32 bw_share, u32 max_rate),
-		    TP_ARGS(vport, bw_share, max_rate),
-		    TP_STRUCT__entry(__string(devname, dev_name(vport->dev->device))
+		    TP_PROTO(const struct mlx5_core_dev *dev, const struct mlx5_vport *vport,
+			     u32 bw_share, u32 max_rate),
+		    TP_ARGS(dev, vport, bw_share, max_rate),
+		    TP_STRUCT__entry(__string(devname, dev_name(dev->device))
				     __field(unsigned short, vport_id)
				     __field(unsigned int, sched_elem_ix)
				     __field(unsigned int, bw_share)
@@ -50,13 +51,15 @@ DECLARE_EVENT_CLASS(mlx5_esw_vport_qos_template,
	    );
 
 DEFINE_EVENT(mlx5_esw_vport_qos_template, mlx5_esw_vport_qos_create,
-	     TP_PROTO(const struct mlx5_vport *vport, u32 bw_share, u32 max_rate),
-	     TP_ARGS(vport, bw_share, max_rate)
+	     TP_PROTO(const struct mlx5_core_dev *dev, const struct mlx5_vport *vport,
+		      u32 bw_share, u32 max_rate),
+	     TP_ARGS(dev, vport, bw_share, max_rate)
	     );
 
 DEFINE_EVENT(mlx5_esw_vport_qos_template, mlx5_esw_vport_qos_config,
-	     TP_PROTO(const struct mlx5_vport *vport, u32 bw_share, u32 max_rate),
-	     TP_ARGS(vport, bw_share, max_rate)
+	     TP_PROTO(const struct mlx5_core_dev *dev, const struct mlx5_vport *vport,
+		      u32 bw_share, u32 max_rate),
+	     TP_ARGS(dev, vport, bw_share, max_rate)
	     );
 
 DECLARE_EVENT_CLASS(mlx5_esw_group_qos_template,

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index 3de3460ec8cd..8b24076cbdb5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -85,7 +85,7 @@ static int esw_qos_vport_config(struct mlx5_vport *vport,
		return err;
	}
 
-	trace_mlx5_esw_vport_qos_config(vport, bw_share, max_rate);
+	trace_mlx5_esw_vport_qos_config(dev, vport, bw_share, max_rate);
 
	return 0;
 }
@@ -675,7 +675,7 @@ static int esw_qos_vport_enable(struct mlx5_vport *vport,
		goto err_out;
 
	vport->qos.enabled = true;
-	trace_mlx5_esw_vport_qos_create(vport, bw_share, max_rate);
+	trace_mlx5_esw_vport_qos_create(vport->dev, vport, bw_share, max_rate);
 
	return 0;
 
@@ -707,7 +707,7 @@ void mlx5_esw_qos_vport_disable(struct mlx5_vport *vport)
			 vport->vport, err);
 
	memset(&vport->qos, 0, sizeof(vport->qos));
-	trace_mlx5_esw_vport_qos_destroy(vport);
+	trace_mlx5_esw_vport_qos_destroy(dev, vport);
 
	esw_qos_put(esw);
 }

From patchwork Tue Oct 8 18:32:18 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13826794
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Tariq Toukan
Subject: [PATCH net-next 10/14] net/mlx5: qos: Rename rate group 'list' as 'parent_entry'
Date: Tue, 8 Oct 2024 21:32:18 +0300
Message-ID: <20241008183222.137702-11-tariqt@nvidia.com>
In-Reply-To: <20241008183222.137702-1-tariqt@nvidia.com>
References: <20241008183222.137702-1-tariqt@nvidia.com>

From: Cosmin Ratiu

'list' is not very descriptive; a list membership field should clearly state which list the entry belongs to. This commit renames the rate group's entry in the esw groups list to 'parent_entry' to make the code more readable. This is a no-op change.
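As a plain-C illustration of the point above, here is a minimal userspace sketch (an assumed simplification, not the kernel's <linux/list.h>) of an intrusive list whose member name records which list it belongs to; the `to_group` macro plays the role of the kernel's `container_of`, recovering the enclosing group from its `parent_entry`:

```c
/* Userspace approximation of an intrusive list; the kernel's list.h
 * helpers are the real thing, these just mimic their shape.
 */
#include <assert.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }

static void list_add_tail(struct list_head *e, struct list_head *h)
{
	e->prev = h->prev;
	e->next = h;
	h->prev->next = e;
	h->prev = e;
}

struct rate_group {
	unsigned int min_rate;
	/* Unlike a bare 'list', 'parent_entry' says which list this is:
	 * the group's entry in the parent (esw groups) list.
	 */
	struct list_head parent_entry;
};

/* Recover the enclosing group from a pointer to its list member,
 * like the kernel's container_of().
 */
#define to_group(ptr) \
	((struct rate_group *)((char *)(ptr) - \
			       offsetof(struct rate_group, parent_entry)))

/* Walk the groups list and return the largest min_rate. */
static unsigned int max_min_rate(struct list_head *groups)
{
	struct list_head *pos;
	unsigned int max = 0;

	for (pos = groups->next; pos != groups; pos = pos->next)
		if (to_group(pos)->min_rate > max)
			max = to_group(pos)->min_rate;
	return max;
}
```

In the kernel, `list_for_each_entry(group, head, parent_entry)` performs exactly this member-to-container recovery, which is why a descriptive member name reads so much better at every call site.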
Signed-off-by: Cosmin Ratiu
Signed-off-by: Tariq Toukan
---
 drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index 8b24076cbdb5..5891a68633af 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -19,7 +19,7 @@ struct mlx5_esw_rate_group {
 	u32 min_rate;
 	/* A computed value indicating relative min_rate between group members. */
 	u32 bw_share;
-	struct list_head list;
+	struct list_head parent_entry;
 	/* The eswitch this group belongs to. */
 	struct mlx5_eswitch *esw;
 	/* Vport members of this group.*/
@@ -128,7 +128,7 @@ static u32 esw_qos_calculate_min_rate_divider(struct mlx5_eswitch *esw)
 	/* Find max min_rate across all esw groups.
 	 * This will correspond to fw_max_bw_share in the final bw_share calculation.
 	 */
-	list_for_each_entry(group, &esw->qos.groups, list) {
+	list_for_each_entry(group, &esw->qos.groups, parent_entry) {
 		if (group->min_rate < max_guarantee || group->tsar_ix == esw->qos.root_tsar_ix)
 			continue;
 		max_guarantee = group->min_rate;
@@ -183,7 +183,7 @@ static int esw_qos_normalize_min_rate(struct mlx5_eswitch *esw, struct netlink_e
 	u32 bw_share;
 	int err;

-	list_for_each_entry(group, &esw->qos.groups, list) {
+	list_for_each_entry(group, &esw->qos.groups, parent_entry) {
 		if (group->tsar_ix == esw->qos.root_tsar_ix)
 			continue;
 		bw_share = esw_qos_calc_bw_share(group->min_rate, divider, fw_max_bw_share);
@@ -452,13 +452,13 @@ __esw_qos_alloc_rate_group(struct mlx5_eswitch *esw, u32 tsar_ix)
 	group->esw = esw;
 	group->tsar_ix = tsar_ix;
 	INIT_LIST_HEAD(&group->members);
-	list_add_tail(&group->list, &esw->qos.groups);
+	list_add_tail(&group->parent_entry, &esw->qos.groups);

 	return group;
 }

 static void __esw_qos_free_rate_group(struct mlx5_esw_rate_group *group)
 {
-	list_del(&group->list);
+	list_del(&group->parent_entry);
 	kfree(group);
 }

From patchwork Tue Oct 8 18:32:19 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13826795
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Tariq Toukan
Subject: [PATCH net-next 11/14] net/mlx5: qos: Store rate groups in a qos domain
Date: Tue, 8 Oct 2024 21:32:19 +0300
Message-ID: <20241008183222.137702-12-tariqt@nvidia.com>
In-Reply-To: <20241008183222.137702-1-tariqt@nvidia.com>
References: <20241008183222.137702-1-tariqt@nvidia.com>

From: Cosmin Ratiu

Groups are currently maintained as a list in their corresponding eswitch, protected by the esw state_lock. The upcoming cross-eswitch scheduling feature cannot work with this approach, as it would require acquiring multiple eswitch locks (in the correct order) in order to maintain group membership.

This commit moves the rate groups into a new 'qos domain' struct and adds explicit qos init/cleanup steps to the eswitch init/cleanup. Upcoming patches will expand the qos domain struct and allow it to be shared between eswitches. For now, qos domains are private to each esw, so the only cost is an extra indirection.
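The consequence of sharing one container between eswitches can be sketched in userspace C (hypothetical simplified types, not the driver's structs): once groups from several eswitches live in one qos-domain container, every walk must skip groups owned by other eswitches, which is exactly what the new `group->esw == esw` / `group->esw != esw` checks in the diff below do.

```c
/* Userspace sketch, assumed simplification of the driver's structures:
 * a qos domain holds groups from potentially many eswitches, so
 * per-eswitch computations must filter on the owning esw.
 */
#include <assert.h>

#define MAX_GROUPS 8

struct esw { int id; };

struct rate_group {
	struct esw *esw;        /* owning eswitch */
	unsigned int min_rate;
};

struct qos_domain {
	/* Shared across eswitches; a flat array stands in for the list. */
	struct rate_group *groups[MAX_GROUPS];
	int ngroups;
};

/* Largest min_rate among this esw's groups only, ignoring groups that
 * belong to other eswitches in the same domain.
 */
static unsigned int esw_max_guarantee(const struct qos_domain *d,
				      const struct esw *esw)
{
	unsigned int max = 0;
	int i;

	for (i = 0; i < d->ngroups; i++) {
		const struct rate_group *g = d->groups[i];

		if (g->esw == esw && g->min_rate > max)
			max = g->min_rate;
	}
	return max;
}
```

With domains still private to each esw, the filter is a no-op; it only starts mattering when a later patch lets two eswitches share a domain.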
Signed-off-by: Cosmin Ratiu
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 58 ++++++++++++++++---
 .../net/ethernet/mellanox/mlx5/core/esw/qos.h |  3 +
 .../net/ethernet/mellanox/mlx5/core/eswitch.c | 12 +++-
 .../net/ethernet/mellanox/mlx5/core/eswitch.h |  3 +-
 4 files changed, 65 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index 5891a68633af..06b3a21a7475 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -11,6 +11,37 @@
 /* Minimum supported BW share value by the HW is 1 Mbit/sec */
 #define MLX5_MIN_BW_SHARE 1

+/* Holds rate groups associated with an E-Switch. */
+struct mlx5_qos_domain {
+	/* List of all mlx5_esw_rate_groups. */
+	struct list_head groups;
+};
+
+static struct mlx5_qos_domain *esw_qos_domain_alloc(void)
+{
+	struct mlx5_qos_domain *qos_domain;
+
+	qos_domain = kzalloc(sizeof(*qos_domain), GFP_KERNEL);
+	if (!qos_domain)
+		return NULL;
+
+	INIT_LIST_HEAD(&qos_domain->groups);
+
+	return qos_domain;
+}
+
+static int esw_qos_domain_init(struct mlx5_eswitch *esw)
+{
+	esw->qos.domain = esw_qos_domain_alloc();
+
+	return esw->qos.domain ? 0 : -ENOMEM;
+}
+
+static void esw_qos_domain_release(struct mlx5_eswitch *esw)
+{
+	kfree(esw->qos.domain);
+	esw->qos.domain = NULL;
+}

 struct mlx5_esw_rate_group {
 	u32 tsar_ix;
@@ -19,6 +50,7 @@ struct mlx5_esw_rate_group {
 	u32 min_rate;
 	/* A computed value indicating relative min_rate between group members. */
 	u32 bw_share;
+	/* Membership in the qos domain 'groups' list. */
 	struct list_head parent_entry;
 	/* The eswitch this group belongs to. */
 	struct mlx5_eswitch *esw;
@@ -128,10 +160,10 @@ static u32 esw_qos_calculate_min_rate_divider(struct mlx5_eswitch *esw)
 	/* Find max min_rate across all esw groups.
 	 * This will correspond to fw_max_bw_share in the final bw_share calculation.
 	 */
-	list_for_each_entry(group, &esw->qos.groups, parent_entry) {
-		if (group->min_rate < max_guarantee || group->tsar_ix == esw->qos.root_tsar_ix)
-			continue;
-		max_guarantee = group->min_rate;
+	list_for_each_entry(group, &esw->qos.domain->groups, parent_entry) {
+		if (group->esw == esw && group->tsar_ix != esw->qos.root_tsar_ix &&
+		    group->min_rate > max_guarantee)
+			max_guarantee = group->min_rate;
 	}

 	if (max_guarantee)
@@ -183,8 +215,8 @@ static int esw_qos_normalize_min_rate(struct mlx5_eswitch *esw, struct netlink_e
 	u32 bw_share;
 	int err;

-	list_for_each_entry(group, &esw->qos.groups, parent_entry) {
-		if (group->tsar_ix == esw->qos.root_tsar_ix)
+	list_for_each_entry(group, &esw->qos.domain->groups, parent_entry) {
+		if (group->esw != esw || group->tsar_ix == esw->qos.root_tsar_ix)
 			continue;
 		bw_share = esw_qos_calc_bw_share(group->min_rate, divider, fw_max_bw_share);
@@ -452,7 +484,7 @@ __esw_qos_alloc_rate_group(struct mlx5_eswitch *esw, u32 tsar_ix)
 	group->esw = esw;
 	group->tsar_ix = tsar_ix;
 	INIT_LIST_HEAD(&group->members);
-	list_add_tail(&group->parent_entry, &esw->qos.groups);
+	list_add_tail(&group->parent_entry, &esw->qos.domain->groups);

 	return group;
 }
@@ -586,7 +618,6 @@ static int esw_qos_create(struct mlx5_eswitch *esw, struct netlink_ext_ack *exta
 		return err;
 	}

-	INIT_LIST_HEAD(&esw->qos.groups);
 	if (MLX5_CAP_QOS(dev, log_esw_max_sched_depth)) {
 		esw->qos.group0 = __esw_qos_create_rate_group(esw, extack);
 	} else {
@@ -868,6 +899,17 @@ static int esw_qos_devlink_rate_to_mbps(struct mlx5_core_dev *mdev, const char *
 	return 0;
 }

+int mlx5_esw_qos_init(struct mlx5_eswitch *esw)
+{
+	return esw_qos_domain_init(esw);
+}
+
+void mlx5_esw_qos_cleanup(struct mlx5_eswitch *esw)
+{
+	if (esw->qos.domain)
+		esw_qos_domain_release(esw);
+}
+
 /* Eswitch devlink rate API */

 int mlx5_esw_devlink_rate_leaf_tx_share_set(struct devlink_rate *rate_leaf, void *priv,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h
index c4f04c3e6a59..44fb339c5dcc 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h
@@ -6,6 +6,9 @@

 #ifdef CONFIG_MLX5_ESWITCH

+int mlx5_esw_qos_init(struct mlx5_eswitch *esw);
+void mlx5_esw_qos_cleanup(struct mlx5_eswitch *esw);
+
 int mlx5_esw_qos_set_vport_rate(struct mlx5_vport *evport, u32 max_rate, u32 min_rate);
 void mlx5_esw_qos_vport_disable(struct mlx5_vport *vport);

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 4a187f39daba..9de819c45d33 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1481,6 +1481,10 @@ int mlx5_eswitch_enable_locked(struct mlx5_eswitch *esw, int num_vfs)
 	MLX5_NB_INIT(&esw->nb, eswitch_vport_event, NIC_VPORT_CHANGE);
 	mlx5_eq_notifier_register(esw->dev, &esw->nb);

+	err = mlx5_esw_qos_init(esw);
+	if (err)
+		goto err_qos_init;
+
 	if (esw->mode == MLX5_ESWITCH_LEGACY) {
 		err = esw_legacy_enable(esw);
 	} else {
@@ -1489,7 +1493,7 @@ int mlx5_eswitch_enable_locked(struct mlx5_eswitch *esw, int num_vfs)
 	}

 	if (err)
-		goto abort;
+		goto err_esw_enable;

 	esw->fdb_table.flags |= MLX5_ESW_FDB_CREATED;

@@ -1503,7 +1507,10 @@ int mlx5_eswitch_enable_locked(struct mlx5_eswitch *esw, int num_vfs)
 	return 0;

-abort:
+err_esw_enable:
+	mlx5_esw_qos_cleanup(esw);
+err_qos_init:
+	mlx5_eq_notifier_unregister(esw->dev, &esw->nb);
 	mlx5_esw_acls_ns_cleanup(esw);
 	return err;
 }
@@ -1631,6 +1638,7 @@ void mlx5_eswitch_disable_locked(struct mlx5_eswitch *esw)

 	if (esw->mode == MLX5_ESWITCH_OFFLOADS)
 		devl_rate_nodes_destroy(devlink);
+	mlx5_esw_qos_cleanup(esw);
 }

 void mlx5_eswitch_disable(struct mlx5_eswitch *esw)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 567276900a37..e57be2eeec85 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -336,6 +336,7 @@ enum {
 };

 struct dentry;
+struct mlx5_qos_domain;

 struct mlx5_eswitch {
 	struct mlx5_core_dev *dev;
@@ -368,12 +369,12 @@ struct mlx5_eswitch {
 		 */
 		refcount_t refcnt;
 		u32 root_tsar_ix;
+		struct mlx5_qos_domain *domain;
 		/* Contains all vports with QoS enabled but no explicit group.
 		 * Cannot be NULL if QoS is enabled, but may be a fake group
 		 * referencing the root TSAR if the esw doesn't support groups.
 		 */
 		struct mlx5_esw_rate_group *group0;
-		struct list_head groups; /* Protected by esw->state_lock */
 	} qos;

 	struct mlx5_esw_bridge_offloads *br_offloads;

From patchwork Tue Oct 8 18:32:20 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13826798
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Tariq Toukan
Subject: [PATCH net-next 12/14] net/mlx5: qos: Refactor locking to a qos domain mutex
Date: Tue, 8 Oct 2024 21:32:20 +0300
Message-ID: <20241008183222.137702-13-tariqt@nvidia.com>
In-Reply-To: <20241008183222.137702-1-tariqt@nvidia.com>
References: <20241008183222.137702-1-tariqt@nvidia.com>
From: Cosmin Ratiu

E-Switch qos changes used the esw state_lock to serialize qos changes. With the introduction of cross-esw scheduling, multiple E-Switches may be involved in a qos operation, so prepare for that by switching locking to use a qos domain mutex.

Add three helper functions:
- esw_qos_lock
- esw_qos_unlock
- esw_assert_qos_lock_held

Convert existing direct lock/unlock/lockdep calls to them, and also call esw_assert_qos_lock_held in a couple more places.

mlx5_esw_qos_set_vport_rate was expected to be called with the esw state_lock already held. Change it to instead acquire the qos lock directly.

mlx5_eswitch_get_vport_config also accessed qos properties with the esw state_lock held. Introduce a new function, mlx5_esw_qos_get_vport_rate, to access those properties under the correct lock, and change mlx5_eswitch_get_vport_config to use it.

Finally, mlx5_esw_qos_vport_disable is called from the cleanup path with the esw state_lock held, so have it additionally acquire the qos lock to make sure there are no races.
Signed-off-by: Cosmin Ratiu
Signed-off-by: Tariq Toukan
---
 .../ethernet/mellanox/mlx5/core/esw/legacy.c  |  6 +-
 .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 92 +++++++++++++------
 .../net/ethernet/mellanox/mlx5/core/esw/qos.h |  1 +
 .../net/ethernet/mellanox/mlx5/core/eswitch.c |  8 +-
 .../net/ethernet/mellanox/mlx5/core/eswitch.h |  6 +-
 5 files changed, 74 insertions(+), 39 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
index 3c8388706e15..288c797e4a78 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
@@ -513,15 +513,11 @@ int mlx5_eswitch_set_vport_rate(struct mlx5_eswitch *esw, u16 vport,
 				u32 max_rate, u32 min_rate)
 {
 	struct mlx5_vport *evport = mlx5_eswitch_get_vport(esw, vport);
-	int err;
 
 	if (!mlx5_esw_allowed(esw))
 		return -EPERM;
 	if (IS_ERR(evport))
 		return PTR_ERR(evport);
 
-	mutex_lock(&esw->state_lock);
-	err = mlx5_esw_qos_set_vport_rate(evport, max_rate, min_rate);
-	mutex_unlock(&esw->state_lock);
-	return err;
+	return mlx5_esw_qos_set_vport_rate(evport, max_rate, min_rate);
 }
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index 06b3a21a7475..be9abeb6e4aa 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -13,10 +13,27 @@
 
 /* Holds rate groups associated with an E-Switch. */
 struct mlx5_qos_domain {
+	/* Serializes access to all qos changes in the qos domain. */
+	struct mutex lock;
 	/* List of all mlx5_esw_rate_groups. */
 	struct list_head groups;
 };
 
+static void esw_qos_lock(struct mlx5_eswitch *esw)
+{
+	mutex_lock(&esw->qos.domain->lock);
+}
+
+static void esw_qos_unlock(struct mlx5_eswitch *esw)
+{
+	mutex_unlock(&esw->qos.domain->lock);
+}
+
+static void esw_assert_qos_lock_held(struct mlx5_eswitch *esw)
+{
+	lockdep_assert_held(&esw->qos.domain->lock);
+}
+
 static struct mlx5_qos_domain *esw_qos_domain_alloc(void)
 {
 	struct mlx5_qos_domain *qos_domain;
@@ -25,6 +42,7 @@ static struct mlx5_qos_domain *esw_qos_domain_alloc(void)
 	if (!qos_domain)
 		return NULL;
 
+	mutex_init(&qos_domain->lock);
 	INIT_LIST_HEAD(&qos_domain->groups);
 
 	return qos_domain;
@@ -249,7 +267,7 @@ static int esw_qos_set_vport_min_rate(struct mlx5_vport *vport,
 	bool min_rate_supported;
 	int err;
 
-	lockdep_assert_held(&esw->state_lock);
+	esw_assert_qos_lock_held(esw);
 	fw_max_bw_share = MLX5_CAP_QOS(vport->dev, max_tsar_bw_share);
 	min_rate_supported = MLX5_CAP_QOS(vport->dev, esw_bw_share) &&
 				fw_max_bw_share >= MLX5_MIN_BW_SHARE;
@@ -275,7 +293,7 @@ static int esw_qos_set_vport_max_rate(struct mlx5_vport *vport,
 	bool max_rate_supported;
 	int err;
 
-	lockdep_assert_held(&esw->state_lock);
+	esw_assert_qos_lock_held(esw);
 	max_rate_supported = MLX5_CAP_QOS(vport->dev, esw_rate_limit);
 
 	if (max_rate && !max_rate_supported)
@@ -451,9 +469,7 @@ static int esw_qos_vport_update_group(struct mlx5_vport *vport,
 	struct mlx5_esw_rate_group *new_group, *curr_group;
 	int err;
 
-	if (!vport->enabled)
-		return -EINVAL;
-
+	esw_assert_qos_lock_held(esw);
 	curr_group = vport->qos.group;
 	new_group = group ?: esw->qos.group0;
 	if (curr_group == new_group)
@@ -552,6 +568,7 @@ esw_qos_create_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *exta
 	struct mlx5_esw_rate_group *group;
 	int err;
 
+	esw_assert_qos_lock_held(esw);
 	if (!MLX5_CAP_QOS(esw->dev, log_esw_max_sched_depth))
 		return ERR_PTR(-EOPNOTSUPP);
 
@@ -665,8 +682,7 @@ static int esw_qos_get(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack)
 {
 	int err = 0;
 
-	lockdep_assert_held(&esw->state_lock);
-
+	esw_assert_qos_lock_held(esw);
 	if (!refcount_inc_not_zero(&esw->qos.refcnt)) {
 		/* esw_qos_create() set refcount to 1 only on success.
 		 * No need to decrement on failure.
@@ -679,7 +695,7 @@ static int esw_qos_get(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack)
 
 static void esw_qos_put(struct mlx5_eswitch *esw)
 {
-	lockdep_assert_held(&esw->state_lock);
+	esw_assert_qos_lock_held(esw);
 	if (refcount_dec_and_test(&esw->qos.refcnt))
 		esw_qos_destroy(esw);
 }
@@ -690,7 +706,7 @@ static int esw_qos_vport_enable(struct mlx5_vport *vport,
 	struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
 	int err;
 
-	lockdep_assert_held(&esw->state_lock);
+	esw_assert_qos_lock_held(esw);
 	if (vport->qos.enabled)
 		return 0;
 
@@ -723,8 +739,9 @@ void mlx5_esw_qos_vport_disable(struct mlx5_vport *vport)
 	int err;
 
 	lockdep_assert_held(&esw->state_lock);
+	esw_qos_lock(esw);
 	if (!vport->qos.enabled)
-		return;
+		goto unlock;
 	WARN(vport->qos.group != esw->qos.group0,
 	     "Disabling QoS on port before detaching it from group");
 
@@ -741,6 +758,8 @@ void mlx5_esw_qos_vport_disable(struct mlx5_vport *vport)
 	trace_mlx5_esw_vport_qos_destroy(dev, vport);
 
 	esw_qos_put(esw);
+unlock:
+	esw_qos_unlock(esw);
 }
 
 int mlx5_esw_qos_set_vport_rate(struct mlx5_vport *vport, u32 max_rate, u32 min_rate)
@@ -748,17 +767,34 @@ int mlx5_esw_qos_set_vport_rate(struct mlx5_vport *vport, u32 max_rate, u32 min_
 	struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
 	int err;
 
-	lockdep_assert_held(&esw->state_lock);
+	esw_qos_lock(esw);
 	err = esw_qos_vport_enable(vport, 0, 0, NULL);
 	if (err)
-		return err;
+		goto unlock;
 
 	err = esw_qos_set_vport_min_rate(vport, min_rate, NULL);
 	if (!err)
 		err = esw_qos_set_vport_max_rate(vport, max_rate, NULL);
+unlock:
+	esw_qos_unlock(esw);
 	return err;
 }
 
+bool mlx5_esw_qos_get_vport_rate(struct mlx5_vport *vport, u32 *max_rate, u32 *min_rate)
+{
+	struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
+	bool enabled;
+
+	esw_qos_lock(esw);
+	enabled = vport->qos.enabled;
+	if (enabled) {
+		*max_rate = vport->qos.max_rate;
+		*min_rate = vport->qos.min_rate;
+	}
+	esw_qos_unlock(esw);
+	return enabled;
+}
+
 static u32 mlx5_esw_qos_lag_link_speed_get_locked(struct mlx5_core_dev *mdev)
 {
 	struct ethtool_link_ksettings lksettings;
@@ -846,7 +882,7 @@ int mlx5_esw_qos_modify_vport_rate(struct mlx5_eswitch *esw, u16 vport_num, u32
 		return err;
 	}
 
-	mutex_lock(&esw->state_lock);
+	esw_qos_lock(esw);
 	if (!vport->qos.enabled) {
 		/* Eswitch QoS wasn't enabled yet. Enable it and vport QoS. */
 		err = esw_qos_vport_enable(vport, rate_mbps, vport->qos.bw_share, NULL);
@@ -861,7 +897,7 @@ int mlx5_esw_qos_modify_vport_rate(struct mlx5_eswitch *esw, u16 vport_num, u32
 						       vport->qos.esw_sched_elem_ix,
 						       bitmask);
 	}
-	mutex_unlock(&esw->state_lock);
+	esw_qos_unlock(esw);
 
 	return err;
 }
@@ -927,14 +963,14 @@ int mlx5_esw_devlink_rate_leaf_tx_share_set(struct devlink_rate *rate_leaf, void
 	if (err)
 		return err;
 
-	mutex_lock(&esw->state_lock);
+	esw_qos_lock(esw);
 	err = esw_qos_vport_enable(vport, 0, 0, extack);
 	if (err)
 		goto unlock;
 
 	err = esw_qos_set_vport_min_rate(vport, tx_share, extack);
 unlock:
-	mutex_unlock(&esw->state_lock);
+	esw_qos_unlock(esw);
 	return err;
 }
 
@@ -953,14 +989,14 @@ int mlx5_esw_devlink_rate_leaf_tx_max_set(struct devlink_rate *rate_leaf, void *
 	if (err)
 		return err;
 
-	mutex_lock(&esw->state_lock);
+	esw_qos_lock(esw);
 	err = esw_qos_vport_enable(vport, 0, 0, extack);
 	if (err)
 		goto unlock;
 
 	err = esw_qos_set_vport_max_rate(vport, tx_max, extack);
 unlock:
-	mutex_unlock(&esw->state_lock);
+	esw_qos_unlock(esw);
 	return err;
 }
 
@@ -975,9 +1011,9 @@ int mlx5_esw_devlink_rate_node_tx_share_set(struct devlink_rate *rate_node, void
 	if (err)
 		return err;
 
-	mutex_lock(&esw->state_lock);
+	esw_qos_lock(esw);
 	err = esw_qos_set_group_min_rate(group, tx_share, extack);
-	mutex_unlock(&esw->state_lock);
+	esw_qos_unlock(esw);
 	return err;
 }
 
@@ -992,9 +1028,9 @@ int mlx5_esw_devlink_rate_node_tx_max_set(struct devlink_rate *rate_node, void *
 	if (err)
 		return err;
 
-	mutex_lock(&esw->state_lock);
+	esw_qos_lock(esw);
 	err = esw_qos_set_group_max_rate(group, tx_max, extack);
-	mutex_unlock(&esw->state_lock);
+	esw_qos_unlock(esw);
 	return err;
 }
 
@@ -1009,7 +1045,7 @@ int mlx5_esw_devlink_rate_node_new(struct devlink_rate *rate_node, void **priv,
 	if (IS_ERR(esw))
 		return PTR_ERR(esw);
 
-	mutex_lock(&esw->state_lock);
+	esw_qos_lock(esw);
 	if (esw->mode != MLX5_ESWITCH_OFFLOADS) {
 		NL_SET_ERR_MSG_MOD(extack,
 				   "Rate node creation supported only in switchdev mode");
@@ -1025,7 +1061,7 @@ int mlx5_esw_devlink_rate_node_new(struct devlink_rate *rate_node, void **priv,
 	*priv = group;
 unlock:
-	mutex_unlock(&esw->state_lock);
+	esw_qos_unlock(esw);
 	return err;
 }
 
@@ -1036,10 +1072,10 @@ int mlx5_esw_devlink_rate_node_del(struct devlink_rate *rate_node, void *priv,
 	struct mlx5_eswitch *esw = group->esw;
 	int err;
 
-	mutex_lock(&esw->state_lock);
+	esw_qos_lock(esw);
 	err = __esw_qos_destroy_rate_group(group, extack);
 	esw_qos_put(esw);
-	mutex_unlock(&esw->state_lock);
+	esw_qos_unlock(esw);
 	return err;
 }
 
@@ -1055,7 +1091,7 @@ int mlx5_esw_qos_vport_update_group(struct mlx5_vport *vport,
 		return -EOPNOTSUPP;
 	}
 
-	mutex_lock(&esw->state_lock);
+	esw_qos_lock(esw);
 	if (!vport->qos.enabled && !group)
 		goto unlock;
 
@@ -1063,7 +1099,7 @@ int mlx5_esw_qos_vport_update_group(struct mlx5_vport *vport,
 	if (!err)
 		err = esw_qos_vport_update_group(vport, group, extack);
 unlock:
-	mutex_unlock(&esw->state_lock);
+	esw_qos_unlock(esw);
 	return err;
 }
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h
index 44fb339c5dcc..b4045efbaf9e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h
@@ -10,6 +10,7 @@ int mlx5_esw_qos_init(struct mlx5_eswitch *esw);
 void mlx5_esw_qos_cleanup(struct mlx5_eswitch *esw);
 
 int mlx5_esw_qos_set_vport_rate(struct mlx5_vport *evport, u32 max_rate, u32 min_rate);
+bool mlx5_esw_qos_get_vport_rate(struct mlx5_vport *vport, u32 *max_rate, u32 *min_rate);
 void mlx5_esw_qos_vport_disable(struct mlx5_vport *vport);
 
 int mlx5_esw_devlink_rate_leaf_tx_share_set(struct devlink_rate *rate_leaf, void *priv,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 9de819c45d33..2bcd42305f46 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -2068,6 +2068,7 @@ int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw,
 				  u16 vport, struct ifla_vf_info *ivi)
 {
 	struct mlx5_vport *evport = mlx5_eswitch_get_vport(esw, vport);
+	u32 max_rate, min_rate;
 
 	if (IS_ERR(evport))
 		return PTR_ERR(evport);
@@ -2082,9 +2083,10 @@ int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw,
 	ivi->qos = evport->info.qos;
 	ivi->spoofchk = evport->info.spoofchk;
 	ivi->trusted = evport->info.trusted;
-	if (evport->qos.enabled) {
-		ivi->min_tx_rate = evport->qos.min_rate;
-		ivi->max_tx_rate = evport->qos.max_rate;
+
+	if (mlx5_esw_qos_get_vport_rate(evport, &max_rate, &min_rate)) {
+		ivi->max_tx_rate = max_rate;
+		ivi->min_tx_rate = min_rate;
 	}
 	mutex_unlock(&esw->state_lock);
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index e57be2eeec85..3b901bd36d4b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -212,6 +212,7 @@ struct mlx5_vport {
 
 	struct mlx5_vport_info  info;
 
+	/* Protected with the E-Switch qos domain lock. */
 	struct {
 		/* Initially false, set to true whenever any QoS features are used. */
 		bool enabled;
@@ -363,10 +364,9 @@ struct mlx5_eswitch {
 	struct rw_semaphore mode_lock;
 	atomic64_t user_count;
 
+	/* Protected with the E-Switch qos domain lock. */
 	struct {
-		/* Protected by esw->state_lock.
-		 * Initially 0, meaning no QoS users and QoS is disabled.
-		 */
+		/* Initially 0, meaning no QoS users and QoS is disabled. */
 		refcount_t refcnt;
 		u32 root_tsar_ix;
 		struct mlx5_qos_domain *domain;

From patchwork Tue Oct 8 18:32:21 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13826796
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Tariq Toukan
Subject: [PATCH net-next 13/14] net/mlx5: Unify QoS element type checks across NIC and E-Switch
Date: Tue, 8 Oct 2024 21:32:21 +0300
Message-ID: <20241008183222.137702-14-tariqt@nvidia.com>
In-Reply-To: <20241008183222.137702-1-tariqt@nvidia.com>
References: <20241008183222.137702-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
From: Carolina Jubran

Refactor the QoS element type support check by introducing a new function, mlx5_qos_element_type_supported(), which handles element type validation for both NIC and E-Switch schedulers.

This change removes the redundant esw_qos_element_type_supported() function and unifies the element type checks into a single implementation.

Signed-off-by: Carolina Jubran
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 27 ++++------------
 .../ethernet/mellanox/mlx5/core/mlx5_core.h   |  1 +
 drivers/net/ethernet/mellanox/mlx5/core/qos.c |  8 +++--
 drivers/net/ethernet/mellanox/mlx5/core/rl.c  | 31 +++++++++++++++++++
 4 files changed, 44 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index be9abeb6e4aa..ea68d86ea6ea 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -371,25 +371,6 @@ static int esw_qos_set_group_max_rate(struct mlx5_esw_rate_group *group,
 	return err;
 }
 
-static bool esw_qos_element_type_supported(struct mlx5_core_dev *dev, int type)
-{
-	switch (type) {
-	case SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR:
-		return MLX5_CAP_QOS(dev, esw_element_type) &
-		       ELEMENT_TYPE_CAP_MASK_TSAR;
-	case SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT:
-		return MLX5_CAP_QOS(dev, esw_element_type) &
-		       ELEMENT_TYPE_CAP_MASK_VPORT;
-	case SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT_TC:
-		return MLX5_CAP_QOS(dev, esw_element_type) &
-		       ELEMENT_TYPE_CAP_MASK_VPORT_TC;
-	case SCHEDULING_CONTEXT_ELEMENT_TYPE_PARA_VPORT_TC:
-		return MLX5_CAP_QOS(dev, esw_element_type) &
-		       ELEMENT_TYPE_CAP_MASK_PARA_VPORT_TC;
-	}
-	return false;
-}
-
 static int esw_qos_vport_create_sched_element(struct mlx5_vport *vport,
 					      u32 max_rate, u32 bw_share)
 {
@@ -399,7 +380,9 @@ static int esw_qos_vport_create_sched_element(struct mlx5_vport *vport,
 	void *attr;
 	int err;
 
-	if (!esw_qos_element_type_supported(dev, SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT))
+	if (!mlx5_qos_element_type_supported(dev,
+					     SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT,
+					     SCHEDULING_HIERARCHY_E_SWITCH))
 		return -EOPNOTSUPP;
 
 	MLX5_SET(scheduling_context, sched_ctx, element_type,
@@ -616,7 +599,9 @@ static int esw_qos_create(struct mlx5_eswitch *esw, struct netlink_ext_ack *exta
 	if (!MLX5_CAP_GEN(dev, qos) || !MLX5_CAP_QOS(dev, esw_scheduling))
 		return -EOPNOTSUPP;
 
-	if (!esw_qos_element_type_supported(dev, SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR) ||
+	if (!mlx5_qos_element_type_supported(dev,
+					     SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR,
+					     SCHEDULING_HIERARCHY_E_SWITCH) ||
 	    !(MLX5_CAP_QOS(dev, esw_tsar_type) & TSAR_TYPE_CAP_MASK_DWRR))
 		return -EOPNOTSUPP;
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
index 62c770b0eaa8..5bb62051adc2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
@@ -224,6 +224,7 @@ void mlx5_sriov_disable(struct pci_dev *pdev, bool num_vf_change);
 int mlx5_core_sriov_set_msix_vec_count(struct pci_dev *vf, int msix_vec_count);
 int mlx5_core_enable_hca(struct mlx5_core_dev *dev, u16 func_id);
 int mlx5_core_disable_hca(struct mlx5_core_dev *dev, u16 func_id);
+bool mlx5_qos_element_type_supported(struct mlx5_core_dev *dev, int type, u8 hierarchy);
 int mlx5_create_scheduling_element_cmd(struct mlx5_core_dev *dev, u8 hierarchy,
 				       void *context, u32 *element_id);
 int mlx5_modify_scheduling_element_cmd(struct mlx5_core_dev *dev, u8 hierarchy,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/qos.c
index db2bd3ad63ba..4d353da3eb7b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/qos.c
@@ -28,7 +28,9 @@ int mlx5_qos_create_leaf_node(struct mlx5_core_dev *mdev, u32 parent_id,
 {
 	u32 sched_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {0};
 
-	if (!(MLX5_CAP_QOS(mdev, nic_element_type) & ELEMENT_TYPE_CAP_MASK_QUEUE_GROUP))
+	if (!mlx5_qos_element_type_supported(mdev,
+					     SCHEDULING_CONTEXT_ELEMENT_TYPE_QUEUE_GROUP,
+					     SCHEDULING_HIERARCHY_NIC))
 		return -EOPNOTSUPP;
 
 	MLX5_SET(scheduling_context, sched_ctx, parent_element_id, parent_id);
@@ -47,7 +49,9 @@ int mlx5_qos_create_inner_node(struct mlx5_core_dev *mdev, u32 parent_id,
 	u32 sched_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {0};
 	void *attr;
 
-	if (!(MLX5_CAP_QOS(mdev, nic_element_type) & ELEMENT_TYPE_CAP_MASK_TSAR) ||
+	if (!mlx5_qos_element_type_supported(mdev,
+					     SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR,
+					     SCHEDULING_HIERARCHY_NIC) ||
 	    !(MLX5_CAP_QOS(mdev, nic_tsar_type) & TSAR_TYPE_CAP_MASK_DWRR))
 		return -EOPNOTSUPP;
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/rl.c b/drivers/net/ethernet/mellanox/mlx5/core/rl.c
index 9f8b4005f4bd..efadd575fb35 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/rl.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/rl.c
@@ -34,6 +34,37 @@
 #include
 #include "mlx5_core.h"
 
+bool mlx5_qos_element_type_supported(struct mlx5_core_dev *dev, int type, u8 hierarchy)
+{
+	int cap;
+
+	switch (hierarchy) {
+	case SCHEDULING_HIERARCHY_E_SWITCH:
+		cap = MLX5_CAP_QOS(dev, esw_element_type);
+		break;
+	case SCHEDULING_HIERARCHY_NIC:
+		cap = MLX5_CAP_QOS(dev, nic_element_type);
+		break;
+	default:
+		return false;
+	}
+
+	switch (type) {
+	case SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR:
+		return cap & ELEMENT_TYPE_CAP_MASK_TSAR;
+	case SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT:
+		return cap & ELEMENT_TYPE_CAP_MASK_VPORT;
+	case SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT_TC:
+		return cap & ELEMENT_TYPE_CAP_MASK_VPORT_TC;
+	case SCHEDULING_CONTEXT_ELEMENT_TYPE_PARA_VPORT_TC:
+		return cap & ELEMENT_TYPE_CAP_MASK_PARA_VPORT_TC;
+	case SCHEDULING_CONTEXT_ELEMENT_TYPE_QUEUE_GROUP:
+		return cap & ELEMENT_TYPE_CAP_MASK_QUEUE_GROUP;
+	}
+
+	return false;
+}
+
 /* Scheduling element fw management */
 int mlx5_create_scheduling_element_cmd(struct mlx5_core_dev *dev, u8 hierarchy,
 				       void *ctx, u32 *element_id)

From patchwork Tue Oct 8 18:32:22 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13826797
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Tariq Toukan
Subject: [PATCH net-next 14/14] net/mlx5: Add support check for TSAR types in QoS scheduling
Date: Tue, 8 Oct 2024 21:32:22 +0300
Message-ID: <20241008183222.137702-15-tariqt@nvidia.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20241008183222.137702-1-tariqt@nvidia.com>
References: <20241008183222.137702-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
From: Carolina Jubran

Introduce a new function, mlx5_qos_tsar_type_supported(), to handle the
validation of TSAR types within QoS scheduling contexts.

Refactor the existing code to use this new function, replacing direct
checks for TSAR type support in the NIC scheduling hierarchy.

Signed-off-by: Carolina Jubran
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/esw/qos.c |  4 ++-
 .../ethernet/mellanox/mlx5/core/mlx5_core.h   |  1 +
 drivers/net/ethernet/mellanox/mlx5/core/qos.c |  4 ++-
 drivers/net/ethernet/mellanox/mlx5/core/rl.c  | 27 +++++++++++++++++++
 4 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index ea68d86ea6ea..ee6f76a6f0b5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -602,7 +602,9 @@ static int esw_qos_create(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack)
 	if (!mlx5_qos_element_type_supported(dev,
 					     SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR,
 					     SCHEDULING_HIERARCHY_E_SWITCH) ||
-	    !(MLX5_CAP_QOS(dev, esw_tsar_type) & TSAR_TYPE_CAP_MASK_DWRR))
+	    !mlx5_qos_tsar_type_supported(dev,
+					  TSAR_ELEMENT_TSAR_TYPE_DWRR,
+					  SCHEDULING_HIERARCHY_E_SWITCH))
 		return -EOPNOTSUPP;
 
 	MLX5_SET(scheduling_context, tsar_ctx, element_type,

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
index 5bb62051adc2..99de67c3aa74 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
@@ -225,6 +225,7 @@ int mlx5_core_sriov_set_msix_vec_count(struct pci_dev *vf, int msix_vec_count);
 int mlx5_core_enable_hca(struct mlx5_core_dev *dev, u16 func_id);
 int mlx5_core_disable_hca(struct mlx5_core_dev *dev, u16 func_id);
 bool mlx5_qos_element_type_supported(struct mlx5_core_dev *dev, int type, u8 hierarchy);
+bool mlx5_qos_tsar_type_supported(struct mlx5_core_dev *dev, int type, u8 hierarchy);
 int mlx5_create_scheduling_element_cmd(struct mlx5_core_dev *dev, u8 hierarchy,
				       void *context, u32 *element_id);
 int mlx5_modify_scheduling_element_cmd(struct mlx5_core_dev *dev, u8 hierarchy,

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/qos.c
index 4d353da3eb7b..6be9981bb6b1 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/qos.c
@@ -52,7 +52,9 @@ int mlx5_qos_create_inner_node(struct mlx5_core_dev *mdev, u32 parent_id,
 	if (!mlx5_qos_element_type_supported(mdev,
 					     SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR,
 					     SCHEDULING_HIERARCHY_NIC) ||
-	    !(MLX5_CAP_QOS(mdev, nic_tsar_type) & TSAR_TYPE_CAP_MASK_DWRR))
+	    !mlx5_qos_tsar_type_supported(mdev,
+					  TSAR_ELEMENT_TSAR_TYPE_DWRR,
+					  SCHEDULING_HIERARCHY_NIC))
 		return -EOPNOTSUPP;
 
 	MLX5_SET(scheduling_context, sched_ctx, parent_element_id, parent_id);

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/rl.c b/drivers/net/ethernet/mellanox/mlx5/core/rl.c
index efadd575fb35..e393391966e0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/rl.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/rl.c
@@ -34,6 +34,33 @@
 #include
 #include "mlx5_core.h"
 
+bool mlx5_qos_tsar_type_supported(struct mlx5_core_dev *dev, int type, u8 hierarchy)
+{
+	int cap;
+
+	switch (hierarchy) {
+	case SCHEDULING_HIERARCHY_E_SWITCH:
+		cap = MLX5_CAP_QOS(dev, esw_tsar_type);
+		break;
+	case SCHEDULING_HIERARCHY_NIC:
+		cap = MLX5_CAP_QOS(dev, nic_tsar_type);
+		break;
+	default:
+		return false;
+	}
+
+	switch (type) {
+	case TSAR_ELEMENT_TSAR_TYPE_DWRR:
+		return cap & TSAR_TYPE_CAP_MASK_DWRR;
+	case TSAR_ELEMENT_TSAR_TYPE_ROUND_ROBIN:
+		return cap & TSAR_TYPE_CAP_MASK_ROUND_ROBIN;
+	case TSAR_ELEMENT_TSAR_TYPE_ETS:
+		return cap & TSAR_TYPE_CAP_MASK_ETS;
+	}
+
+	return false;
+}
+
 bool mlx5_qos_element_type_supported(struct mlx5_core_dev *dev, int type, u8 hierarchy)
 {
	int cap;