From patchwork Sun Oct 13 06:45:26 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13833688
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Tariq Toukan
Subject: [PATCH net-next 01/15] net/mlx5: Refactor QoS group scheduling element creation
Date: Sun, 13 Oct 2024 09:45:26 +0300
Message-ID: <20241013064540.170722-2-tariqt@nvidia.com>
In-Reply-To: <20241013064540.170722-1-tariqt@nvidia.com>
References: <20241013064540.170722-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Carolina Jubran

Introduce `esw_qos_create_group_sched_elem` to handle the creation of
group scheduling elements for the E-Switch QoS Transmit Scheduling
Arbiter (TSAR). This reduces duplication and simplifies the code for
TSAR setup.
Signed-off-by: Carolina Jubran
Reviewed-by: Cosmin Ratiu
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 63 +++++++++----------
 1 file changed, 30 insertions(+), 33 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index ee6f76a6f0b5..e357ccd7bfd3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -371,6 +371,33 @@ static int esw_qos_set_group_max_rate(struct mlx5_esw_rate_group *group,
 	return err;
 }
 
+static int esw_qos_create_group_sched_elem(struct mlx5_core_dev *dev, u32 parent_element_id,
+					   u32 *tsar_ix)
+{
+	u32 tsar_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {};
+	void *attr;
+
+	if (!mlx5_qos_element_type_supported(dev,
+					     SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR,
+					     SCHEDULING_HIERARCHY_E_SWITCH) ||
+	    !mlx5_qos_tsar_type_supported(dev,
+					  TSAR_ELEMENT_TSAR_TYPE_DWRR,
+					  SCHEDULING_HIERARCHY_E_SWITCH))
+		return -EOPNOTSUPP;
+
+	MLX5_SET(scheduling_context, tsar_ctx, element_type,
+		 SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR);
+	MLX5_SET(scheduling_context, tsar_ctx, parent_element_id,
+		 parent_element_id);
+	attr = MLX5_ADDR_OF(scheduling_context, tsar_ctx, element_attributes);
+	MLX5_SET(tsar_element, attr, tsar_type, TSAR_ELEMENT_TSAR_TYPE_DWRR);
+
+	return mlx5_create_scheduling_element_cmd(dev,
+						  SCHEDULING_HIERARCHY_E_SWITCH,
+						  tsar_ctx,
+						  tsar_ix);
+}
+
 static int esw_qos_vport_create_sched_element(struct mlx5_vport *vport,
 					      u32 max_rate, u32 bw_share)
 {
@@ -496,21 +523,10 @@ static void __esw_qos_free_rate_group(struct mlx5_esw_rate_group *group)
 static struct mlx5_esw_rate_group *
 __esw_qos_create_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack)
 {
-	u32 tsar_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {};
 	struct mlx5_esw_rate_group *group;
-	int tsar_ix, err;
-	void *attr;
+	u32 tsar_ix, err;
 
-	MLX5_SET(scheduling_context, tsar_ctx, element_type,
-		 SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR);
-	MLX5_SET(scheduling_context, tsar_ctx, parent_element_id,
-		 esw->qos.root_tsar_ix);
-	attr = MLX5_ADDR_OF(scheduling_context, tsar_ctx, element_attributes);
-	MLX5_SET(tsar_element, attr, tsar_type, TSAR_ELEMENT_TSAR_TYPE_DWRR);
-	err = mlx5_create_scheduling_element_cmd(esw->dev,
-						 SCHEDULING_HIERARCHY_E_SWITCH,
-						 tsar_ctx,
-						 &tsar_ix);
+	err = esw_qos_create_group_sched_elem(esw->dev, esw->qos.root_tsar_ix, &tsar_ix);
 	if (err) {
 		NL_SET_ERR_MSG_MOD(extack, "E-Switch create TSAR for group failed");
 		return ERR_PTR(err);
@@ -591,32 +607,13 @@ static int __esw_qos_destroy_rate_group(struct mlx5_esw_rate_group *group,
 
 static int esw_qos_create(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack)
 {
-	u32 tsar_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {};
 	struct mlx5_core_dev *dev = esw->dev;
-	void *attr;
 	int err;
 
 	if (!MLX5_CAP_GEN(dev, qos) || !MLX5_CAP_QOS(dev, esw_scheduling))
 		return -EOPNOTSUPP;
 
-	if (!mlx5_qos_element_type_supported(dev,
-					     SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR,
-					     SCHEDULING_HIERARCHY_E_SWITCH) ||
-	    !mlx5_qos_tsar_type_supported(dev,
-					  TSAR_ELEMENT_TSAR_TYPE_DWRR,
-					  SCHEDULING_HIERARCHY_E_SWITCH))
-		return -EOPNOTSUPP;
-
-	MLX5_SET(scheduling_context, tsar_ctx, element_type,
-		 SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR);
-
-	attr = MLX5_ADDR_OF(scheduling_context, tsar_ctx, element_attributes);
-	MLX5_SET(tsar_element, attr, tsar_type, TSAR_ELEMENT_TSAR_TYPE_DWRR);
-
-	err = mlx5_create_scheduling_element_cmd(dev,
-						 SCHEDULING_HIERARCHY_E_SWITCH,
-						 tsar_ctx,
-						 &esw->qos.root_tsar_ix);
+	err = esw_qos_create_group_sched_elem(esw->dev, 0, &esw->qos.root_tsar_ix);
 	if (err) {
 		esw_warn(dev, "E-Switch create root TSAR failed (%d)\n", err);
 		return err;

From patchwork Sun Oct 13 06:45:27 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13833689
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Tariq Toukan
Subject: [PATCH net-next 02/15] net/mlx5: Introduce node type to rate group structure
Date: Sun, 13 Oct 2024 09:45:27 +0300
Message-ID: <20241013064540.170722-3-tariqt@nvidia.com>
In-Reply-To: <20241013064540.170722-1-tariqt@nvidia.com>
References: <20241013064540.170722-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Carolina Jubran

Introduce the `sched_node_type` enum to represent both the group and
its members as scheduling nodes in the rate hierarchy.

Add the `type` field to the rate group structure to specify the type of
the node's membership in the rate hierarchy.

Generalize comments to reflect this flexibility within the rate group
structure.
Signed-off-by: Carolina Jubran
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 28 ++++++++++++-------
 1 file changed, 18 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index e357ccd7bfd3..b2b60b0b6506 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -61,6 +61,10 @@ static void esw_qos_domain_release(struct mlx5_eswitch *esw)
 	esw->qos.domain = NULL;
 }
 
+enum sched_node_type {
+	SCHED_NODE_TYPE_VPORTS_TSAR,
+};
+
 struct mlx5_esw_rate_group {
 	u32 tsar_ix;
 	/* Bandwidth parameters. */
@@ -68,11 +72,13 @@ struct mlx5_esw_rate_group {
 	u32 min_rate;
 	/* A computed value indicating relative min_rate between group members. */
 	u32 bw_share;
-	/* Membership in the qos domain 'groups' list. */
+	/* Membership in the parent list. */
 	struct list_head parent_entry;
+	/* The type of this group node in the rate hierarchy. */
+	enum sched_node_type type;
 	/* The eswitch this group belongs to. */
 	struct mlx5_eswitch *esw;
-	/* Vport members of this group.*/
+	/* Members of this group.*/
 	struct list_head members;
 };
 
@@ -499,7 +505,7 @@ static int esw_qos_vport_update_group(struct mlx5_vport *vport,
 }
 
 static struct mlx5_esw_rate_group *
-__esw_qos_alloc_rate_group(struct mlx5_eswitch *esw, u32 tsar_ix)
+__esw_qos_alloc_rate_group(struct mlx5_eswitch *esw, u32 tsar_ix, enum sched_node_type type)
 {
 	struct mlx5_esw_rate_group *group;
 
@@ -509,6 +515,7 @@ __esw_qos_alloc_rate_group(struct mlx5_eswitch *esw, u32 tsar_ix)
 
 	group->esw = esw;
 	group->tsar_ix = tsar_ix;
+	group->type = type;
 	INIT_LIST_HEAD(&group->members);
 	list_add_tail(&group->parent_entry, &esw->qos.domain->groups);
 	return group;
@@ -521,7 +528,7 @@ static void __esw_qos_free_rate_group(struct mlx5_esw_rate_group *group)
 
 static struct mlx5_esw_rate_group *
-__esw_qos_create_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack)
+__esw_qos_create_vports_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack)
 {
 	struct mlx5_esw_rate_group *group;
 	u32 tsar_ix, err;
@@ -532,7 +539,7 @@ __esw_qos_create_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack)
 		return ERR_PTR(err);
 	}
 
-	group = __esw_qos_alloc_rate_group(esw, tsar_ix);
+	group = __esw_qos_alloc_rate_group(esw, tsar_ix, SCHED_NODE_TYPE_VPORTS_TSAR);
 	if (!group) {
 		NL_SET_ERR_MSG_MOD(extack, "E-Switch alloc group failed");
 		err = -ENOMEM;
@@ -562,7 +569,7 @@ static int esw_qos_get(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack)
 static void esw_qos_put(struct mlx5_eswitch *esw);
 
 static struct mlx5_esw_rate_group *
-esw_qos_create_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack)
+esw_qos_create_vports_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack)
 {
 	struct mlx5_esw_rate_group *group;
 	int err;
@@ -575,7 +582,7 @@ esw_qos_create_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack)
 	if (err)
 		return ERR_PTR(err);
 
-	group = __esw_qos_create_rate_group(esw, extack);
+	group = __esw_qos_create_vports_rate_group(esw, extack);
 	if (IS_ERR(group))
 		esw_qos_put(esw);
 
@@ -620,12 +627,13 @@ static int esw_qos_create(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack)
 	}
 
 	if (MLX5_CAP_QOS(dev, log_esw_max_sched_depth)) {
-		esw->qos.group0 = __esw_qos_create_rate_group(esw, extack);
+		esw->qos.group0 = __esw_qos_create_vports_rate_group(esw, extack);
 	} else {
 		/* The eswitch doesn't support scheduling groups.
 		 * Create a software-only group0 using the root TSAR to attach vport QoS to.
 		 */
-		if (!__esw_qos_alloc_rate_group(esw, esw->qos.root_tsar_ix))
+		if (!__esw_qos_alloc_rate_group(esw, esw->qos.root_tsar_ix,
+						SCHED_NODE_TYPE_VPORTS_TSAR))
 			esw->qos.group0 = ERR_PTR(-ENOMEM);
 	}
 	if (IS_ERR(esw->qos.group0)) {
@@ -1037,7 +1045,7 @@ int mlx5_esw_devlink_rate_node_new(struct devlink_rate *rate_node, void **priv,
 		goto unlock;
 	}
 
-	group = esw_qos_create_rate_group(esw, extack);
+	group = esw_qos_create_vports_rate_group(esw, extack);
 	if (IS_ERR(group)) {
 		err = PTR_ERR(group);
 		goto unlock;

From patchwork Sun Oct 13 06:45:28 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13833690
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Tariq Toukan
Subject: [PATCH net-next 03/15] net/mlx5: Add parent group support in rate group structure
Date: Sun, 13 Oct 2024 09:45:28 +0300
Message-ID: <20241013064540.170722-4-tariqt@nvidia.com>
In-Reply-To: <20241013064540.170722-1-tariqt@nvidia.com>
References: <20241013064540.170722-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
xCmE2v9pjsMGMNwzabaSC+4X1EFIpbomO9z3alBHll6env7bJKpdbKfFC1nq1huZDLnRY7SlxdWPScsFQcYGEM4aGVLhETp2T0KOmHxlc1SS9WuNGyY2pnMIsXJfstiCPspns2f8qrqp/U8BgtLOBvwnrsivM/LiKvK6qqtu8du6ujejMANXmqzM5mwwiE5DfaFmU0Vk5v8uxq+wa/itsjUxDcGhTn7LHuwPshZQFlwO1SGjIC1svpg4cluZFRY4yUZsE3jRciTsOYRzXTYppDw0r7x6W8YtyTXM1AgkwHUd6HUtJMrj0gSmofEhK7Rh9Cmj55C/XSrkggaLtIGC1w1A1BQD2E0r3KW4QbT6DxAW/GNzrVehsswRfNX0mlpyE5e/6q7fLN6+vwUXS5 X-Forefront-Antispam-Report: CIP:216.228.118.232;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:mail.nvidia.com;PTR:dc7edge1.nvidia.com;CAT:NONE;SFS:(13230040)(376014)(1800799024)(36860700013)(82310400026);DIR:OUT;SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Oct 2024 06:46:32.5489 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: e77633c5-5c54-490e-8588-08dceb52c66c X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a;Ip=[216.228.118.232];Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: CH2PEPF0000013F.namprd02.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR12MB7341 X-Patchwork-Delegate: kuba@kernel.org From: Carolina Jubran Introduce a `parent` field in the `mlx5_esw_rate_group` structure to support hierarchical group relationships. The `parent` can reference another group or be set to `NULL`, indicating the group is connected to the root TSAR. This change enables the ability to manage groups in a hierarchical structure for future enhancements. 
Signed-off-by: Carolina Jubran
Reviewed-by: Cosmin Ratiu
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index b2b60b0b6506..e9ddd7f4ac80 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -72,6 +72,8 @@ struct mlx5_esw_rate_group {
 	u32 min_rate;
 	/* A computed value indicating relative min_rate between group members. */
 	u32 bw_share;
+	/* The parent group of this group. */
+	struct mlx5_esw_rate_group *parent;
 	/* Membership in the parent list. */
 	struct list_head parent_entry;
 	/* The type of this group node in the rate hierarchy. */
@@ -505,7 +507,8 @@ static int esw_qos_vport_update_group(struct mlx5_vport *vport,
 }
 
 static struct mlx5_esw_rate_group *
-__esw_qos_alloc_rate_group(struct mlx5_eswitch *esw, u32 tsar_ix, enum sched_node_type type)
+__esw_qos_alloc_rate_group(struct mlx5_eswitch *esw, u32 tsar_ix, enum sched_node_type type,
+			   struct mlx5_esw_rate_group *parent)
 {
 	struct mlx5_esw_rate_group *group;
 
@@ -516,6 +519,7 @@ __esw_qos_alloc_rate_group(struct mlx5_eswitch *esw, u32 tsar_ix, enum sched_nod
 	group->esw = esw;
 	group->tsar_ix = tsar_ix;
 	group->type = type;
+	group->parent = parent;
 	INIT_LIST_HEAD(&group->members);
 	list_add_tail(&group->parent_entry, &esw->qos.domain->groups);
 	return group;
@@ -528,7 +532,8 @@ static void __esw_qos_free_rate_group(struct mlx5_esw_rate_group *group)
 }
 
 static struct mlx5_esw_rate_group *
-__esw_qos_create_vports_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack)
+__esw_qos_create_vports_rate_group(struct mlx5_eswitch *esw, struct mlx5_esw_rate_group *parent,
+				   struct netlink_ext_ack *extack)
 {
 	struct mlx5_esw_rate_group *group;
 	u32 tsar_ix, err;
@@ -539,7 +544,7 @@ __esw_qos_create_vports_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_
 		return ERR_PTR(err);
 	}
 
-	group = __esw_qos_alloc_rate_group(esw, tsar_ix, SCHED_NODE_TYPE_VPORTS_TSAR);
+	group = __esw_qos_alloc_rate_group(esw, tsar_ix, SCHED_NODE_TYPE_VPORTS_TSAR, parent);
 	if (!group) {
 		NL_SET_ERR_MSG_MOD(extack, "E-Switch alloc group failed");
 		err = -ENOMEM;
@@ -582,7 +587,7 @@ esw_qos_create_vports_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ac
 	if (err)
 		return ERR_PTR(err);
 
-	group = __esw_qos_create_vports_rate_group(esw, extack);
+	group = __esw_qos_create_vports_rate_group(esw, NULL, extack);
 	if (IS_ERR(group))
 		esw_qos_put(esw);
 
@@ -627,13 +632,13 @@ static int esw_qos_create(struct mlx5_eswitch *esw, struct netlink_ext_ack *exta
 	}
 
 	if (MLX5_CAP_QOS(dev, log_esw_max_sched_depth)) {
-		esw->qos.group0 = __esw_qos_create_vports_rate_group(esw, extack);
+		esw->qos.group0 = __esw_qos_create_vports_rate_group(esw, NULL, extack);
 	} else {
 		/* The eswitch doesn't support scheduling groups.
 		 * Create a software-only group0 using the root TSAR to attach vport QoS to.
 		 */
 		if (!__esw_qos_alloc_rate_group(esw, esw->qos.root_tsar_ix,
-						SCHED_NODE_TYPE_VPORTS_TSAR))
+						SCHED_NODE_TYPE_VPORTS_TSAR, NULL))
 			esw->qos.group0 = ERR_PTR(-ENOMEM);
 	}
 	if (IS_ERR(esw->qos.group0)) {

From patchwork Sun Oct 13 06:45:29 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13833691
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Tariq Toukan
Subject: [PATCH net-next 04/15] net/mlx5: Restrict domain list insertion to root TSAR ancestors
Date: Sun, 13 Oct 2024 09:45:29 +0300
Message-ID: <20241013064540.170722-5-tariqt@nvidia.com>
In-Reply-To: <20241013064540.170722-1-tariqt@nvidia.com>
References: <20241013064540.170722-1-tariqt@nvidia.com>

From: Carolina Jubran

Update the logic for adding rate groups to the E-Switch domain list so
that only groups whose parent is the root Transmit Scheduling Arbiter
(TSAR) are included.

Signed-off-by: Carolina Jubran
Reviewed-by: Cosmin Ratiu
Signed-off-by: Tariq Toukan
---
 drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index e9ddd7f4ac80..65fd346d0e91 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -511,6 +511,7 @@ __esw_qos_alloc_rate_group(struct mlx5_eswitch *esw, u32 tsar_ix, enum sched_nod
 			   struct mlx5_esw_rate_group *parent)
 {
 	struct mlx5_esw_rate_group *group;
+	struct list_head *parent_list;
 
 	group = kzalloc(sizeof(*group), GFP_KERNEL);
 	if (!group)
@@ -521,7 +522,9 @@ __esw_qos_alloc_rate_group(struct mlx5_eswitch *esw, u32 tsar_ix, enum sched_nod
 	group->type = type;
 	group->parent = parent;
 	INIT_LIST_HEAD(&group->members);
-	list_add_tail(&group->parent_entry, &esw->qos.domain->groups);
+	parent_list = parent ? &parent->members : &esw->qos.domain->groups;
+	list_add_tail(&group->parent_entry, parent_list);
+
 	return group;
 }

From patchwork Sun Oct 13 06:45:30 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13833692
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Tariq Toukan
Subject: [PATCH net-next 05/15] net/mlx5: Rename vport QoS group reference to parent
Date: Sun, 13 Oct 2024 09:45:30 +0300
Message-ID: <20241013064540.170722-6-tariqt@nvidia.com>
In-Reply-To: <20241013064540.170722-1-tariqt@nvidia.com>
References: <20241013064540.170722-1-tariqt@nvidia.com>

From: Carolina Jubran

Rename the `group` field in the `mlx5_vport` structure to `parent` to
clarify the vport's role as a member of a parent group and distinguish
it from the general concept of a group. Additionally, rename
`group_entry` to `parent_entry` to match.

This distinction will be important for handling more complex group
structures and scheduling elements.
Signed-off-by: Carolina Jubran Signed-off-by: Tariq Toukan --- .../mlx5/core/esw/diag/qos_tracepoint.h | 8 ++-- .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 42 +++++++++---------- .../net/ethernet/mellanox/mlx5/core/eswitch.h | 6 ++- 3 files changed, 29 insertions(+), 27 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/qos_tracepoint.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/qos_tracepoint.h index 645bad0d625f..2aea01959073 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/qos_tracepoint.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/qos_tracepoint.h @@ -35,18 +35,18 @@ DECLARE_EVENT_CLASS(mlx5_esw_vport_qos_template, __field(unsigned int, sched_elem_ix) __field(unsigned int, bw_share) __field(unsigned int, max_rate) - __field(void *, group) + __field(void *, parent) ), TP_fast_assign(__assign_str(devname); __entry->vport_id = vport->vport; __entry->sched_elem_ix = vport->qos.esw_sched_elem_ix; __entry->bw_share = bw_share; __entry->max_rate = max_rate; - __entry->group = vport->qos.group; + __entry->parent = vport->qos.parent; ), - TP_printk("(%s) vport=%hu sched_elem_ix=%u bw_share=%u, max_rate=%u group=%p\n", + TP_printk("(%s) vport=%hu sched_elem_ix=%u bw_share=%u, max_rate=%u parent=%p\n", __get_str(devname), __entry->vport_id, __entry->sched_elem_ix, - __entry->bw_share, __entry->max_rate, __entry->group + __entry->bw_share, __entry->max_rate, __entry->parent ) ); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c index 65fd346d0e91..67b87f1598a5 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c @@ -84,11 +84,11 @@ struct mlx5_esw_rate_group { struct list_head members; }; -static void esw_qos_vport_set_group(struct mlx5_vport *vport, struct mlx5_esw_rate_group *group) +static void esw_qos_vport_set_parent(struct mlx5_vport *vport, struct mlx5_esw_rate_group *parent) { 
- list_del_init(&vport->qos.group_entry); - vport->qos.group = group; - list_add_tail(&vport->qos.group_entry, &group->members); + list_del_init(&vport->qos.parent_entry); + vport->qos.parent = parent; + list_add_tail(&vport->qos.parent_entry, &parent->members); } static int esw_qos_sched_elem_config(struct mlx5_core_dev *dev, u32 sched_elem_ix, @@ -131,7 +131,7 @@ static int esw_qos_vport_config(struct mlx5_vport *vport, u32 max_rate, u32 bw_share, struct netlink_ext_ack *extack) { - struct mlx5_core_dev *dev = vport->qos.group->esw->dev; + struct mlx5_core_dev *dev = vport->qos.parent->esw->dev; int err; err = esw_qos_sched_elem_config(dev, vport->qos.esw_sched_elem_ix, max_rate, bw_share); @@ -157,7 +157,7 @@ static u32 esw_qos_calculate_group_min_rate_divider(struct mlx5_esw_rate_group * /* Find max min_rate across all vports in this group. * This will correspond to fw_max_bw_share in the final bw_share calculation. */ - list_for_each_entry(vport, &group->members, qos.group_entry) { + list_for_each_entry(vport, &group->members, qos.parent_entry) { if (vport->qos.min_rate > max_guarantee) max_guarantee = vport->qos.min_rate; } @@ -217,7 +217,7 @@ static int esw_qos_normalize_group_min_rate(struct mlx5_esw_rate_group *group, u32 bw_share; int err; - list_for_each_entry(vport, &group->members, qos.group_entry) { + list_for_each_entry(vport, &group->members, qos.parent_entry) { bw_share = esw_qos_calc_bw_share(vport->qos.min_rate, divider, fw_max_bw_share); if (bw_share == vport->qos.bw_share) @@ -286,7 +286,7 @@ static int esw_qos_set_vport_min_rate(struct mlx5_vport *vport, previous_min_rate = vport->qos.min_rate; vport->qos.min_rate = min_rate; - err = esw_qos_normalize_group_min_rate(vport->qos.group, extack); + err = esw_qos_normalize_group_min_rate(vport->qos.parent, extack); if (err) vport->qos.min_rate = previous_min_rate; @@ -311,7 +311,7 @@ static int esw_qos_set_vport_max_rate(struct mlx5_vport *vport, /* Use parent group limit if new max rate is 0. 
*/ if (!max_rate) - act_max_rate = vport->qos.group->max_rate; + act_max_rate = vport->qos.parent->max_rate; err = esw_qos_vport_config(vport, act_max_rate, vport->qos.bw_share, extack); @@ -366,7 +366,7 @@ static int esw_qos_set_group_max_rate(struct mlx5_esw_rate_group *group, group->max_rate = max_rate; /* Any unlimited vports in the group should be set with the value of the group. */ - list_for_each_entry(vport, &group->members, qos.group_entry) { + list_for_each_entry(vport, &group->members, qos.parent_entry) { if (vport->qos.max_rate) continue; @@ -409,9 +409,9 @@ static int esw_qos_create_group_sched_elem(struct mlx5_core_dev *dev, u32 parent static int esw_qos_vport_create_sched_element(struct mlx5_vport *vport, u32 max_rate, u32 bw_share) { + struct mlx5_esw_rate_group *parent = vport->qos.parent; u32 sched_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {}; - struct mlx5_esw_rate_group *group = vport->qos.group; - struct mlx5_core_dev *dev = group->esw->dev; + struct mlx5_core_dev *dev = parent->esw->dev; void *attr; int err; @@ -424,7 +424,7 @@ static int esw_qos_vport_create_sched_element(struct mlx5_vport *vport, SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT); attr = MLX5_ADDR_OF(scheduling_context, sched_ctx, element_attributes); MLX5_SET(vport_element, attr, vport_number, vport->vport); - MLX5_SET(scheduling_context, sched_ctx, parent_element_id, group->tsar_ix); + MLX5_SET(scheduling_context, sched_ctx, parent_element_id, parent->tsar_ix); MLX5_SET(scheduling_context, sched_ctx, max_average_bw, max_rate); MLX5_SET(scheduling_context, sched_ctx, bw_share, bw_share); @@ -458,7 +458,7 @@ static int esw_qos_update_group_scheduling_element(struct mlx5_vport *vport, return err; } - esw_qos_vport_set_group(vport, new_group); + esw_qos_vport_set_parent(vport, new_group); /* Use new group max rate if vport max rate is unlimited. */ max_rate = vport->qos.max_rate ? 
vport->qos.max_rate : new_group->max_rate; err = esw_qos_vport_create_sched_element(vport, max_rate, vport->qos.bw_share); @@ -470,7 +470,7 @@ static int esw_qos_update_group_scheduling_element(struct mlx5_vport *vport, return 0; err_sched: - esw_qos_vport_set_group(vport, curr_group); + esw_qos_vport_set_parent(vport, curr_group); max_rate = vport->qos.max_rate ? vport->qos.max_rate : curr_group->max_rate; if (esw_qos_vport_create_sched_element(vport, max_rate, vport->qos.bw_share)) esw_warn(curr_group->esw->dev, "E-Switch vport group restore failed (vport=%d)\n", @@ -488,7 +488,7 @@ static int esw_qos_vport_update_group(struct mlx5_vport *vport, int err; esw_assert_qos_lock_held(esw); - curr_group = vport->qos.group; + curr_group = vport->qos.parent; new_group = group ?: esw->qos.group0; if (curr_group == new_group) return 0; @@ -714,8 +714,8 @@ static int esw_qos_vport_enable(struct mlx5_vport *vport, if (err) return err; - INIT_LIST_HEAD(&vport->qos.group_entry); - esw_qos_vport_set_group(vport, esw->qos.group0); + INIT_LIST_HEAD(&vport->qos.parent_entry); + esw_qos_vport_set_parent(vport, esw->qos.group0); err = esw_qos_vport_create_sched_element(vport, max_rate, bw_share); if (err) @@ -742,10 +742,10 @@ void mlx5_esw_qos_vport_disable(struct mlx5_vport *vport) esw_qos_lock(esw); if (!vport->qos.enabled) goto unlock; - WARN(vport->qos.group != esw->qos.group0, + WARN(vport->qos.parent != esw->qos.group0, "Disabling QoS on port before detaching it from group"); - dev = vport->qos.group->esw->dev; + dev = vport->qos.parent->esw->dev; err = mlx5_destroy_scheduling_element_cmd(dev, SCHEDULING_HIERARCHY_E_SWITCH, vport->qos.esw_sched_elem_ix); @@ -887,7 +887,7 @@ int mlx5_esw_qos_modify_vport_rate(struct mlx5_eswitch *esw, u16 vport_num, u32 /* Eswitch QoS wasn't enabled yet. Enable it and vport QoS. 
*/ err = esw_qos_vport_enable(vport, rate_mbps, vport->qos.bw_share, NULL); } else { - struct mlx5_core_dev *dev = vport->qos.group->esw->dev; + struct mlx5_core_dev *dev = vport->qos.parent->esw->dev; MLX5_SET(scheduling_context, ctx, max_average_bw, rate_mbps); bitmask = MODIFY_SCHEDULING_ELEMENT_IN_MODIFY_BITMASK_MAX_AVERAGE_BW; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h index 3b901bd36d4b..e789fb14989b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h @@ -221,8 +221,10 @@ struct mlx5_vport { u32 max_rate; /* A computed value indicating relative min_rate between vports in a group. */ u32 bw_share; - struct mlx5_esw_rate_group *group; - struct list_head group_entry; + /* The parent group of this vport scheduling element. */ + struct mlx5_esw_rate_group *parent; + /* Membership in the parent 'members' list. */ + struct list_head parent_entry; } qos; u16 vport;

From patchwork Sun Oct 13 06:45:31 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 13833694 X-Patchwork-Delegate: kuba@kernel.org From: Tariq Toukan To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , , , Tariq Toukan Subject: [PATCH net-next 06/15] net/mlx5: Introduce node struct and rename group terminology to node Date: Sun, 13 Oct 2024 09:45:31 +0300 Message-ID: <20241013064540.170722-7-tariqt@nvidia.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20241013064540.170722-1-tariqt@nvidia.com> References: <20241013064540.170722-1-tariqt@nvidia.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org MIME-Version: 1.0

From: Carolina Jubran Introduce the `mlx5_esw_sched_node` struct, consolidating all rate hierarchy related details, including membership and scheduling parameters. Since the group concept aligns with the `mlx5_esw_sched_node`, replace the `mlx5_esw_rate_group` struct with it and rename the "group" terminology to "node" throughout the rate hierarchy. All relevant code paths and structures have been updated to use the "node" terminology accordingly, laying the groundwork for future patches that will unify the handling of different types of members within the rate hierarchy. Signed-off-by: Carolina Jubran Signed-off-by: Tariq Toukan --- .../mellanox/mlx5/core/esw/devlink_port.c | 2 +- .../mlx5/core/esw/diag/qos_tracepoint.h | 40 +- .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 377 +++++++++--------- .../net/ethernet/mellanox/mlx5/core/eswitch.h | 16 +- 4 files changed, 218 insertions(+), 217 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c index 86af1891395f..d0f38818363f 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c @@ -195,7 +195,7 @@ void mlx5_esw_offloads_devlink_port_unregister(struct mlx5_vport *vport) return; dl_port = vport->dl_port; - mlx5_esw_qos_vport_update_group(vport, NULL, NULL); + mlx5_esw_qos_vport_update_node(vport, NULL, NULL); devl_rate_leaf_destroy(&dl_port->dl_port); devl_port_unregister(&dl_port->dl_port); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/qos_tracepoint.h
b/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/qos_tracepoint.h index 2aea01959073..0b50ef0871f2 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/qos_tracepoint.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/qos_tracepoint.h @@ -62,57 +62,57 @@ DEFINE_EVENT(mlx5_esw_vport_qos_template, mlx5_esw_vport_qos_config, TP_ARGS(dev, vport, bw_share, max_rate) ); -DECLARE_EVENT_CLASS(mlx5_esw_group_qos_template, +DECLARE_EVENT_CLASS(mlx5_esw_node_qos_template, TP_PROTO(const struct mlx5_core_dev *dev, - const struct mlx5_esw_rate_group *group, + const struct mlx5_esw_sched_node *node, unsigned int tsar_ix), - TP_ARGS(dev, group, tsar_ix), + TP_ARGS(dev, node, tsar_ix), TP_STRUCT__entry(__string(devname, dev_name(dev->device)) - __field(const void *, group) + __field(const void *, node) __field(unsigned int, tsar_ix) ), TP_fast_assign(__assign_str(devname); - __entry->group = group; + __entry->node = node; __entry->tsar_ix = tsar_ix; ), - TP_printk("(%s) group=%p tsar_ix=%u\n", - __get_str(devname), __entry->group, __entry->tsar_ix + TP_printk("(%s) node=%p tsar_ix=%u\n", + __get_str(devname), __entry->node, __entry->tsar_ix ) ); -DEFINE_EVENT(mlx5_esw_group_qos_template, mlx5_esw_group_qos_create, +DEFINE_EVENT(mlx5_esw_node_qos_template, mlx5_esw_node_qos_create, TP_PROTO(const struct mlx5_core_dev *dev, - const struct mlx5_esw_rate_group *group, + const struct mlx5_esw_sched_node *node, unsigned int tsar_ix), - TP_ARGS(dev, group, tsar_ix) + TP_ARGS(dev, node, tsar_ix) ); -DEFINE_EVENT(mlx5_esw_group_qos_template, mlx5_esw_group_qos_destroy, +DEFINE_EVENT(mlx5_esw_node_qos_template, mlx5_esw_node_qos_destroy, TP_PROTO(const struct mlx5_core_dev *dev, - const struct mlx5_esw_rate_group *group, + const struct mlx5_esw_sched_node *node, unsigned int tsar_ix), - TP_ARGS(dev, group, tsar_ix) + TP_ARGS(dev, node, tsar_ix) ); -TRACE_EVENT(mlx5_esw_group_qos_config, +TRACE_EVENT(mlx5_esw_node_qos_config, TP_PROTO(const struct mlx5_core_dev *dev, - const 
struct mlx5_esw_rate_group *group, + const struct mlx5_esw_sched_node *node, unsigned int tsar_ix, u32 bw_share, u32 max_rate), - TP_ARGS(dev, group, tsar_ix, bw_share, max_rate), + TP_ARGS(dev, node, tsar_ix, bw_share, max_rate), TP_STRUCT__entry(__string(devname, dev_name(dev->device)) - __field(const void *, group) + __field(const void *, node) __field(unsigned int, tsar_ix) __field(unsigned int, bw_share) __field(unsigned int, max_rate) ), TP_fast_assign(__assign_str(devname); - __entry->group = group; + __entry->node = node; __entry->tsar_ix = tsar_ix; __entry->bw_share = bw_share; __entry->max_rate = max_rate; ), - TP_printk("(%s) group=%p tsar_ix=%u bw_share=%u max_rate=%u\n", - __get_str(devname), __entry->group, __entry->tsar_ix, + TP_printk("(%s) node=%p tsar_ix=%u bw_share=%u max_rate=%u\n", + __get_str(devname), __entry->node, __entry->tsar_ix, __entry->bw_share, __entry->max_rate ) ); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c index 67b87f1598a5..d3289c1cb87a 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c @@ -11,12 +11,12 @@ /* Minimum supported BW share value by the HW is 1 Mbit/sec */ #define MLX5_MIN_BW_SHARE 1 -/* Holds rate groups associated with an E-Switch. */ +/* Holds rate nodes associated with an E-Switch. */ struct mlx5_qos_domain { /* Serializes access to all qos changes in the qos domain. */ struct mutex lock; - /* List of all mlx5_esw_rate_groups. */ - struct list_head groups; + /* List of all mlx5_esw_sched_nodes.
*/ + struct list_head nodes; }; static void esw_qos_lock(struct mlx5_eswitch *esw) @@ -43,7 +43,7 @@ static struct mlx5_qos_domain *esw_qos_domain_alloc(void) return NULL; mutex_init(&qos_domain->lock); - INIT_LIST_HEAD(&qos_domain->groups); + INIT_LIST_HEAD(&qos_domain->nodes); return qos_domain; } @@ -65,30 +65,30 @@ enum sched_node_type { SCHED_NODE_TYPE_VPORTS_TSAR, }; -struct mlx5_esw_rate_group { - u32 tsar_ix; +struct mlx5_esw_sched_node { + u32 ix; /* Bandwidth parameters. */ u32 max_rate; u32 min_rate; - /* A computed value indicating relative min_rate between group members. */ + /* A computed value indicating relative min_rate between node's children. */ u32 bw_share; - /* The parent group of this group. */ - struct mlx5_esw_rate_group *parent; - /* Membership in the parent list. */ - struct list_head parent_entry; - /* The type of this group node in the rate hierarchy. */ + /* The parent node in the rate hierarchy. */ + struct mlx5_esw_sched_node *parent; + /* Entry in the parent node's children list. */ + struct list_head entry; + /* The type of this node in the rate hierarchy. */ enum sched_node_type type; - /* The eswitch this group belongs to. */ + /* The eswitch this node belongs to. */ struct mlx5_eswitch *esw; - /* Members of this group.*/ - struct list_head members; + /* The children nodes of this node, empty list for leaf nodes. 
*/ + struct list_head children; }; -static void esw_qos_vport_set_parent(struct mlx5_vport *vport, struct mlx5_esw_rate_group *parent) +static void esw_qos_vport_set_parent(struct mlx5_vport *vport, struct mlx5_esw_sched_node *parent) { list_del_init(&vport->qos.parent_entry); vport->qos.parent = parent; - list_add_tail(&vport->qos.parent_entry, &parent->members); + list_add_tail(&vport->qos.parent_entry, &parent->children); } static int esw_qos_sched_elem_config(struct mlx5_core_dev *dev, u32 sched_elem_ix, @@ -112,17 +112,17 @@ static int esw_qos_sched_elem_config(struct mlx5_core_dev *dev, u32 sched_elem_i bitmask); } -static int esw_qos_group_config(struct mlx5_esw_rate_group *group, - u32 max_rate, u32 bw_share, struct netlink_ext_ack *extack) +static int esw_qos_node_config(struct mlx5_esw_sched_node *node, + u32 max_rate, u32 bw_share, struct netlink_ext_ack *extack) { - struct mlx5_core_dev *dev = group->esw->dev; + struct mlx5_core_dev *dev = node->esw->dev; int err; - err = esw_qos_sched_elem_config(dev, group->tsar_ix, max_rate, bw_share); + err = esw_qos_sched_elem_config(dev, node->ix, max_rate, bw_share); if (err) - NL_SET_ERR_MSG_MOD(extack, "E-Switch modify group TSAR element failed"); + NL_SET_ERR_MSG_MOD(extack, "E-Switch modify node TSAR element failed"); - trace_mlx5_esw_group_qos_config(dev, group, group->tsar_ix, bw_share, max_rate); + trace_mlx5_esw_node_qos_config(dev, node, node->ix, bw_share, max_rate); return err; } @@ -148,16 +148,16 @@ static int esw_qos_vport_config(struct mlx5_vport *vport, return 0; } -static u32 esw_qos_calculate_group_min_rate_divider(struct mlx5_esw_rate_group *group) +static u32 esw_qos_calculate_node_min_rate_divider(struct mlx5_esw_sched_node *node) { - u32 fw_max_bw_share = MLX5_CAP_QOS(group->esw->dev, max_tsar_bw_share); + u32 fw_max_bw_share = MLX5_CAP_QOS(node->esw->dev, max_tsar_bw_share); struct mlx5_vport *vport; u32 max_guarantee = 0; - /* Find max min_rate across all vports in this group. 
+ /* Find max min_rate across all vports in this node. * This will correspond to fw_max_bw_share in the final bw_share calculation. */ - list_for_each_entry(vport, &group->members, qos.parent_entry) { + list_for_each_entry(vport, &node->children, qos.parent_entry) { if (vport->qos.min_rate > max_guarantee) max_guarantee = vport->qos.min_rate; } @@ -165,13 +165,13 @@ static u32 esw_qos_calculate_group_min_rate_divider(struct mlx5_esw_rate_group * if (max_guarantee) return max_t(u32, max_guarantee / fw_max_bw_share, 1); - /* If vports max min_rate divider is 0 but their group has bw_share + /* If vports max min_rate divider is 0 but their node has bw_share * configured, then set bw_share for vports to minimal value. */ - if (group->bw_share) + if (node->bw_share) return 1; - /* A divider of 0 sets bw_share for all group vports to 0, + /* A divider of 0 sets bw_share for all node vports to 0, * effectively disabling min guarantees. */ return 0; @@ -180,23 +180,23 @@ static u32 esw_qos_calculate_group_min_rate_divider(struct mlx5_esw_rate_group * static u32 esw_qos_calculate_min_rate_divider(struct mlx5_eswitch *esw) { u32 fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share); - struct mlx5_esw_rate_group *group; + struct mlx5_esw_sched_node *node; u32 max_guarantee = 0; - /* Find max min_rate across all esw groups. + /* Find max min_rate across all esw nodes. * This will correspond to fw_max_bw_share in the final bw_share calculation. 
*/ - list_for_each_entry(group, &esw->qos.domain->groups, parent_entry) { - if (group->esw == esw && group->tsar_ix != esw->qos.root_tsar_ix && - group->min_rate > max_guarantee) - max_guarantee = group->min_rate; + list_for_each_entry(node, &esw->qos.domain->nodes, entry) { + if (node->esw == esw && node->ix != esw->qos.root_tsar_ix && + node->min_rate > max_guarantee) + max_guarantee = node->min_rate; } if (max_guarantee) return max_t(u32, max_guarantee / fw_max_bw_share, 1); - /* If no group has min_rate configured, a divider of 0 sets all - * groups' bw_share to 0, effectively disabling min guarantees. + /* If no node has min_rate configured, a divider of 0 sets all + * nodes' bw_share to 0, effectively disabling min guarantees. */ return 0; } @@ -208,16 +208,16 @@ static u32 esw_qos_calc_bw_share(u32 min_rate, u32 divider, u32 fw_max) return min_t(u32, max_t(u32, DIV_ROUND_UP(min_rate, divider), MLX5_MIN_BW_SHARE), fw_max); } -static int esw_qos_normalize_group_min_rate(struct mlx5_esw_rate_group *group, - struct netlink_ext_ack *extack) +static int esw_qos_normalize_node_min_rate(struct mlx5_esw_sched_node *node, + struct netlink_ext_ack *extack) { - u32 fw_max_bw_share = MLX5_CAP_QOS(group->esw->dev, max_tsar_bw_share); - u32 divider = esw_qos_calculate_group_min_rate_divider(group); + u32 fw_max_bw_share = MLX5_CAP_QOS(node->esw->dev, max_tsar_bw_share); + u32 divider = esw_qos_calculate_node_min_rate_divider(node); struct mlx5_vport *vport; u32 bw_share; int err; - list_for_each_entry(vport, &group->members, qos.parent_entry) { + list_for_each_entry(vport, &node->children, qos.parent_entry) { bw_share = esw_qos_calc_bw_share(vport->qos.min_rate, divider, fw_max_bw_share); if (bw_share == vport->qos.bw_share) @@ -237,28 +237,29 @@ static int esw_qos_normalize_min_rate(struct mlx5_eswitch *esw, struct netlink_e { u32 fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share); u32 divider = esw_qos_calculate_min_rate_divider(esw); - struct 
mlx5_esw_rate_group *group; + struct mlx5_esw_sched_node *node; u32 bw_share; int err; - list_for_each_entry(group, &esw->qos.domain->groups, parent_entry) { - if (group->esw != esw || group->tsar_ix == esw->qos.root_tsar_ix) + list_for_each_entry(node, &esw->qos.domain->nodes, entry) { + if (node->esw != esw || node->ix == esw->qos.root_tsar_ix) continue; - bw_share = esw_qos_calc_bw_share(group->min_rate, divider, fw_max_bw_share); + bw_share = esw_qos_calc_bw_share(node->min_rate, divider, + fw_max_bw_share); - if (bw_share == group->bw_share) + if (bw_share == node->bw_share) continue; - err = esw_qos_group_config(group, group->max_rate, bw_share, extack); + err = esw_qos_node_config(node, node->max_rate, bw_share, extack); if (err) return err; - group->bw_share = bw_share; + node->bw_share = bw_share; - /* All the group's vports need to be set with default bw_share + /* All the node's vports need to be set with default bw_share * to enable them with QOS */ - err = esw_qos_normalize_group_min_rate(group, extack); + err = esw_qos_normalize_node_min_rate(node, extack); if (err) return err; @@ -286,7 +287,7 @@ static int esw_qos_set_vport_min_rate(struct mlx5_vport *vport, previous_min_rate = vport->qos.min_rate; vport->qos.min_rate = min_rate; - err = esw_qos_normalize_group_min_rate(vport->qos.parent, extack); + err = esw_qos_normalize_node_min_rate(vport->qos.parent, extack); if (err) vport->qos.min_rate = previous_min_rate; @@ -309,7 +310,7 @@ static int esw_qos_set_vport_max_rate(struct mlx5_vport *vport, if (max_rate == vport->qos.max_rate) return 0; - /* Use parent group limit if new max rate is 0. */ + /* Use parent node limit if new max rate is 0. 
*/ if (!max_rate) act_max_rate = vport->qos.parent->max_rate; @@ -321,10 +322,10 @@ static int esw_qos_set_vport_max_rate(struct mlx5_vport *vport, return err; } -static int esw_qos_set_group_min_rate(struct mlx5_esw_rate_group *group, - u32 min_rate, struct netlink_ext_ack *extack) +static int esw_qos_set_node_min_rate(struct mlx5_esw_sched_node *node, + u32 min_rate, struct netlink_ext_ack *extack) { - struct mlx5_eswitch *esw = group->esw; + struct mlx5_eswitch *esw = node->esw; u32 previous_min_rate; int err; @@ -332,17 +333,17 @@ static int esw_qos_set_group_min_rate(struct mlx5_esw_rate_group *group, MLX5_CAP_QOS(esw->dev, max_tsar_bw_share) < MLX5_MIN_BW_SHARE) return -EOPNOTSUPP; - if (min_rate == group->min_rate) + if (min_rate == node->min_rate) return 0; - previous_min_rate = group->min_rate; - group->min_rate = min_rate; + previous_min_rate = node->min_rate; + node->min_rate = min_rate; err = esw_qos_normalize_min_rate(esw, extack); if (err) { - NL_SET_ERR_MSG_MOD(extack, "E-Switch group min rate setting failed"); + NL_SET_ERR_MSG_MOD(extack, "E-Switch node min rate setting failed"); /* Attempt restoring previous configuration */ - group->min_rate = previous_min_rate; + node->min_rate = previous_min_rate; if (esw_qos_normalize_min_rate(esw, extack)) NL_SET_ERR_MSG_MOD(extack, "E-Switch BW share restore failed"); } @@ -350,23 +351,23 @@ static int esw_qos_set_group_min_rate(struct mlx5_esw_rate_group *group, return err; } -static int esw_qos_set_group_max_rate(struct mlx5_esw_rate_group *group, - u32 max_rate, struct netlink_ext_ack *extack) +static int esw_qos_set_node_max_rate(struct mlx5_esw_sched_node *node, + u32 max_rate, struct netlink_ext_ack *extack) { struct mlx5_vport *vport; int err; - if (group->max_rate == max_rate) + if (node->max_rate == max_rate) return 0; - err = esw_qos_group_config(group, max_rate, group->bw_share, extack); + err = esw_qos_node_config(node, max_rate, node->bw_share, extack); if (err) return err; - group->max_rate = 
max_rate; + node->max_rate = max_rate; - /* Any unlimited vports in the group should be set with the value of the group. */ - list_for_each_entry(vport, &group->members, qos.parent_entry) { + /* Any unlimited vports in the node should be set with the value of the node. */ + list_for_each_entry(vport, &node->children, qos.parent_entry) { if (vport->qos.max_rate) continue; @@ -379,8 +380,8 @@ static int esw_qos_set_group_max_rate(struct mlx5_esw_rate_group *group, return err; } -static int esw_qos_create_group_sched_elem(struct mlx5_core_dev *dev, u32 parent_element_id, - u32 *tsar_ix) +static int esw_qos_create_node_sched_elem(struct mlx5_core_dev *dev, u32 parent_element_id, + u32 *tsar_ix) { u32 tsar_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {}; void *attr; @@ -409,7 +410,7 @@ static int esw_qos_create_group_sched_elem(struct mlx5_core_dev *dev, u32 parent static int esw_qos_vport_create_sched_element(struct mlx5_vport *vport, u32 max_rate, u32 bw_share) { - struct mlx5_esw_rate_group *parent = vport->qos.parent; + struct mlx5_esw_sched_node *parent = vport->qos.parent; u32 sched_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {}; struct mlx5_core_dev *dev = parent->esw->dev; void *attr; @@ -424,7 +425,7 @@ static int esw_qos_vport_create_sched_element(struct mlx5_vport *vport, SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT); attr = MLX5_ADDR_OF(scheduling_context, sched_ctx, element_attributes); MLX5_SET(vport_element, attr, vport_number, vport->vport); - MLX5_SET(scheduling_context, sched_ctx, parent_element_id, parent->tsar_ix); + MLX5_SET(scheduling_context, sched_ctx, parent_element_id, parent->ix); MLX5_SET(scheduling_context, sched_ctx, max_average_bw, max_rate); MLX5_SET(scheduling_context, sched_ctx, bw_share, bw_share); @@ -442,15 +443,15 @@ static int esw_qos_vport_create_sched_element(struct mlx5_vport *vport, return 0; } -static int esw_qos_update_group_scheduling_element(struct mlx5_vport *vport, - struct mlx5_esw_rate_group *curr_group, - struct mlx5_esw_rate_group 
*new_group, - struct netlink_ext_ack *extack) +static int esw_qos_update_node_scheduling_element(struct mlx5_vport *vport, + struct mlx5_esw_sched_node *curr_node, + struct mlx5_esw_sched_node *new_node, + struct netlink_ext_ack *extack) { u32 max_rate; int err; - err = mlx5_destroy_scheduling_element_cmd(curr_group->esw->dev, + err = mlx5_destroy_scheduling_element_cmd(curr_node->esw->dev, SCHEDULING_HIERARCHY_E_SWITCH, vport->qos.esw_sched_elem_ix); if (err) { @@ -458,128 +459,128 @@ static int esw_qos_update_group_scheduling_element(struct mlx5_vport *vport, return err; } - esw_qos_vport_set_parent(vport, new_group); - /* Use new group max rate if vport max rate is unlimited. */ - max_rate = vport->qos.max_rate ? vport->qos.max_rate : new_group->max_rate; + esw_qos_vport_set_parent(vport, new_node); + /* Use new node max rate if vport max rate is unlimited. */ + max_rate = vport->qos.max_rate ? vport->qos.max_rate : new_node->max_rate; err = esw_qos_vport_create_sched_element(vport, max_rate, vport->qos.bw_share); if (err) { - NL_SET_ERR_MSG_MOD(extack, "E-Switch vport group set failed."); + NL_SET_ERR_MSG_MOD(extack, "E-Switch vport node set failed."); goto err_sched; } return 0; err_sched: - esw_qos_vport_set_parent(vport, curr_group); - max_rate = vport->qos.max_rate ? vport->qos.max_rate : curr_group->max_rate; + esw_qos_vport_set_parent(vport, curr_node); + max_rate = vport->qos.max_rate ? 
vport->qos.max_rate : curr_node->max_rate; if (esw_qos_vport_create_sched_element(vport, max_rate, vport->qos.bw_share)) - esw_warn(curr_group->esw->dev, "E-Switch vport group restore failed (vport=%d)\n", + esw_warn(curr_node->esw->dev, "E-Switch vport node restore failed (vport=%d)\n", vport->vport); return err; } -static int esw_qos_vport_update_group(struct mlx5_vport *vport, - struct mlx5_esw_rate_group *group, - struct netlink_ext_ack *extack) +static int esw_qos_vport_update_node(struct mlx5_vport *vport, + struct mlx5_esw_sched_node *node, + struct netlink_ext_ack *extack) { struct mlx5_eswitch *esw = vport->dev->priv.eswitch; - struct mlx5_esw_rate_group *new_group, *curr_group; + struct mlx5_esw_sched_node *new_node, *curr_node; int err; esw_assert_qos_lock_held(esw); - curr_group = vport->qos.parent; - new_group = group ?: esw->qos.group0; - if (curr_group == new_group) + curr_node = vport->qos.parent; + new_node = node ?: esw->qos.node0; + if (curr_node == new_node) return 0; - err = esw_qos_update_group_scheduling_element(vport, curr_group, new_group, extack); + err = esw_qos_update_node_scheduling_element(vport, curr_node, new_node, extack); if (err) return err; - /* Recalculate bw share weights of old and new groups */ - if (vport->qos.bw_share || new_group->bw_share) { - esw_qos_normalize_group_min_rate(curr_group, extack); - esw_qos_normalize_group_min_rate(new_group, extack); + /* Recalculate bw share weights of old and new nodes */ + if (vport->qos.bw_share || new_node->bw_share) { + esw_qos_normalize_node_min_rate(curr_node, extack); + esw_qos_normalize_node_min_rate(new_node, extack); } return 0; } -static struct mlx5_esw_rate_group * -__esw_qos_alloc_rate_group(struct mlx5_eswitch *esw, u32 tsar_ix, enum sched_node_type type, - struct mlx5_esw_rate_group *parent) +static struct mlx5_esw_sched_node * +__esw_qos_alloc_rate_node(struct mlx5_eswitch *esw, u32 tsar_ix, enum sched_node_type type, + struct mlx5_esw_sched_node *parent) { - struct 
mlx5_esw_rate_group *group; - struct list_head *parent_list; + struct list_head *parent_children; + struct mlx5_esw_sched_node *node; - group = kzalloc(sizeof(*group), GFP_KERNEL); - if (!group) + node = kzalloc(sizeof(*node), GFP_KERNEL); + if (!node) return NULL; - group->esw = esw; - group->tsar_ix = tsar_ix; - group->type = type; - group->parent = parent; - INIT_LIST_HEAD(&group->members); - parent_list = parent ? &parent->members : &esw->qos.domain->groups; - list_add_tail(&group->parent_entry, parent_list); + node->esw = esw; + node->ix = tsar_ix; + node->type = type; + node->parent = parent; + INIT_LIST_HEAD(&node->children); + parent_children = parent ? &parent->children : &esw->qos.domain->nodes; + list_add_tail(&node->entry, parent_children); - return group; + return node; } -static void __esw_qos_free_rate_group(struct mlx5_esw_rate_group *group) +static void __esw_qos_free_node(struct mlx5_esw_sched_node *node) { - list_del(&group->parent_entry); - kfree(group); + list_del(&node->entry); + kfree(node); } -static struct mlx5_esw_rate_group * -__esw_qos_create_vports_rate_group(struct mlx5_eswitch *esw, struct mlx5_esw_rate_group *parent, - struct netlink_ext_ack *extack) +static struct mlx5_esw_sched_node * +__esw_qos_create_vports_rate_node(struct mlx5_eswitch *esw, struct mlx5_esw_sched_node *parent, + struct netlink_ext_ack *extack) { - struct mlx5_esw_rate_group *group; + struct mlx5_esw_sched_node *node; u32 tsar_ix, err; - err = esw_qos_create_group_sched_elem(esw->dev, esw->qos.root_tsar_ix, &tsar_ix); + err = esw_qos_create_node_sched_elem(esw->dev, esw->qos.root_tsar_ix, &tsar_ix); if (err) { - NL_SET_ERR_MSG_MOD(extack, "E-Switch create TSAR for group failed"); + NL_SET_ERR_MSG_MOD(extack, "E-Switch create TSAR for node failed"); return ERR_PTR(err); } - group = __esw_qos_alloc_rate_group(esw, tsar_ix, SCHED_NODE_TYPE_VPORTS_TSAR, parent); - if (!group) { - NL_SET_ERR_MSG_MOD(extack, "E-Switch alloc group failed"); + node = 
__esw_qos_alloc_rate_node(esw, tsar_ix, SCHED_NODE_TYPE_VPORTS_TSAR, parent); + if (!node) { + NL_SET_ERR_MSG_MOD(extack, "E-Switch alloc node failed"); err = -ENOMEM; - goto err_alloc_group; + goto err_alloc_node; } err = esw_qos_normalize_min_rate(esw, extack); if (err) { - NL_SET_ERR_MSG_MOD(extack, "E-Switch groups normalization failed"); + NL_SET_ERR_MSG_MOD(extack, "E-Switch nodes normalization failed"); goto err_min_rate; } - trace_mlx5_esw_group_qos_create(esw->dev, group, group->tsar_ix); + trace_mlx5_esw_node_qos_create(esw->dev, node, node->ix); - return group; + return node; err_min_rate: - __esw_qos_free_rate_group(group); -err_alloc_group: + __esw_qos_free_node(node); +err_alloc_node: if (mlx5_destroy_scheduling_element_cmd(esw->dev, SCHEDULING_HIERARCHY_E_SWITCH, - tsar_ix)) - NL_SET_ERR_MSG_MOD(extack, "E-Switch destroy TSAR for group failed"); + node->ix)) + NL_SET_ERR_MSG_MOD(extack, "E-Switch destroy TSAR for node failed"); return ERR_PTR(err); } static int esw_qos_get(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack); static void esw_qos_put(struct mlx5_eswitch *esw); -static struct mlx5_esw_rate_group * -esw_qos_create_vports_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack) +static struct mlx5_esw_sched_node * +esw_qos_create_vports_rate_node(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack) { - struct mlx5_esw_rate_group *group; + struct mlx5_esw_sched_node *node; int err; esw_assert_qos_lock_held(esw); @@ -590,31 +591,31 @@ esw_qos_create_vports_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ac if (err) return ERR_PTR(err); - group = __esw_qos_create_vports_rate_group(esw, NULL, extack); - if (IS_ERR(group)) + node = __esw_qos_create_vports_rate_node(esw, NULL, extack); + if (IS_ERR(node)) esw_qos_put(esw); - return group; + return node; } -static int __esw_qos_destroy_rate_group(struct mlx5_esw_rate_group *group, - struct netlink_ext_ack *extack) +static int __esw_qos_destroy_rate_node(struct 
mlx5_esw_sched_node *node, + struct netlink_ext_ack *extack) { - struct mlx5_eswitch *esw = group->esw; + struct mlx5_eswitch *esw = node->esw; int err; - trace_mlx5_esw_group_qos_destroy(esw->dev, group, group->tsar_ix); + trace_mlx5_esw_node_qos_destroy(esw->dev, node, node->ix); err = mlx5_destroy_scheduling_element_cmd(esw->dev, SCHEDULING_HIERARCHY_E_SWITCH, - group->tsar_ix); + node->ix); if (err) NL_SET_ERR_MSG_MOD(extack, "E-Switch destroy TSAR_ID failed"); - __esw_qos_free_rate_group(group); + __esw_qos_free_node(node); err = esw_qos_normalize_min_rate(esw, extack); if (err) - NL_SET_ERR_MSG_MOD(extack, "E-Switch groups normalization failed"); + NL_SET_ERR_MSG_MOD(extack, "E-Switch nodes normalization failed"); return err; @@ -628,32 +629,32 @@ static int esw_qos_create(struct mlx5_eswitch *esw, struct netlink_ext_ack *exta if (!MLX5_CAP_GEN(dev, qos) || !MLX5_CAP_QOS(dev, esw_scheduling)) return -EOPNOTSUPP; - err = esw_qos_create_group_sched_elem(esw->dev, 0, &esw->qos.root_tsar_ix); + err = esw_qos_create_node_sched_elem(esw->dev, 0, &esw->qos.root_tsar_ix); if (err) { esw_warn(dev, "E-Switch create root TSAR failed (%d)\n", err); return err; } if (MLX5_CAP_QOS(dev, log_esw_max_sched_depth)) { - esw->qos.group0 = __esw_qos_create_vports_rate_group(esw, NULL, extack); + esw->qos.node0 = __esw_qos_create_vports_rate_node(esw, NULL, extack); } else { - /* The eswitch doesn't support scheduling groups. - * Create a software-only group0 using the root TSAR to attach vport QoS to. + /* The eswitch doesn't support scheduling nodes. + * Create a software-only node0 using the root TSAR to attach vport QoS to. 
*/ - if (!__esw_qos_alloc_rate_group(esw, esw->qos.root_tsar_ix, - SCHED_NODE_TYPE_VPORTS_TSAR, NULL)) - esw->qos.group0 = ERR_PTR(-ENOMEM); + if (!__esw_qos_alloc_rate_node(esw, esw->qos.root_tsar_ix, + SCHED_NODE_TYPE_VPORTS_TSAR, NULL)) + esw->qos.node0 = ERR_PTR(-ENOMEM); } - if (IS_ERR(esw->qos.group0)) { - err = PTR_ERR(esw->qos.group0); - esw_warn(dev, "E-Switch create rate group 0 failed (%d)\n", err); - goto err_group0; + if (IS_ERR(esw->qos.node0)) { + err = PTR_ERR(esw->qos.node0); + esw_warn(dev, "E-Switch create rate node 0 failed (%d)\n", err); + goto err_node0; } refcount_set(&esw->qos.refcnt, 1); return 0; -err_group0: +err_node0: if (mlx5_destroy_scheduling_element_cmd(esw->dev, SCHEDULING_HIERARCHY_E_SWITCH, esw->qos.root_tsar_ix)) esw_warn(esw->dev, "E-Switch destroy root TSAR failed.\n"); @@ -665,11 +666,11 @@ static void esw_qos_destroy(struct mlx5_eswitch *esw) { int err; - if (esw->qos.group0->tsar_ix != esw->qos.root_tsar_ix) - __esw_qos_destroy_rate_group(esw->qos.group0, NULL); + if (esw->qos.node0->ix != esw->qos.root_tsar_ix) + __esw_qos_destroy_rate_node(esw->qos.node0, NULL); else - __esw_qos_free_rate_group(esw->qos.group0); - esw->qos.group0 = NULL; + __esw_qos_free_node(esw->qos.node0); + esw->qos.node0 = NULL; err = mlx5_destroy_scheduling_element_cmd(esw->dev, SCHEDULING_HIERARCHY_E_SWITCH, @@ -715,7 +716,7 @@ static int esw_qos_vport_enable(struct mlx5_vport *vport, return err; INIT_LIST_HEAD(&vport->qos.parent_entry); - esw_qos_vport_set_parent(vport, esw->qos.group0); + esw_qos_vport_set_parent(vport, esw->qos.node0); err = esw_qos_vport_create_sched_element(vport, max_rate, bw_share); if (err) @@ -742,8 +743,8 @@ void mlx5_esw_qos_vport_disable(struct mlx5_vport *vport) esw_qos_lock(esw); if (!vport->qos.enabled) goto unlock; - WARN(vport->qos.parent != esw->qos.group0, - "Disabling QoS on port before detaching it from group"); + WARN(vport->qos.parent != esw->qos.node0, + "Disabling QoS on port before detaching it from 
node"); dev = vport->qos.parent->esw->dev; err = mlx5_destroy_scheduling_element_cmd(dev, @@ -1003,8 +1004,8 @@ int mlx5_esw_devlink_rate_leaf_tx_max_set(struct devlink_rate *rate_leaf, void * int mlx5_esw_devlink_rate_node_tx_share_set(struct devlink_rate *rate_node, void *priv, u64 tx_share, struct netlink_ext_ack *extack) { - struct mlx5_esw_rate_group *group = priv; - struct mlx5_eswitch *esw = group->esw; + struct mlx5_esw_sched_node *node = priv; + struct mlx5_eswitch *esw = node->esw; int err; err = esw_qos_devlink_rate_to_mbps(esw->dev, "tx_share", &tx_share, extack); @@ -1012,7 +1013,7 @@ int mlx5_esw_devlink_rate_node_tx_share_set(struct devlink_rate *rate_node, void return err; esw_qos_lock(esw); - err = esw_qos_set_group_min_rate(group, tx_share, extack); + err = esw_qos_set_node_min_rate(node, tx_share, extack); esw_qos_unlock(esw); return err; } @@ -1020,8 +1021,8 @@ int mlx5_esw_devlink_rate_node_tx_share_set(struct devlink_rate *rate_node, void int mlx5_esw_devlink_rate_node_tx_max_set(struct devlink_rate *rate_node, void *priv, u64 tx_max, struct netlink_ext_ack *extack) { - struct mlx5_esw_rate_group *group = priv; - struct mlx5_eswitch *esw = group->esw; + struct mlx5_esw_sched_node *node = priv; + struct mlx5_eswitch *esw = node->esw; int err; err = esw_qos_devlink_rate_to_mbps(esw->dev, "tx_max", &tx_max, extack); @@ -1029,7 +1030,7 @@ int mlx5_esw_devlink_rate_node_tx_max_set(struct devlink_rate *rate_node, void * return err; esw_qos_lock(esw); - err = esw_qos_set_group_max_rate(group, tx_max, extack); + err = esw_qos_set_node_max_rate(node, tx_max, extack); esw_qos_unlock(esw); return err; } @@ -1037,7 +1038,7 @@ int mlx5_esw_devlink_rate_node_tx_max_set(struct devlink_rate *rate_node, void * int mlx5_esw_devlink_rate_node_new(struct devlink_rate *rate_node, void **priv, struct netlink_ext_ack *extack) { - struct mlx5_esw_rate_group *group; + struct mlx5_esw_sched_node *node; struct mlx5_eswitch *esw; int err = 0; @@ -1053,13 +1054,13 @@ int 
mlx5_esw_devlink_rate_node_new(struct devlink_rate *rate_node, void **priv, goto unlock; } - group = esw_qos_create_vports_rate_group(esw, extack); - if (IS_ERR(group)) { - err = PTR_ERR(group); + node = esw_qos_create_vports_rate_node(esw, extack); + if (IS_ERR(node)) { + err = PTR_ERR(node); goto unlock; } - *priv = group; + *priv = node; unlock: esw_qos_unlock(esw); return err; @@ -1068,36 +1069,36 @@ int mlx5_esw_devlink_rate_node_new(struct devlink_rate *rate_node, void **priv, int mlx5_esw_devlink_rate_node_del(struct devlink_rate *rate_node, void *priv, struct netlink_ext_ack *extack) { - struct mlx5_esw_rate_group *group = priv; - struct mlx5_eswitch *esw = group->esw; + struct mlx5_esw_sched_node *node = priv; + struct mlx5_eswitch *esw = node->esw; int err; esw_qos_lock(esw); - err = __esw_qos_destroy_rate_group(group, extack); + err = __esw_qos_destroy_rate_node(node, extack); esw_qos_put(esw); esw_qos_unlock(esw); return err; } -int mlx5_esw_qos_vport_update_group(struct mlx5_vport *vport, - struct mlx5_esw_rate_group *group, - struct netlink_ext_ack *extack) +int mlx5_esw_qos_vport_update_node(struct mlx5_vport *vport, + struct mlx5_esw_sched_node *node, + struct netlink_ext_ack *extack) { struct mlx5_eswitch *esw = vport->dev->priv.eswitch; int err = 0; - if (group && group->esw != esw) { + if (node && node->esw != esw) { NL_SET_ERR_MSG_MOD(extack, "Cross E-Switch scheduling is not supported"); return -EOPNOTSUPP; } esw_qos_lock(esw); - if (!vport->qos.enabled && !group) + if (!vport->qos.enabled && !node) goto unlock; err = esw_qos_vport_enable(vport, 0, 0, extack); if (!err) - err = esw_qos_vport_update_group(vport, group, extack); + err = esw_qos_vport_update_node(vport, node, extack); unlock: esw_qos_unlock(esw); return err; @@ -1108,12 +1109,12 @@ int mlx5_esw_devlink_rate_parent_set(struct devlink_rate *devlink_rate, void *priv, void *parent_priv, struct netlink_ext_ack *extack) { - struct mlx5_esw_rate_group *group; + struct mlx5_esw_sched_node 
*node; struct mlx5_vport *vport = priv; if (!parent) - return mlx5_esw_qos_vport_update_group(vport, NULL, extack); + return mlx5_esw_qos_vport_update_node(vport, NULL, extack); - group = parent_priv; - return mlx5_esw_qos_vport_update_group(vport, group, extack); + node = parent_priv; + return mlx5_esw_qos_vport_update_node(vport, node, extack); } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h index e789fb14989b..38f912f5a707 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h @@ -222,7 +222,7 @@ struct mlx5_vport { /* A computed value indicating relative min_rate between vports in a group. */ u32 bw_share; /* The parent group of this vport scheduling element. */ - struct mlx5_esw_rate_group *parent; + struct mlx5_esw_sched_node *parent; /* Membership in the parent 'members' list. */ struct list_head parent_entry; } qos; @@ -372,11 +372,11 @@ struct mlx5_eswitch { refcount_t refcnt; u32 root_tsar_ix; struct mlx5_qos_domain *domain; - /* Contains all vports with QoS enabled but no explicit group. - * Cannot be NULL if QoS is enabled, but may be a fake group - * referencing the root TSAR if the esw doesn't support groups. + /* Contains all vports with QoS enabled but no explicit node. + * Cannot be NULL if QoS is enabled, but may be a fake node + * referencing the root TSAR if the esw doesn't support nodes. 
*/ - struct mlx5_esw_rate_group *group0; + struct mlx5_esw_sched_node *node0; } qos; struct mlx5_esw_bridge_offloads *br_offloads; @@ -436,9 +436,9 @@ int mlx5_eswitch_set_vport_trust(struct mlx5_eswitch *esw, u16 vport_num, bool setting); int mlx5_eswitch_set_vport_rate(struct mlx5_eswitch *esw, u16 vport, u32 max_rate, u32 min_rate); -int mlx5_esw_qos_vport_update_group(struct mlx5_vport *vport, - struct mlx5_esw_rate_group *group, - struct netlink_ext_ack *extack); +int mlx5_esw_qos_vport_update_node(struct mlx5_vport *vport, + struct mlx5_esw_sched_node *node, + struct netlink_ext_ack *extack); int mlx5_eswitch_set_vepa(struct mlx5_eswitch *esw, u8 setting); int mlx5_eswitch_get_vepa(struct mlx5_eswitch *esw, u8 *setting); int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw, From patchwork Sun Oct 13 06:45:32 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 13833693 X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , , , Tariq Toukan Subject: [PATCH net-next 07/15] net/mlx5: Refactor vport scheduling element creation function Date: Sun, 13 Oct 2024 09:45:32 +0300 Message-ID: <20241013064540.170722-8-tariqt@nvidia.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20241013064540.170722-1-tariqt@nvidia.com> References: <20241013064540.170722-1-tariqt@nvidia.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org MIME-Version: 1.0
From: Carolina Jubran Modify the vport scheduling element creation function to get the parent node directly, aligning it with the group creation function. This ensures a consistent flow for scheduling elements creation, as the parent nodes already contain the device and parent element index. Signed-off-by: Carolina Jubran Signed-off-by: Tariq Toukan --- .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 27 ++++++++++--------- 1 file changed, 15 insertions(+), 12 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c index d3289c1cb87a..d2bdf04421b0 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c @@ -407,10 +407,10 @@ static int esw_qos_create_node_sched_elem(struct mlx5_core_dev *dev, u32 parent_ tsar_ix); } -static int esw_qos_vport_create_sched_element(struct mlx5_vport *vport, - u32 max_rate, u32 bw_share) +static int +esw_qos_vport_create_sched_element(struct mlx5_vport *vport, struct mlx5_esw_sched_node *parent, + u32 max_rate, u32 bw_share, u32 *sched_elem_ix) { - struct mlx5_esw_sched_node *parent = vport->qos.parent; u32 sched_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {}; struct mlx5_core_dev *dev = parent->esw->dev; void *attr; @@ -432,7 +432,7 @@ static int esw_qos_vport_create_sched_element(struct mlx5_vport *vport, err = mlx5_create_scheduling_element_cmd(dev, SCHEDULING_HIERARCHY_E_SWITCH, sched_ctx, - &vport->qos.esw_sched_elem_ix); + sched_elem_ix); if (err) { esw_warn(dev, "E-Switch create vport scheduling element failed (vport=%d,err=%d)\n", @@
-459,21 +459,23 @@ static int esw_qos_update_node_scheduling_element(struct mlx5_vport *vport, return err; } - esw_qos_vport_set_parent(vport, new_node); /* Use new node max rate if vport max rate is unlimited. */ max_rate = vport->qos.max_rate ? vport->qos.max_rate : new_node->max_rate; - err = esw_qos_vport_create_sched_element(vport, max_rate, vport->qos.bw_share); + err = esw_qos_vport_create_sched_element(vport, new_node, max_rate, vport->qos.bw_share, + &vport->qos.esw_sched_elem_ix); if (err) { NL_SET_ERR_MSG_MOD(extack, "E-Switch vport node set failed."); goto err_sched; } + esw_qos_vport_set_parent(vport, new_node); + return 0; err_sched: - esw_qos_vport_set_parent(vport, curr_node); max_rate = vport->qos.max_rate ? vport->qos.max_rate : curr_node->max_rate; - if (esw_qos_vport_create_sched_element(vport, max_rate, vport->qos.bw_share)) + if (esw_qos_vport_create_sched_element(vport, curr_node, max_rate, vport->qos.bw_share, + &vport->qos.esw_sched_elem_ix)) esw_warn(curr_node->esw->dev, "E-Switch vport node restore failed (vport=%d)\n", vport->vport); @@ -715,13 +717,14 @@ static int esw_qos_vport_enable(struct mlx5_vport *vport, if (err) return err; - INIT_LIST_HEAD(&vport->qos.parent_entry); - esw_qos_vport_set_parent(vport, esw->qos.node0); - - err = esw_qos_vport_create_sched_element(vport, max_rate, bw_share); + err = esw_qos_vport_create_sched_element(vport, esw->qos.node0, max_rate, bw_share, + &vport->qos.esw_sched_elem_ix); if (err) goto err_out; + INIT_LIST_HEAD(&vport->qos.parent_entry); + esw_qos_vport_set_parent(vport, esw->qos.node0); + vport->qos.enabled = true; trace_mlx5_esw_vport_qos_create(vport->dev, vport, bw_share, max_rate); From patchwork Sun Oct 13 06:45:33 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 13833695 X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , , , Tariq Toukan Subject: [PATCH net-next 08/15] net/mlx5: Refactor vport QoS to use scheduling node structure Date: Sun, 13 Oct 2024 09:45:33 +0300 Message-ID: <20241013064540.170722-9-tariqt@nvidia.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20241013064540.170722-1-tariqt@nvidia.com> References: <20241013064540.170722-1-tariqt@nvidia.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org MIME-Version: 1.0
From: Carolina Jubran

Refactor the vport QoS structure by moving group membership and scheduling details into the `mlx5_esw_sched_node` structure.

This change consolidates the vport into the rate hierarchy by unifying the handling of different types of scheduling element nodes.

In addition, add a direct reference to the mlx5_vport within the mlx5_esw_sched_node structure, to ensure that the vport is easily accessible when a scheduling node is associated with a vport.
Signed-off-by: Carolina Jubran
Signed-off-by: Tariq Toukan
---
 .../mlx5/core/esw/diag/qos_tracepoint.h       |   7 +-
 .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 146 ++++++++++++------
 .../net/ethernet/mellanox/mlx5/core/esw/qos.h |   3 +
 .../net/ethernet/mellanox/mlx5/core/eswitch.c |   2 +
 .../net/ethernet/mellanox/mlx5/core/eswitch.h |  11 +-
 5 files changed, 106 insertions(+), 63 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/qos_tracepoint.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/qos_tracepoint.h
index 0b50ef0871f2..43550a416a6f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/qos_tracepoint.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/qos_tracepoint.h
@@ -9,6 +9,7 @@
 #include
 #include "eswitch.h"
+#include "qos.h"

 TRACE_EVENT(mlx5_esw_vport_qos_destroy,
	    TP_PROTO(const struct mlx5_core_dev *dev, const struct mlx5_vport *vport),
@@ -19,7 +20,7 @@ TRACE_EVENT(mlx5_esw_vport_qos_destroy,
	    ),
	    TP_fast_assign(__assign_str(devname);
		    __entry->vport_id = vport->vport;
-		    __entry->sched_elem_ix = vport->qos.esw_sched_elem_ix;
+		    __entry->sched_elem_ix = mlx5_esw_qos_vport_get_sched_elem_ix(vport);
	    ),
	    TP_printk("(%s) vport=%hu sched_elem_ix=%u\n",
		      __get_str(devname), __entry->vport_id, __entry->sched_elem_ix
@@ -39,10 +40,10 @@ DECLARE_EVENT_CLASS(mlx5_esw_vport_qos_template,
	    ),
	    TP_fast_assign(__assign_str(devname);
		    __entry->vport_id = vport->vport;
-		    __entry->sched_elem_ix = vport->qos.esw_sched_elem_ix;
+		    __entry->sched_elem_ix = mlx5_esw_qos_vport_get_sched_elem_ix(vport);
		    __entry->bw_share = bw_share;
		    __entry->max_rate = max_rate;
-		    __entry->parent = vport->qos.parent;
+		    __entry->parent = mlx5_esw_qos_vport_get_parent(vport);
	    ),
	    TP_printk("(%s) vport=%hu sched_elem_ix=%u bw_share=%u, max_rate=%u parent=%p\n",
		      __get_str(devname), __entry->vport_id, __entry->sched_elem_ix,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index d2bdf04421b0..571f7c797968 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -63,6 +63,7 @@ static void esw_qos_domain_release(struct mlx5_eswitch *esw)

 enum sched_node_type {
 	SCHED_NODE_TYPE_VPORTS_TSAR,
+	SCHED_NODE_TYPE_VPORT,
 };

 struct mlx5_esw_sched_node {
@@ -82,13 +83,34 @@ struct mlx5_esw_sched_node {
 	struct mlx5_eswitch *esw;
 	/* The children nodes of this node, empty list for leaf nodes. */
 	struct list_head children;
+	/* Valid only if this node is associated with a vport. */
+	struct mlx5_vport *vport;
 };

-static void esw_qos_vport_set_parent(struct mlx5_vport *vport, struct mlx5_esw_sched_node *parent)
+static void
+esw_qos_node_set_parent(struct mlx5_esw_sched_node *node, struct mlx5_esw_sched_node *parent)
+{
+	list_del_init(&node->entry);
+	node->parent = parent;
+	list_add_tail(&node->entry, &parent->children);
+	node->esw = parent->esw;
+}
+
+u32 mlx5_esw_qos_vport_get_sched_elem_ix(const struct mlx5_vport *vport)
+{
+	if (!vport->qos.sched_node)
+		return 0;
+
+	return vport->qos.sched_node->ix;
+}
+
+struct mlx5_esw_sched_node *
+mlx5_esw_qos_vport_get_parent(const struct mlx5_vport *vport)
 {
-	list_del_init(&vport->qos.parent_entry);
-	vport->qos.parent = parent;
-	list_add_tail(&vport->qos.parent_entry, &parent->children);
+	if (!vport->qos.sched_node)
+		return 0;
+
+	return vport->qos.sched_node->parent;
 }

 static int esw_qos_sched_elem_config(struct mlx5_core_dev *dev, u32 sched_elem_ix,
@@ -131,10 +153,11 @@ static int esw_qos_vport_config(struct mlx5_vport *vport,
 				u32 max_rate, u32 bw_share,
 				struct netlink_ext_ack *extack)
 {
-	struct mlx5_core_dev *dev = vport->qos.parent->esw->dev;
+	struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node;
+	struct mlx5_core_dev *dev = vport_node->parent->esw->dev;
 	int err;

-	err = esw_qos_sched_elem_config(dev, vport->qos.esw_sched_elem_ix, max_rate, bw_share);
+	err = esw_qos_sched_elem_config(dev, vport_node->ix, max_rate, bw_share);
 	if (err) {
 		esw_warn(dev,
 			 "E-Switch modify vport scheduling element failed (vport=%d,err=%d)\n",
@@ -151,15 +174,15 @@ static int esw_qos_vport_config(struct mlx5_vport *vport,
 static u32 esw_qos_calculate_node_min_rate_divider(struct mlx5_esw_sched_node *node)
 {
 	u32 fw_max_bw_share = MLX5_CAP_QOS(node->esw->dev, max_tsar_bw_share);
-	struct mlx5_vport *vport;
+	struct mlx5_esw_sched_node *vport_node;
 	u32 max_guarantee = 0;

 	/* Find max min_rate across all vports in this node.
 	 * This will correspond to fw_max_bw_share in the final bw_share calculation.
 	 */
-	list_for_each_entry(vport, &node->children, qos.parent_entry) {
-		if (vport->qos.min_rate > max_guarantee)
-			max_guarantee = vport->qos.min_rate;
+	list_for_each_entry(vport_node, &node->children, entry) {
+		if (vport_node->min_rate > max_guarantee)
+			max_guarantee = vport_node->min_rate;
 	}

 	if (max_guarantee)
@@ -213,21 +236,22 @@ static int esw_qos_normalize_node_min_rate(struct mlx5_esw_sched_node *node,
 {
 	u32 fw_max_bw_share = MLX5_CAP_QOS(node->esw->dev, max_tsar_bw_share);
 	u32 divider = esw_qos_calculate_node_min_rate_divider(node);
-	struct mlx5_vport *vport;
+	struct mlx5_esw_sched_node *vport_node;
 	u32 bw_share;
 	int err;

-	list_for_each_entry(vport, &node->children, qos.parent_entry) {
-		bw_share = esw_qos_calc_bw_share(vport->qos.min_rate, divider, fw_max_bw_share);
+	list_for_each_entry(vport_node, &node->children, entry) {
+		bw_share = esw_qos_calc_bw_share(vport_node->min_rate, divider, fw_max_bw_share);

-		if (bw_share == vport->qos.bw_share)
+		if (bw_share == vport_node->bw_share)
 			continue;

-		err = esw_qos_vport_config(vport, vport->qos.max_rate, bw_share, extack);
+		err = esw_qos_vport_config(vport_node->vport, vport_node->max_rate, bw_share,
+					   extack);
 		if (err)
 			return err;

-		vport->qos.bw_share = bw_share;
+		vport_node->bw_share = bw_share;
 	}

 	return 0;
@@ -271,6 +295,7 @@ static int esw_qos_normalize_min_rate(struct mlx5_eswitch *esw, struct netlink_e
 static int esw_qos_set_vport_min_rate(struct mlx5_vport *vport,
 				      u32 min_rate, struct netlink_ext_ack *extack)
 {
+	struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node;
 	struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
 	u32 fw_max_bw_share, previous_min_rate;
 	bool min_rate_supported;
@@ -282,14 +307,14 @@ static int esw_qos_set_vport_min_rate(struct mlx5_vport *vport,
 		fw_max_bw_share >= MLX5_MIN_BW_SHARE;
 	if (min_rate && !min_rate_supported)
 		return -EOPNOTSUPP;
-	if (min_rate == vport->qos.min_rate)
+	if (min_rate == vport_node->min_rate)
 		return 0;

-	previous_min_rate = vport->qos.min_rate;
-	vport->qos.min_rate = min_rate;
-	err = esw_qos_normalize_node_min_rate(vport->qos.parent, extack);
+	previous_min_rate = vport_node->min_rate;
+	vport_node->min_rate = min_rate;
+	err = esw_qos_normalize_node_min_rate(vport_node->parent, extack);
 	if (err)
-		vport->qos.min_rate = previous_min_rate;
+		vport_node->min_rate = previous_min_rate;

 	return err;
 }
@@ -297,6 +322,7 @@ static int esw_qos_set_vport_min_rate(struct mlx5_vport *vport,
 static int esw_qos_set_vport_max_rate(struct mlx5_vport *vport,
 				      u32 max_rate, struct netlink_ext_ack *extack)
 {
+	struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node;
 	struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
 	u32 act_max_rate = max_rate;
 	bool max_rate_supported;
@@ -307,17 +333,17 @@ static int esw_qos_set_vport_max_rate(struct mlx5_vport *vport,
 	if (max_rate && !max_rate_supported)
 		return -EOPNOTSUPP;
-	if (max_rate == vport->qos.max_rate)
+	if (max_rate == vport_node->max_rate)
 		return 0;

 	/* Use parent node limit if new max rate is 0. */
 	if (!max_rate)
-		act_max_rate = vport->qos.parent->max_rate;
+		act_max_rate = vport_node->parent->max_rate;

-	err = esw_qos_vport_config(vport, act_max_rate, vport->qos.bw_share, extack);
+	err = esw_qos_vport_config(vport, act_max_rate, vport_node->bw_share, extack);

 	if (!err)
-		vport->qos.max_rate = max_rate;
+		vport_node->max_rate = max_rate;

 	return err;
 }
@@ -354,7 +380,7 @@ static int esw_qos_set_node_min_rate(struct mlx5_esw_sched_node *node,
 static int esw_qos_set_node_max_rate(struct mlx5_esw_sched_node *node,
 				     u32 max_rate, struct netlink_ext_ack *extack)
 {
-	struct mlx5_vport *vport;
+	struct mlx5_esw_sched_node *vport_node;
 	int err;

 	if (node->max_rate == max_rate)
@@ -367,11 +393,12 @@ static int esw_qos_set_node_max_rate(struct mlx5_esw_sched_node *node,
 	node->max_rate = max_rate;

 	/* Any unlimited vports in the node should be set with the value of the node. */
-	list_for_each_entry(vport, &node->children, qos.parent_entry) {
-		if (vport->qos.max_rate)
+	list_for_each_entry(vport_node, &node->children, entry) {
+		if (vport_node->max_rate)
 			continue;

-		err = esw_qos_vport_config(vport, max_rate, vport->qos.bw_share, extack);
+		err = esw_qos_vport_config(vport_node->vport, max_rate, vport_node->bw_share,
+					   extack);
 		if (err)
 			NL_SET_ERR_MSG_MOD(extack,
 					   "E-Switch vport implicit rate limit setting failed");
@@ -448,34 +475,37 @@ static int esw_qos_update_node_scheduling_element(struct mlx5_vport *vport,
 						  struct mlx5_esw_sched_node *new_node,
 						  struct netlink_ext_ack *extack)
 {
+	struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node;
 	u32 max_rate;
 	int err;

 	err = mlx5_destroy_scheduling_element_cmd(curr_node->esw->dev,
 						  SCHEDULING_HIERARCHY_E_SWITCH,
-						  vport->qos.esw_sched_elem_ix);
+						  vport_node->ix);
 	if (err) {
 		NL_SET_ERR_MSG_MOD(extack, "E-Switch destroy vport scheduling element failed");
 		return err;
 	}

 	/* Use new node max rate if vport max rate is unlimited. */
-	max_rate = vport->qos.max_rate ? vport->qos.max_rate : new_node->max_rate;
-	err = esw_qos_vport_create_sched_element(vport, new_node, max_rate, vport->qos.bw_share,
-						 &vport->qos.esw_sched_elem_ix);
+	max_rate = vport_node->max_rate ? vport_node->max_rate : new_node->max_rate;
+	err = esw_qos_vport_create_sched_element(vport, new_node, max_rate,
+						 vport_node->bw_share,
+						 &vport_node->ix);
 	if (err) {
 		NL_SET_ERR_MSG_MOD(extack, "E-Switch vport node set failed.");
 		goto err_sched;
 	}

-	esw_qos_vport_set_parent(vport, new_node);
+	esw_qos_node_set_parent(vport->qos.sched_node, new_node);

 	return 0;

 err_sched:
-	max_rate = vport->qos.max_rate ? vport->qos.max_rate : curr_node->max_rate;
-	if (esw_qos_vport_create_sched_element(vport, curr_node, max_rate, vport->qos.bw_share,
-					       &vport->qos.esw_sched_elem_ix))
+	max_rate = vport_node->max_rate ? vport_node->max_rate : curr_node->max_rate;
+	if (esw_qos_vport_create_sched_element(vport, curr_node, max_rate,
+					       vport_node->bw_share,
+					       &vport_node->ix))
 		esw_warn(curr_node->esw->dev, "E-Switch vport node restore failed (vport=%d)\n",
 			 vport->vport);
@@ -486,12 +516,13 @@ static int esw_qos_vport_update_node(struct mlx5_vport *vport,
 				     struct mlx5_esw_sched_node *node,
 				     struct netlink_ext_ack *extack)
 {
+	struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node;
 	struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
 	struct mlx5_esw_sched_node *new_node, *curr_node;
 	int err;

 	esw_assert_qos_lock_held(esw);
-	curr_node = vport->qos.parent;
+	curr_node = vport_node->parent;
 	new_node = node ?: esw->qos.node0;
 	if (curr_node == new_node)
 		return 0;
@@ -501,7 +532,7 @@ static int esw_qos_vport_update_node(struct mlx5_vport *vport,
 		return err;

 	/* Recalculate bw share weights of old and new nodes */
-	if (vport->qos.bw_share || new_node->bw_share) {
+	if (vport_node->bw_share || new_node->bw_share) {
 		esw_qos_normalize_node_min_rate(curr_node, extack);
 		esw_qos_normalize_node_min_rate(new_node, extack);
 	}
@@ -707,6 +738,7 @@ static int esw_qos_vport_enable(struct mlx5_vport *vport,
 				u32 max_rate, u32 bw_share, struct netlink_ext_ack *extack)
 {
 	struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
+	u32 sched_elem_ix;
 	int err;

 	esw_assert_qos_lock_held(esw);
@@ -718,18 +750,26 @@ static int esw_qos_vport_enable(struct mlx5_vport *vport,
 		return err;

 	err = esw_qos_vport_create_sched_element(vport, esw->qos.node0, max_rate, bw_share,
-						 &vport->qos.esw_sched_elem_ix);
+						 &sched_elem_ix);
 	if (err)
 		goto err_out;

-	INIT_LIST_HEAD(&vport->qos.parent_entry);
-	esw_qos_vport_set_parent(vport, esw->qos.node0);
+	vport->qos.sched_node = __esw_qos_alloc_rate_node(esw, sched_elem_ix, SCHED_NODE_TYPE_VPORT,
+							  esw->qos.node0);
+	if (!vport->qos.sched_node)
+		goto err_alloc;

 	vport->qos.enabled = true;
+	vport->qos.sched_node->vport = vport;
+
 	trace_mlx5_esw_vport_qos_create(vport->dev, vport, bw_share, max_rate);

 	return 0;

+err_alloc:
+	if (mlx5_destroy_scheduling_element_cmd(esw->dev,
+						SCHEDULING_HIERARCHY_E_SWITCH, sched_elem_ix))
+		esw_warn(esw->dev, "E-Switch destroy vport scheduling element failed.\n");
 err_out:
 	esw_qos_put(esw);

@@ -739,6 +779,7 @@ static int esw_qos_vport_enable(struct mlx5_vport *vport,
 void mlx5_esw_qos_vport_disable(struct mlx5_vport *vport)
 {
 	struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
+	struct mlx5_esw_sched_node *vport_node;
 	struct mlx5_core_dev *dev;
 	int err;

@@ -746,20 +787,23 @@ void mlx5_esw_qos_vport_disable(struct mlx5_vport *vport)
 	esw_qos_lock(esw);
 	if (!vport->qos.enabled)
 		goto unlock;
-	WARN(vport->qos.parent != esw->qos.node0,
+	vport_node = vport->qos.sched_node;
+	WARN(vport_node->parent != esw->qos.node0,
 	     "Disabling QoS on port before detaching it from node");

-	dev = vport->qos.parent->esw->dev;
+	trace_mlx5_esw_vport_qos_destroy(dev, vport);
+
+	dev = vport_node->esw->dev;
 	err = mlx5_destroy_scheduling_element_cmd(dev,
 						  SCHEDULING_HIERARCHY_E_SWITCH,
-						  vport->qos.esw_sched_elem_ix);
+						  vport_node->ix);
 	if (err)
 		esw_warn(dev,
 			 "E-Switch destroy vport scheduling element failed (vport=%d,err=%d)\n",
 			 vport->vport, err);

+	__esw_qos_free_node(vport_node);
 	memset(&vport->qos, 0, sizeof(vport->qos));
-	trace_mlx5_esw_vport_qos_destroy(dev, vport);

 	esw_qos_put(esw);
 unlock:
@@ -792,8 +836,8 @@ bool mlx5_esw_qos_get_vport_rate(struct mlx5_vport *vport, u32 *max_rate, u32 *m
 	esw_qos_lock(esw);
 	enabled = vport->qos.enabled;
 	if (enabled) {
-		*max_rate = vport->qos.max_rate;
-		*min_rate = vport->qos.min_rate;
+		*max_rate = vport->qos.sched_node->max_rate;
+		*min_rate = vport->qos.sched_node->min_rate;
 	}
 	esw_qos_unlock(esw);
 	return enabled;
@@ -889,16 +933,16 @@ int mlx5_esw_qos_modify_vport_rate(struct mlx5_eswitch *esw, u16 vport_num, u32
 	esw_qos_lock(esw);
 	if (!vport->qos.enabled) {
 		/* Eswitch QoS wasn't enabled yet. Enable it and vport QoS. */
-		err = esw_qos_vport_enable(vport, rate_mbps, vport->qos.bw_share, NULL);
+		err = esw_qos_vport_enable(vport, rate_mbps, vport->qos.sched_node->bw_share, NULL);
 	} else {
-		struct mlx5_core_dev *dev = vport->qos.parent->esw->dev;
+		struct mlx5_core_dev *dev = vport->qos.sched_node->parent->esw->dev;

 		MLX5_SET(scheduling_context, ctx, max_average_bw, rate_mbps);
 		bitmask = MODIFY_SCHEDULING_ELEMENT_IN_MODIFY_BITMASK_MAX_AVERAGE_BW;
 		err = mlx5_modify_scheduling_element_cmd(dev,
 							 SCHEDULING_HIERARCHY_E_SWITCH,
 							 ctx,
-							 vport->qos.esw_sched_elem_ix,
+							 vport->qos.sched_node->ix,
 							 bitmask);
 	}
 	esw_qos_unlock(esw);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h
index b4045efbaf9e..61a6fdd5c267 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h
@@ -13,6 +13,9 @@ int mlx5_esw_qos_set_vport_rate(struct mlx5_vport *evport, u32 max_rate, u32 min
 bool mlx5_esw_qos_get_vport_rate(struct mlx5_vport *vport, u32 *max_rate, u32 *min_rate);
 void mlx5_esw_qos_vport_disable(struct mlx5_vport *vport);

+u32 mlx5_esw_qos_vport_get_sched_elem_ix(const struct mlx5_vport *vport);
+struct mlx5_esw_sched_node *mlx5_esw_qos_vport_get_parent(const struct mlx5_vport *vport);
+
 int mlx5_esw_devlink_rate_leaf_tx_share_set(struct devlink_rate *rate_leaf, void *priv,
 					    u64 tx_share, struct netlink_ext_ack *extack);
 int mlx5_esw_devlink_rate_leaf_tx_max_set(struct devlink_rate *rate_leaf, void *priv,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 2bcd42305f46..09719e9b8611 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1061,6 +1061,7 @@ static void mlx5_eswitch_clear_vf_vports_info(struct mlx5_eswitch *esw)
 	unsigned long i;

 	mlx5_esw_for_each_vf_vport(esw, i, vport, esw->esw_funcs.num_vfs) {
+		kfree(vport->qos.sched_node);
 		memset(&vport->qos, 0, sizeof(vport->qos));
 		memset(&vport->info, 0, sizeof(vport->info));
 		vport->info.link_state = MLX5_VPORT_ADMIN_STATE_AUTO;
@@ -1073,6 +1074,7 @@ static void mlx5_eswitch_clear_ec_vf_vports_info(struct mlx5_eswitch *esw)
 	unsigned long i;

 	mlx5_esw_for_each_ec_vf_vport(esw, i, vport, esw->esw_funcs.num_ec_vfs) {
+		kfree(vport->qos.sched_node);
 		memset(&vport->qos, 0, sizeof(vport->qos));
 		memset(&vport->info, 0, sizeof(vport->info));
 		vport->info.link_state = MLX5_VPORT_ADMIN_STATE_AUTO;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 38f912f5a707..e77ec82787de 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -216,15 +216,8 @@ struct mlx5_vport {
 	struct {
 		/* Initially false, set to true whenever any QoS features are used. */
 		bool enabled;
-		u32 esw_sched_elem_ix;
-		u32 min_rate;
-		u32 max_rate;
-		/* A computed value indicating relative min_rate between vports in a group. */
-		u32 bw_share;
-		/* The parent group of this vport scheduling element. */
-		struct mlx5_esw_sched_node *parent;
-		/* Membership in the parent 'members' list. */
-		struct list_head parent_entry;
+		/* Vport scheduling element node. */
+		struct mlx5_esw_sched_node *sched_node;
 	} qos;

 	u16 vport;

From patchwork Sun Oct 13 06:45:34 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13833696
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Tariq Toukan
Subject: [PATCH net-next 09/15] net/mlx5: Remove vport QoS enabled flag
Date: Sun, 13 Oct 2024 09:45:34 +0300
Message-ID: <20241013064540.170722-10-tariqt@nvidia.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20241013064540.170722-1-tariqt@nvidia.com>
References: <20241013064540.170722-1-tariqt@nvidia.com>
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org
MIME-Version: 1.0
7mm9BEriLG4afeKVRip2N1dLLj7owQ7waiWiJ4Sd0+K3K4hszZxbBg4kVBP5row6H/AojZNJIpQbC8F1wlfIkez4jbP68lAizpLw7ElOP5atLXMVpu7dN52qbQnkQ1Us7Y7I60jRHOLiscBqyu11KJiowwwDgnZBetIBHkcgamvXc4TayusSPF3uVj6rNvxKmN/gtCB1Ls/DSRJ+KkS/xin8IwRh5SHLGjv1rL1NDO1fWa8/z8rdVXFRSuDIggl7BdGcxKT+XZP7EFimaB7axOoJKAGQTuAwgBM3QFTPzG+NLzZm7iiGr2vertvKWRB7VjWNAtmwlOG/M0C1GiskRLdHIW+EVms4GpBYMJKnnvXlw73iB51LUX6FaaoDec0tUJbJW90CASBVJALY33oG9EyAxigfdKHlqmI+f6MX/LZV8YsJBaHNgUhYx6IRSHjUUOCLUZ09XD99euZFPJt8B9KFJtgv3W/GH7/D0ooX0PEoYHwdmrrB9hqmZQzAs1OmeXpk666glB2z4dFtup0TCX4gxd7DsGoZTTpSRhfpUUjhz5w2J03PFgwP9RYORyu/pqxO9ypi9xvAeSogG+of7NkOcoZjXQhK+kLi7x71ULFDoTGKTDGc9W/jTjm/TjohWREUF7kZBx2fJILuSWveJ5e/th8eTGGg9jydjWbd+GxZUm3WQhoBJMf0rQbEiW5k1QaVykxvEfesNA7XD5auYG8gTocFdWKW5r5g8ZRnUbpUAInUKiSImnPoxVw2FbP0WIvsb3LiJcVqM9BggJNVymL7q7cEQumb6lJIawlOr0OgBPdCWa/cmGfJj4RKY9OHyuMe5P6VNQPxezPFJlikFp0jmXGuoaIlLd3b2uAANRcf3+fsjYjomdlsXgVaiM2W5ojb3YXbPaP/boRW+otwRWU2I3vL8RRL6pVE10DmE1GQf3NcAt4e+JtEIhS2wmf+5wu1CeeawO6NwUnmC0OpwMqkagzQiWtfiZcxebHnndRalPmb3S8cX41g8alLZ0Mkmq5RrdNFIt5+agjzGtYhR8S/0T7aEMDipjQnqn8uAiTRN0vA3jEmho40qA3fas3YkCtsMsiGiJv/drmWXu54sg7ZDGZ7EhL3l9u0pnq4BJA1ShgEKohbR3xfhLCJ7MC2D6qZXLG1fEl2N69YClLuUtSwyhO/n5snFfh8U2FjCEuzNqzpzSuLIUhQcpxswNSWWx7D4ST/U6O/4FRZqd97uwCqbP4ZMDqd+r/cPx4htyZpMAvLid7wWP9KUTwju4TurWgTAQaxjRfQqcYOKt/exs1MULvtY7a3VD383p4oIahLCHvmvE9+lEfy+2D4qgKyYyyloHBn5FWcUk2QbrW//k0w6FtvKTv3GPYTJiOB1zBW0cTfRhbmsCztDVeW2Iad X-Forefront-Antispam-Report: CIP:216.228.118.233;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:mail.nvidia.com;PTR:dc7edge2.nvidia.com;CAT:NONE;SFS:(13230040)(82310400026)(36860700013)(1800799024)(376014);DIR:OUT;SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Oct 2024 06:46:51.4724 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: b2fde36d-5851-4243-1c2e-08dceb52d1a8 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: 
From: Carolina Jubran

Remove the `enabled` flag from the `vport->qos` struct, as QoS now relies solely on the `sched_node` pointer to determine whether QoS features are in use.

Currently, the vport `qos` struct consists only of the `sched_node`, introducing an unnecessary two-level reference. However, the qos struct is retained, as it will be extended in future patches to support new QoS features.

Signed-off-by: Carolina Jubran
Signed-off-by: Tariq Toukan
---
 drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c | 13 ++++++-------
 drivers/net/ethernet/mellanox/mlx5/core/eswitch.h |  2 --
 2 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index 571f7c797968..0f465de4a916 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -742,7 +742,7 @@ static int esw_qos_vport_enable(struct mlx5_vport *vport,
 	int err;
 
 	esw_assert_qos_lock_held(esw);
-	if (vport->qos.enabled)
+	if (vport->qos.sched_node)
 		return 0;
 
 	err = esw_qos_get(esw, extack);
@@ -759,7 +759,6 @@ static int esw_qos_vport_enable(struct mlx5_vport *vport,
 	if (!vport->qos.sched_node)
 		goto err_alloc;
 
-	vport->qos.enabled = true;
 	vport->qos.sched_node->vport = vport;
 
 	trace_mlx5_esw_vport_qos_create(vport->dev, vport, bw_share, max_rate);
@@ -785,9 +784,9 @@ void mlx5_esw_qos_vport_disable(struct mlx5_vport *vport)
 	lockdep_assert_held(&esw->state_lock);
 
 	esw_qos_lock(esw);
-	if (!vport->qos.enabled)
-		goto unlock;
 	vport_node = vport->qos.sched_node;
+	if (!vport_node)
+		goto unlock;
 
 	WARN(vport_node->parent != esw->qos.node0,
 	     "Disabling QoS on port before detaching it from node");
@@ -834,7 +833,7 @@ bool mlx5_esw_qos_get_vport_rate(struct mlx5_vport *vport, u32 *max_rate, u32 *m
 	bool enabled;
 
 	esw_qos_lock(esw);
-	enabled = vport->qos.enabled;
+	enabled = !!vport->qos.sched_node;
 	if (enabled) {
 		*max_rate = vport->qos.sched_node->max_rate;
 		*min_rate = vport->qos.sched_node->min_rate;
@@ -931,7 +930,7 @@ int mlx5_esw_qos_modify_vport_rate(struct mlx5_eswitch *esw, u16 vport_num, u32
 	}
 
 	esw_qos_lock(esw);
-	if (!vport->qos.enabled) {
+	if (!vport->qos.sched_node) {
 		/* Eswitch QoS wasn't enabled yet. Enable it and vport QoS. */
 		err = esw_qos_vport_enable(vport, rate_mbps, vport->qos.sched_node->bw_share, NULL);
 	} else {
@@ -1140,7 +1139,7 @@ int mlx5_esw_qos_vport_update_node(struct mlx5_vport *vport,
 	}
 
 	esw_qos_lock(esw);
-	if (!vport->qos.enabled && !node)
+	if (!vport->qos.sched_node && !node)
 		goto unlock;
 
 	err = esw_qos_vport_enable(vport, 0, 0, extack);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index e77ec82787de..14dd42d44e6f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -214,8 +214,6 @@ struct mlx5_vport {
 
 	/* Protected with the E-Switch qos domain lock. */
 	struct {
-		/* Initially false, set to true whenever any QoS features are used. */
-		bool enabled;
 		/* Vport scheduling element node. */
 		struct mlx5_esw_sched_node *sched_node;
 	} qos;

From patchwork Sun Oct 13 06:45:35 2024
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Tariq Toukan
Subject: [PATCH net-next 10/15] net/mlx5: Simplify QoS scheduling element configuration
Date: Sun, 13 Oct 2024 09:45:35 +0300
Message-ID: <20241013064540.170722-11-tariqt@nvidia.com>
In-Reply-To: <20241013064540.170722-1-tariqt@nvidia.com>
References: <20241013064540.170722-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
From: Carolina Jubran

Simplify the configuration of QoS scheduling elements by removing the separate functions `esw_qos_node_config` and `esw_qos_vport_config`. Instead, directly use the existing `esw_qos_sched_elem_config` function for both nodes and vports.

This unification helps generalize operations on scheduling element nodes.
Signed-off-by: Carolina Jubran
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 86 +++++++++----------
 1 file changed, 40 insertions(+), 46 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index 0f465de4a916..ffd5d4d38fe5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -66,6 +66,11 @@ enum sched_node_type {
 	SCHED_NODE_TYPE_VPORT,
 };
 
+static const char * const sched_node_type_str[] = {
+	[SCHED_NODE_TYPE_VPORTS_TSAR] = "vports TSAR",
+	[SCHED_NODE_TYPE_VPORT] = "vport",
+};
+
 struct mlx5_esw_sched_node {
 	u32 ix;
 	/* Bandwidth parameters. */
@@ -113,11 +118,27 @@ mlx5_esw_qos_vport_get_parent(const struct mlx5_vport *vport)
 	return vport->qos.sched_node->parent;
 }
 
-static int esw_qos_sched_elem_config(struct mlx5_core_dev *dev, u32 sched_elem_ix,
-				     u32 max_rate, u32 bw_share)
+static void esw_qos_sched_elem_config_warn(struct mlx5_esw_sched_node *node, int err)
+{
+	if (node->vport) {
+		esw_warn(node->esw->dev,
+			 "E-Switch modify %s scheduling element failed (vport=%d,err=%d)\n",
+			 sched_node_type_str[node->type], node->vport->vport, err);
+		return;
+	}
+
+	esw_warn(node->esw->dev,
+		 "E-Switch modify %s scheduling element failed (err=%d)\n",
+		 sched_node_type_str[node->type], err);
+}
+
+static int esw_qos_sched_elem_config(struct mlx5_esw_sched_node *node, u32 max_rate, u32 bw_share,
+				     struct netlink_ext_ack *extack)
 {
 	u32 sched_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {};
+	struct mlx5_core_dev *dev = node->esw->dev;
 	u32 bitmask = 0;
+	int err;
 
 	if (!MLX5_CAP_GEN(dev, qos) || !MLX5_CAP_QOS(dev, esw_scheduling))
 		return -EOPNOTSUPP;
@@ -127,46 +148,22 @@ static int esw_qos_sched_elem_config(struct mlx5_core_dev *dev, u32 sched_elem_i
 	bitmask |= MODIFY_SCHEDULING_ELEMENT_IN_MODIFY_BITMASK_MAX_AVERAGE_BW;
 	bitmask |= MODIFY_SCHEDULING_ELEMENT_IN_MODIFY_BITMASK_BW_SHARE;
 
-	return mlx5_modify_scheduling_element_cmd(dev,
-						  SCHEDULING_HIERARCHY_E_SWITCH,
-						  sched_ctx,
-						  sched_elem_ix,
-						  bitmask);
-}
-
-static int esw_qos_node_config(struct mlx5_esw_sched_node *node,
-			       u32 max_rate, u32 bw_share, struct netlink_ext_ack *extack)
-{
-	struct mlx5_core_dev *dev = node->esw->dev;
-	int err;
-
-	err = esw_qos_sched_elem_config(dev, node->ix, max_rate, bw_share);
-	if (err)
-		NL_SET_ERR_MSG_MOD(extack, "E-Switch modify node TSAR element failed");
-
-	trace_mlx5_esw_node_qos_config(dev, node, node->ix, bw_share, max_rate);
-
-	return err;
-}
-
-static int esw_qos_vport_config(struct mlx5_vport *vport,
-				u32 max_rate, u32 bw_share,
-				struct netlink_ext_ack *extack)
-{
-	struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node;
-	struct mlx5_core_dev *dev = vport_node->parent->esw->dev;
-	int err;
-
-	err = esw_qos_sched_elem_config(dev, vport_node->ix, max_rate, bw_share);
+	err = mlx5_modify_scheduling_element_cmd(dev,
+						 SCHEDULING_HIERARCHY_E_SWITCH,
+						 sched_ctx,
+						 node->ix,
+						 bitmask);
 	if (err) {
-		esw_warn(dev,
-			 "E-Switch modify vport scheduling element failed (vport=%d,err=%d)\n",
-			 vport->vport, err);
-		NL_SET_ERR_MSG_MOD(extack, "E-Switch modify vport scheduling element failed");
+		esw_qos_sched_elem_config_warn(node, err);
+		NL_SET_ERR_MSG_MOD(extack, "E-Switch modify scheduling element failed");
+		return err;
 	}
 
-	trace_mlx5_esw_vport_qos_config(dev, vport, bw_share, max_rate);
+	if (node->type == SCHED_NODE_TYPE_VPORTS_TSAR)
+		trace_mlx5_esw_node_qos_config(dev, node, node->ix, bw_share, max_rate);
+	else if (node->type == SCHED_NODE_TYPE_VPORT)
+		trace_mlx5_esw_vport_qos_config(dev, node->vport, bw_share, max_rate);
 
 	return 0;
 }
@@ -246,8 +243,7 @@ static int esw_qos_normalize_node_min_rate(struct mlx5_esw_sched_node *node,
 		if (bw_share == vport_node->bw_share)
 			continue;
 
-		err = esw_qos_vport_config(vport_node->vport, vport_node->max_rate, bw_share,
-					   extack);
+		err = esw_qos_sched_elem_config(vport_node, vport_node->max_rate, bw_share, extack);
 		if (err)
 			return err;
 
@@ -274,7 +270,7 @@ static int esw_qos_normalize_min_rate(struct mlx5_eswitch *esw, struct netlink_e
 		if (bw_share == node->bw_share)
 			continue;
 
-		err = esw_qos_node_config(node, node->max_rate, bw_share, extack);
+		err = esw_qos_sched_elem_config(node, node->max_rate, bw_share, extack);
 		if (err)
 			return err;
 
@@ -340,8 +336,7 @@ static int esw_qos_set_vport_max_rate(struct mlx5_vport *vport,
 	if (!max_rate)
 		act_max_rate = vport_node->parent->max_rate;
 
-	err = esw_qos_vport_config(vport, act_max_rate, vport_node->bw_share, extack);
-
+	err = esw_qos_sched_elem_config(vport_node, act_max_rate, vport_node->bw_share, extack);
 	if (!err)
 		vport_node->max_rate = max_rate;
 
@@ -386,7 +381,7 @@ static int esw_qos_set_node_max_rate(struct mlx5_esw_sched_node *node,
 	if (node->max_rate == max_rate)
 		return 0;
 
-	err = esw_qos_node_config(node, max_rate, node->bw_share, extack);
+	err = esw_qos_sched_elem_config(node, max_rate, node->bw_share, extack);
 	if (err)
 		return err;
 
@@ -397,8 +392,7 @@ static int esw_qos_set_node_max_rate(struct mlx5_esw_sched_node *node,
 		if (vport_node->max_rate)
 			continue;
 
-		err = esw_qos_vport_config(vport_node->vport, max_rate, vport_node->bw_share,
-					   extack);
+		err = esw_qos_sched_elem_config(vport_node, max_rate, vport_node->bw_share, extack);
 		if (err)
 			NL_SET_ERR_MSG_MOD(extack,
 					   "E-Switch vport implicit rate limit setting failed");

From patchwork Sun Oct 13 06:45:36 2024
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Tariq Toukan
Subject: [PATCH net-next 11/15] net/mlx5: Generalize QoS operations for nodes and vports
Date: Sun, 13 Oct 2024 09:45:36 +0300
Message-ID: <20241013064540.170722-12-tariqt@nvidia.com>
In-Reply-To: <20241013064540.170722-1-tariqt@nvidia.com>
References: <20241013064540.170722-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Carolina Jubran

Refactor QoS normalization and rate calculation functions to operate on mlx5_esw_sched_node, allowing for generalized handling of both vports and nodes.
Signed-off-by: Carolina Jubran
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 115 +++++++-----------
 1 file changed, 43 insertions(+), 72 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index ffd5d4d38fe5..f8253dc8ed3e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -168,45 +168,18 @@ static int esw_qos_sched_elem_config(struct mlx5_esw_sched_node *node, u32 max_r
 	return 0;
 }
 
-static u32 esw_qos_calculate_node_min_rate_divider(struct mlx5_esw_sched_node *node)
-{
-	u32 fw_max_bw_share = MLX5_CAP_QOS(node->esw->dev, max_tsar_bw_share);
-	struct mlx5_esw_sched_node *vport_node;
-	u32 max_guarantee = 0;
-
-	/* Find max min_rate across all vports in this node.
-	 * This will correspond to fw_max_bw_share in the final bw_share calculation.
-	 */
-	list_for_each_entry(vport_node, &node->children, entry) {
-		if (vport_node->min_rate > max_guarantee)
-			max_guarantee = vport_node->min_rate;
-	}
-
-	if (max_guarantee)
-		return max_t(u32, max_guarantee / fw_max_bw_share, 1);
-
-	/* If vports max min_rate divider is 0 but their node has bw_share
-	 * configured, then set bw_share for vports to minimal value.
-	 */
-	if (node->bw_share)
-		return 1;
-
-	/* A divider of 0 sets bw_share for all node vports to 0,
-	 * effectively disabling min guarantees.
-	 */
-	return 0;
-}
-
-static u32 esw_qos_calculate_min_rate_divider(struct mlx5_eswitch *esw)
+static u32 esw_qos_calculate_min_rate_divider(struct mlx5_eswitch *esw,
+					      struct mlx5_esw_sched_node *parent)
 {
+	struct list_head *nodes = parent ? &parent->children : &esw->qos.domain->nodes;
 	u32 fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share);
 	struct mlx5_esw_sched_node *node;
 	u32 max_guarantee = 0;
 
-	/* Find max min_rate across all esw nodes.
+	/* Find max min_rate across all nodes.
 	 * This will correspond to fw_max_bw_share in the final bw_share calculation.
 	 */
-	list_for_each_entry(node, &esw->qos.domain->nodes, entry) {
+	list_for_each_entry(node, nodes, entry) {
 		if (node->esw == esw && node->ix != esw->qos.root_tsar_ix &&
 		    node->min_rate > max_guarantee)
 			max_guarantee = node->min_rate;
@@ -215,7 +188,14 @@ static u32 esw_qos_calculate_min_rate_divider(struct mlx5_eswitch *esw)
 	if (max_guarantee)
 		return max_t(u32, max_guarantee / fw_max_bw_share, 1);
 
-	/* If no node has min_rate configured, a divider of 0 sets all
+	/* If nodes max min_rate divider is 0 but their parent has bw_share
+	 * configured, then set bw_share for nodes to minimal value.
+	 */
+
+	if (parent && parent->bw_share)
+		return 1;
+
+	/* If the node nodes has min_rate configured, a divider of 0 sets all
 	 * nodes' bw_share to 0, effectively disabling min guarantees.
 	 */
 	return 0;
@@ -228,59 +208,50 @@ static u32 esw_qos_calc_bw_share(u32 min_rate, u32 divider, u32 fw_max)
 	return min_t(u32, max_t(u32, DIV_ROUND_UP(min_rate, divider), MLX5_MIN_BW_SHARE), fw_max);
 }
 
-static int esw_qos_normalize_node_min_rate(struct mlx5_esw_sched_node *node,
-					   struct netlink_ext_ack *extack)
+static int esw_qos_update_sched_node_bw_share(struct mlx5_esw_sched_node *node,
+					      u32 divider,
+					      struct netlink_ext_ack *extack)
 {
 	u32 fw_max_bw_share = MLX5_CAP_QOS(node->esw->dev, max_tsar_bw_share);
-	u32 divider = esw_qos_calculate_node_min_rate_divider(node);
-	struct mlx5_esw_sched_node *vport_node;
 	u32 bw_share;
 	int err;
 
-	list_for_each_entry(vport_node, &node->children, entry) {
-		bw_share = esw_qos_calc_bw_share(vport_node->min_rate, divider, fw_max_bw_share);
+	bw_share = esw_qos_calc_bw_share(node->min_rate, divider, fw_max_bw_share);
 
-		if (bw_share == vport_node->bw_share)
-			continue;
+	if (bw_share == node->bw_share)
+		return 0;
 
-		err = esw_qos_sched_elem_config(vport_node, vport_node->max_rate, bw_share, extack);
-		if (err)
-			return err;
+	err = esw_qos_sched_elem_config(node, node->max_rate, bw_share, extack);
+	if (err)
+		return err;
 
-		vport_node->bw_share = bw_share;
-	}
+	node->bw_share = bw_share;
 
-	return 0;
+	return err;
 }
 
-static int esw_qos_normalize_min_rate(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack)
+static int esw_qos_normalize_min_rate(struct mlx5_eswitch *esw,
+				      struct mlx5_esw_sched_node *parent,
+				      struct netlink_ext_ack *extack)
 {
-	u32 fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share);
-	u32 divider = esw_qos_calculate_min_rate_divider(esw);
+	struct list_head *nodes = parent ? &parent->children : &esw->qos.domain->nodes;
+	u32 divider = esw_qos_calculate_min_rate_divider(esw, parent);
 	struct mlx5_esw_sched_node *node;
-	u32 bw_share;
-	int err;
 
-	list_for_each_entry(node, &esw->qos.domain->nodes, entry) {
-		if (node->esw != esw || node->ix == esw->qos.root_tsar_ix)
-			continue;
-		bw_share = esw_qos_calc_bw_share(node->min_rate, divider,
-						 fw_max_bw_share);
+	list_for_each_entry(node, nodes, entry) {
+		int err;
 
-		if (bw_share == node->bw_share)
+		if (node->esw != esw || node->ix == esw->qos.root_tsar_ix)
 			continue;
 
-		err = esw_qos_sched_elem_config(node, node->max_rate, bw_share, extack);
+		err = esw_qos_update_sched_node_bw_share(node, divider, extack);
 		if (err)
 			return err;
 
-		node->bw_share = bw_share;
-
-		/* All the node's vports need to be set with default bw_share
-		 * to enable them with QOS
-		 */
-		err = esw_qos_normalize_node_min_rate(node, extack);
+		if (node->type != SCHED_NODE_TYPE_VPORTS_TSAR)
+			continue;
 
+		err = esw_qos_normalize_min_rate(node->esw, node, extack);
 		if (err)
 			return err;
 	}
@@ -308,7 +279,7 @@ static int esw_qos_set_vport_min_rate(struct mlx5_vport *vport,
 	previous_min_rate = vport_node->min_rate;
 	vport_node->min_rate = min_rate;
 
-	err = esw_qos_normalize_node_min_rate(vport_node->parent, extack);
+	err = esw_qos_normalize_min_rate(vport_node->parent->esw, vport_node->parent, extack);
 	if (err)
 		vport_node->min_rate = previous_min_rate;
 
@@ -359,13 +330,13 @@ static int esw_qos_set_node_min_rate(struct mlx5_esw_sched_node *node,
 	previous_min_rate = node->min_rate;
 	node->min_rate = min_rate;
 
-	err = esw_qos_normalize_min_rate(esw, extack);
+	err = esw_qos_normalize_min_rate(esw, NULL, extack);
 	if (err) {
 		NL_SET_ERR_MSG_MOD(extack, "E-Switch node min rate setting failed");
 
 		/* Attempt restoring previous configuration */
 		node->min_rate = previous_min_rate;
-		if (esw_qos_normalize_min_rate(esw, extack))
+		if (esw_qos_normalize_min_rate(esw, NULL, extack))
 			NL_SET_ERR_MSG_MOD(extack, "E-Switch BW share restore failed");
 	}
 
@@ -527,8 +498,8 @@ static int esw_qos_vport_update_node(struct mlx5_vport *vport,
 
 	/* Recalculate bw share weights of old and new nodes */
 	if (vport_node->bw_share || new_node->bw_share) {
-		esw_qos_normalize_node_min_rate(curr_node, extack);
-		esw_qos_normalize_node_min_rate(new_node, extack);
+		esw_qos_normalize_min_rate(curr_node->esw, curr_node, extack);
+		esw_qos_normalize_min_rate(new_node->esw, new_node, extack);
 	}
 
 	return 0;
@@ -582,7 +553,7 @@ __esw_qos_create_vports_rate_node(struct mlx5_eswitch *esw, struct mlx5_esw_sche
 		goto err_alloc_node;
 	}
 
-	err = esw_qos_normalize_min_rate(esw, extack);
+	err = esw_qos_normalize_min_rate(esw, NULL, extack);
 	if (err) {
 		NL_SET_ERR_MSG_MOD(extack, "E-Switch nodes normalization failed");
 		goto err_min_rate;
@@ -640,7 +611,7 @@ static int __esw_qos_destroy_rate_node(struct mlx5_esw_sched_node *node,
 		NL_SET_ERR_MSG_MOD(extack, "E-Switch destroy TSAR_ID failed");
 
 	__esw_qos_free_node(node);
-	err = esw_qos_normalize_min_rate(esw, extack);
+	err = esw_qos_normalize_min_rate(esw, NULL, extack);
 	if (err)
 		NL_SET_ERR_MSG_MOD(extack, "E-Switch nodes normalization failed");

From patchwork Sun Oct 13 06:45:37 2024
From patchwork Sun Oct 13 06:45:37 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13833698
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Moshe Shemesh, Aya Levin, Tariq Toukan
Subject: [PATCH net-next 12/15] net/mlx5: Add sync reset drop mode support
Date: Sun, 13 Oct 2024 09:45:37 +0300
Message-ID: <20241013064540.170722-13-tariqt@nvidia.com>
In-Reply-To: <20241013064540.170722-1-tariqt@nvidia.com>
References: <20241013064540.170722-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
From: Moshe Shemesh

During the sync reset flow, firmware may request a PF that has already acknowledged the unload event to move to drop mode. In drop mode the PF reduces its polling frequency, since it takes no further active part in the reset and only reloads back after the reset completes.

Signed-off-by: Moshe Shemesh
Reviewed-by: Aya Levin
Signed-off-by: Tariq Toukan
---
 drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
index 4f55e55ecb55..566710d34a7b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
@@ -35,6 +35,7 @@ struct mlx5_fw_reset {
 enum {
 	MLX5_FW_RST_STATE_IDLE = 0,
 	MLX5_FW_RST_STATE_TOGGLE_REQ = 4,
+	MLX5_FW_RST_STATE_DROP_MODE = 5,
 };
 
 enum {
@@ -616,6 +617,7 @@ static void mlx5_sync_reset_unload_event(struct work_struct *work)
 	struct mlx5_fw_reset *fw_reset;
 	struct mlx5_core_dev *dev;
 	unsigned long timeout;
+	int poll_freq = 20;
 	bool reset_action;
 	u8 rst_state;
 	int err;
@@ -651,7 +653,12 @@ static void mlx5_sync_reset_unload_event(struct work_struct *work)
 			reset_action = true;
 			break;
 		}
-		msleep(20);
+		if (rst_state == MLX5_FW_RST_STATE_DROP_MODE) {
+			mlx5_core_info(dev, "Sync Reset Drop mode ack\n");
+			mlx5_set_fw_rst_ack(dev);
+			poll_freq = 1000;
+		}
+		msleep(poll_freq);
 	} while (!time_after(jiffies, timeout));
 
 	if (!reset_action) {
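The polling change in the sync reset patch above boils down to a state-driven back-off: poll every 20 ms while actively participating, then drop to a 1 s interval once firmware moves the PF to drop mode. A hedged standalone sketch (the enum values mirror the patch; the helper name is invented for illustration):

```c
#include <assert.h>

/* Reset-state values as defined by the patch above. */
enum fw_rst_state {
	MLX5_FW_RST_STATE_IDLE = 0,
	MLX5_FW_RST_STATE_TOGGLE_REQ = 4,
	MLX5_FW_RST_STATE_DROP_MODE = 5,
};

/* Illustrative helper: choose the next sleep interval for the
 * unload-event polling loop. Before drop mode the PF polls every
 * 20 ms; after acking drop mode it only waits for the reset to
 * finish, so a 1000 ms interval suffices and saves device reads. */
static int next_poll_ms(enum fw_rst_state state, int cur_ms)
{
	return state == MLX5_FW_RST_STATE_DROP_MODE ? 1000 : cur_ms;
}
```

The interval is sticky by design: once drop mode raises it to 1000 ms, the loop keeps sleeping at the slow rate until the reset timeout expires or the reset completes.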
From patchwork Sun Oct 13 06:45:38 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13833700
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Benjamin Poirier, Tariq Toukan
Subject: [PATCH net-next 13/15] net/mlx5: Only create VEPA flow table when in VEPA mode
Date: Sun, 13 Oct 2024 09:45:38 +0300
Message-ID: <20241013064540.170722-14-tariqt@nvidia.com>
In-Reply-To: <20241013064540.170722-1-tariqt@nvidia.com>
References: <20241013064540.170722-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
From: Benjamin Poirier

Currently, when VFs are created, two flow tables are added for the eswitch: the "fdb" table, which contains rules for each VF, and the "vepa_fdb" table. In the default VEB mode the vepa_fdb table is empty; flow steering rules are added to it only when switching to VEPA mode. Even though the vepa_fdb table is empty in VEB mode, its presence adds some cost to packet processing. In some workloads, this leads to drops, which are reported by the rx_discards_phy ethtool counter.

In order to improve performance, only create vepa_fdb when in VEPA mode.

Tests were done on a ConnectX-6 Lx adapter forwarding 64B packets between both ports using dpdk-testpmd. Numbers are Rx-pps for each port, as reported by testpmd.
Without changes:
  traffic to unknown mac:
    testpmd on PF, numvfs=0,0:       35257998,35264499
    testpmd on PF, numvfs=1,1:       24590124,24590888
    testpmd on VF with numvfs=1,1:   20434338,20434887
  traffic to VF mac:
    testpmd on VF with numvfs=1,1:   30341014,30340749

With changes:
  traffic to unknown mac:
    testpmd on PF, numvfs=0,0:       35404361,35383378
    testpmd on PF, numvfs=1,1:       29801247,29790757
    testpmd on VF with numvfs=1,1:   24310435,24309084
  traffic to VF mac:
    testpmd on VF with numvfs=1,1:   34811436,34781706

Signed-off-by: Benjamin Poirier
Reviewed-by: Cosmin Ratiu
Reviewed-by: Saeed Mahameed
Signed-off-by: Tariq Toukan
---
 .../ethernet/mellanox/mlx5/core/esw/legacy.c | 27 +++++++++----------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
index 288c797e4a78..45183de424f3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
@@ -176,20 +176,10 @@ static void esw_destroy_legacy_vepa_table(struct mlx5_eswitch *esw)
 
 static int esw_create_legacy_table(struct mlx5_eswitch *esw)
 {
-	int err;
-
 	memset(&esw->fdb_table.legacy, 0, sizeof(struct legacy_fdb));
 	atomic64_set(&esw->user_count, 0);
 
-	err = esw_create_legacy_vepa_table(esw);
-	if (err)
-		return err;
-
-	err = esw_create_legacy_fdb_table(esw);
-	if (err)
-		esw_destroy_legacy_vepa_table(esw);
-
-	return err;
+	return esw_create_legacy_fdb_table(esw);
 }
 
 static void esw_cleanup_vepa_rules(struct mlx5_eswitch *esw)
@@ -259,15 +249,22 @@ static int _mlx5_eswitch_set_vepa_locked(struct mlx5_eswitch *esw,
 	if (!setting) {
 		esw_cleanup_vepa_rules(esw);
+		esw_destroy_legacy_vepa_table(esw);
 		return 0;
 	}
 
 	if (esw->fdb_table.legacy.vepa_uplink_rule)
 		return 0;
 
+	err = esw_create_legacy_vepa_table(esw);
+	if (err)
+		return err;
+
 	spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
-	if (!spec)
-		return -ENOMEM;
+	if (!spec) {
+		err = -ENOMEM;
+		goto out;
+	}
 
 	/* Uplink rule forward uplink traffic to FDB */
 	misc = MLX5_ADDR_OF(fte_match_param, spec->match_value, misc_parameters);
@@ -303,8 +300,10 @@ static int _mlx5_eswitch_set_vepa_locked(struct mlx5_eswitch *esw,
 out:
 	kvfree(spec);
-	if (err)
+	if (err) {
 		esw_cleanup_vepa_rules(esw);
+		esw_destroy_legacy_vepa_table(esw);
+	}
 	return err;
 }
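The VEPA patch above replaces unconditional table creation with create-on-demand: the vepa_fdb table exists only while VEPA mode is enabled, so VEB mode pays no packet-processing cost for an empty table. The lifecycle can be modeled in a few lines of standalone C (the struct and function names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the create-on-demand VEPA table lifecycle. */
struct eswitch_model {
	bool vepa_table_created;
	bool vepa_rules_installed;
};

static int set_vepa(struct eswitch_model *esw, bool setting)
{
	if (!setting) {
		/* Leaving VEPA: tear down rules, then the table itself,
		 * mirroring cleanup-then-destroy in the patch. */
		esw->vepa_rules_installed = false;
		esw->vepa_table_created = false;
		return 0;
	}
	if (esw->vepa_rules_installed)
		return 0;			/* already in VEPA mode */
	esw->vepa_table_created = true;		/* created only here, on demand */
	esw->vepa_rules_installed = true;
	return 0;
}
```

The error path in the real patch follows the same invariant: if rule installation fails after the table was created, the `out:` label destroys the table again, so the table never outlives its rules.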
From patchwork Sun Oct 13 06:45:39 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13833701
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Moshe Shemesh, Yevgeny Kliteynik, Tariq Toukan
Subject: [PATCH net-next 14/15] net/mlx5: fs, rename packet reformat struct member action
Date: Sun, 13 Oct 2024 09:45:39 +0300
Message-ID: <20241013064540.170722-15-tariqt@nvidia.com>
In-Reply-To: <20241013064540.170722-1-tariqt@nvidia.com>
References: <20241013064540.170722-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
From: Moshe Shemesh

As preparation for HW Steering support, rename the packet reformat struct
member "action" to "fs_dr_action", to distinguish it from the "fs_hws_action"
member which will be added. Add a local pointer where needed to keep code
lines shorter and more readable.

Reviewed-by: Yevgeny Kliteynik
Signed-off-by: Moshe Shemesh
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/fs_core.h |  2 +-
 .../mellanox/mlx5/core/steering/fs_dr.c       | 23 +++++++++++--------
 2 files changed, 14 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
index 964937f17cf5..195f1cbd0a34 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
@@ -73,7 +73,7 @@ struct mlx5_pkt_reformat {
 	int reformat_type; /* from mlx5_ifc */
 	enum mlx5_flow_resource_owner owner;
 	union {
-		struct mlx5_fs_dr_action action;
+		struct mlx5_fs_dr_action fs_dr_action;
 		u32 id;
 	};
 };
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
index 833cb68c744f..8dd412454c97 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
@@ -256,6 +256,7 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
 {
 	struct mlx5dr_domain *domain = ns->fs_dr_domain.dr_domain;
 	struct mlx5dr_action_dest *term_actions;
+	struct mlx5_pkt_reformat *pkt_reformat;
 	struct mlx5dr_match_parameters params;
 	struct mlx5_core_dev *dev = ns->dev;
 	struct mlx5dr_action **fs_dr_actions;
@@ -332,18 +333,19 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
 	if (fte->act_dests.action.action & MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT) {
 		bool is_decap;
 
-		if (fte->act_dests.action.pkt_reformat->owner == MLX5_FLOW_RESOURCE_OWNER_FW) {
+		pkt_reformat = fte->act_dests.action.pkt_reformat;
+		if (pkt_reformat->owner == MLX5_FLOW_RESOURCE_OWNER_FW) {
 			err = -EINVAL;
 			mlx5dr_err(domain, "FW-owned reformat can't be used in SW rule\n");
 			goto free_actions;
 		}
 
-		is_decap = fte->act_dests.action.pkt_reformat->reformat_type ==
+		is_decap = pkt_reformat->reformat_type ==
 			   MLX5_REFORMAT_TYPE_L3_TUNNEL_TO_L2;
 
 		if (is_decap)
 			actions[num_actions++] =
-				fte->act_dests.action.pkt_reformat->action.dr_action;
+				pkt_reformat->fs_dr_action.dr_action;
 		else
 			delay_encap_set = true;
 	}
@@ -395,8 +397,7 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
 	}
 
 	if (delay_encap_set)
-		actions[num_actions++] =
-			fte->act_dests.action.pkt_reformat->action.dr_action;
+		actions[num_actions++] = pkt_reformat->fs_dr_action.dr_action;
 
 	/* The order of the actions below is not important */
 
@@ -458,9 +459,11 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
 			term_actions[num_term_actions].dest = tmp_action;
 
 			if (dst->dest_attr.vport.flags &
-			    MLX5_FLOW_DEST_VPORT_REFORMAT_ID)
+			    MLX5_FLOW_DEST_VPORT_REFORMAT_ID) {
+				pkt_reformat = dst->dest_attr.vport.pkt_reformat;
 				term_actions[num_term_actions].reformat =
-					dst->dest_attr.vport.pkt_reformat->action.dr_action;
+					pkt_reformat->fs_dr_action.dr_action;
+			}
 
 			num_term_actions++;
 			break;
@@ -671,7 +674,7 @@ static int mlx5_cmd_dr_packet_reformat_alloc(struct mlx5_flow_root_namespace *ns
 	}
 
 	pkt_reformat->owner = MLX5_FLOW_RESOURCE_OWNER_SW;
-	pkt_reformat->action.dr_action = action;
+	pkt_reformat->fs_dr_action.dr_action = action;
 
 	return 0;
 }
@@ -679,7 +682,7 @@ static int mlx5_cmd_dr_packet_reformat_alloc(struct mlx5_flow_root_namespace *ns
 static void
 mlx5_cmd_dr_packet_reformat_dealloc(struct mlx5_flow_root_namespace *ns,
 				    struct mlx5_pkt_reformat *pkt_reformat)
 {
-	mlx5dr_action_destroy(pkt_reformat->action.dr_action);
+	mlx5dr_action_destroy(pkt_reformat->fs_dr_action.dr_action);
 }
 
 static int mlx5_cmd_dr_modify_header_alloc(struct mlx5_flow_root_namespace *ns,
@@ -836,7 +839,7 @@ int mlx5_fs_dr_action_get_pkt_reformat_id(struct mlx5_pkt_reformat *pkt_reformat
 	case MLX5_REFORMAT_TYPE_L2_TO_L2_TUNNEL:
 	case MLX5_REFORMAT_TYPE_L2_TO_L3_TUNNEL:
 	case MLX5_REFORMAT_TYPE_INSERT_HDR:
-		return mlx5dr_action_get_pkt_reformat_id(pkt_reformat->action.dr_action);
+		return mlx5dr_action_get_pkt_reformat_id(pkt_reformat->fs_dr_action.dr_action);
 	}
 	return -EOPNOTSUPP;
 }

From patchwork Sun Oct 13 06:45:40 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13833702
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Moshe Shemesh, Yevgeny Kliteynik, Tariq Toukan
Subject: [PATCH net-next 15/15] net/mlx5: fs, rename modify header struct member action
Date: Sun, 13 Oct 2024 09:45:40 +0300
Message-ID: <20241013064540.170722-16-tariqt@nvidia.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20241013064540.170722-1-tariqt@nvidia.com>
References: <20241013064540.170722-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
From: Moshe Shemesh

As preparation for HW Steering support, rename the modify header struct
member "action" to "fs_dr_action", to distinguish it from the "fs_hws_action"
member which will be added. Add a local pointer where needed to keep code
lines shorter and more readable.

Reviewed-by: Yevgeny Kliteynik
Signed-off-by: Moshe Shemesh
Signed-off-by: Tariq Toukan
---
 .../ethernet/mellanox/mlx5/core/en/tc/ct_fs_smfs.c   |  4 ++--
 drivers/net/ethernet/mellanox/mlx5/core/fs_core.h    |  2 +-
 .../net/ethernet/mellanox/mlx5/core/steering/fs_dr.c | 12 +++++++-----
 3 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/ct_fs_smfs.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/ct_fs_smfs.c
index 1c062a2e8996..45737d039252 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/ct_fs_smfs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/ct_fs_smfs.c
@@ -318,7 +318,7 @@ mlx5_ct_fs_smfs_ct_rule_add(struct mlx5_ct_fs *fs, struct mlx5_flow_spec *spec,
 	}
 
 	actions[num_actions++] = smfs_rule->count_action;
-	actions[num_actions++] = attr->modify_hdr->action.dr_action;
+	actions[num_actions++] = attr->modify_hdr->fs_dr_action.dr_action;
 	actions[num_actions++] = fs_smfs->fwd_action;
 
 	nat = (attr->ft == fs_smfs->ct_nat);
@@ -379,7 +379,7 @@ static int mlx5_ct_fs_smfs_ct_rule_update(struct mlx5_ct_fs *fs, struct mlx5_ct_
 	struct mlx5dr_rule *rule;
 
 	actions[0] = smfs_rule->count_action;
-	actions[1] = attr->modify_hdr->action.dr_action;
+	actions[1] = attr->modify_hdr->fs_dr_action.dr_action;
 	actions[2] = fs_smfs->fwd_action;
 
 	rule = mlx5_smfs_rule_create(smfs_rule->smfs_matcher->dr_matcher, spec,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
index 195f1cbd0a34..b30976627c6b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
@@ -63,7 +63,7 @@ struct mlx5_modify_hdr {
 	enum mlx5_flow_namespace_type ns_type;
 	enum mlx5_flow_resource_owner owner;
 	union {
-		struct mlx5_fs_dr_action action;
+		struct mlx5_fs_dr_action fs_dr_action;
 		u32 id;
 	};
 };
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
index 8dd412454c97..4b349d4005e4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
@@ -372,9 +372,11 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
 		actions[num_actions++] = tmp_action;
 	}
 
-	if (fte->act_dests.action.action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR)
-		actions[num_actions++] =
-			fte->act_dests.action.modify_hdr->action.dr_action;
+	if (fte->act_dests.action.action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) {
+		struct mlx5_modify_hdr *modify_hdr = fte->act_dests.action.modify_hdr;
+
+		actions[num_actions++] = modify_hdr->fs_dr_action.dr_action;
+	}
 
 	if (fte->act_dests.action.action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH) {
 		tmp_action = create_action_push_vlan(domain, &fte->act_dests.action.vlan[0]);
@@ -705,7 +707,7 @@ static int mlx5_cmd_dr_modify_header_alloc(struct mlx5_flow_root_namespace *ns,
 	}
 
 	modify_hdr->owner = MLX5_FLOW_RESOURCE_OWNER_SW;
-	modify_hdr->action.dr_action = action;
+	modify_hdr->fs_dr_action.dr_action = action;
 
 	return 0;
 }
@@ -713,7 +715,7 @@ static int mlx5_cmd_dr_modify_header_alloc(struct mlx5_flow_root_namespace *ns,
 static void mlx5_cmd_dr_modify_header_dealloc(struct mlx5_flow_root_namespace *ns,
 					      struct mlx5_modify_hdr *modify_hdr)
 {
-	mlx5dr_action_destroy(modify_hdr->action.dr_action);
+	mlx5dr_action_destroy(modify_hdr->fs_dr_action.dr_action);
 }
 
 static int