From patchwork Thu Nov 7 19:43:46 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13867017
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet,
    "Andrew Lunn"
Cc: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Patrisious Haddad,
    Mark Bloch, Tariq Toukan
Subject: [PATCH net-next 01/12] net/mlx5: E-switch, refactor eswitch mode change
Date: Thu, 7 Nov 2024 21:43:46 +0200
Message-ID: <20241107194357.683732-2-tariqt@nvidia.com>
In-Reply-To: <20241107194357.683732-1-tariqt@nvidia.com>
References: <20241107194357.683732-1-tariqt@nvidia.com>
X-Mailer: git-send-email 2.44.0
X-Mailing-List: netdev@vger.kernel.org

From: Patrisious Haddad

The E-switch mode was previously updated before removing and re-adding
the IB device, which could cause a temporary mismatch between the
E-switch mode and the IB device configuration.

To prevent this discrepancy, the IB device is now removed first, then
the E-switch mode is updated, and finally the IB device is re-added.
This sequence ensures consistent alignment between the E-switch mode
and the IB device whenever the mode changes, regardless of the new mode
value.
Signed-off-by: Patrisious Haddad
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/eswitch.c |  1 -
 .../mellanox/mlx5/core/eswitch_offloads.c     | 26 +++++++++++++++----
 2 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index cead41ddbc38..d0dab8f4e1a3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1490,7 +1490,6 @@ int mlx5_eswitch_enable_locked(struct mlx5_eswitch *esw, int num_vfs)
         if (esw->mode == MLX5_ESWITCH_LEGACY) {
                 err = esw_legacy_enable(esw);
         } else {
-                mlx5_rescan_drivers(esw->dev);
                 err = esw_offloads_enable(esw);
         }

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index fd34f43d18d5..5f1adebd9669 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -2332,18 +2332,35 @@ static int esw_create_restore_table(struct mlx5_eswitch *esw)
         return err;
 }

+static void esw_mode_change(struct mlx5_eswitch *esw, u16 mode)
+{
+        mlx5_devcom_comp_lock(esw->dev->priv.hca_devcom_comp);
+
+        if (esw->dev->priv.flags & MLX5_PRIV_FLAGS_DISABLE_IB_ADEV) {
+                esw->mode = mode;
+                mlx5_devcom_comp_unlock(esw->dev->priv.hca_devcom_comp);
+                return;
+        }
+
+        esw->dev->priv.flags |= MLX5_PRIV_FLAGS_DISABLE_IB_ADEV;
+        mlx5_rescan_drivers_locked(esw->dev);
+        esw->mode = mode;
+        esw->dev->priv.flags &= ~MLX5_PRIV_FLAGS_DISABLE_IB_ADEV;
+        mlx5_rescan_drivers_locked(esw->dev);
+        mlx5_devcom_comp_unlock(esw->dev->priv.hca_devcom_comp);
+}
+
 static int esw_offloads_start(struct mlx5_eswitch *esw,
                               struct netlink_ext_ack *extack)
 {
         int err;

-        esw->mode = MLX5_ESWITCH_OFFLOADS;
+        esw_mode_change(esw, MLX5_ESWITCH_OFFLOADS);
         err = mlx5_eswitch_enable_locked(esw, esw->dev->priv.sriov.num_vfs);
         if (err) {
                 NL_SET_ERR_MSG_MOD(extack, "Failed setting eswitch to offloads");
-                esw->mode = MLX5_ESWITCH_LEGACY;
-                mlx5_rescan_drivers(esw->dev);
+                esw_mode_change(esw, MLX5_ESWITCH_LEGACY);
                 return err;
         }
         if (esw->offloads.inline_mode == MLX5_INLINE_MODE_NONE) {
@@ -3584,7 +3601,7 @@ static int esw_offloads_stop(struct mlx5_eswitch *esw,
 {
         int err;

-        esw->mode = MLX5_ESWITCH_LEGACY;
+        esw_mode_change(esw, MLX5_ESWITCH_LEGACY);

         /* If changing from switchdev to legacy mode without sriov enabled,
          * no need to create legacy fdb.
          */
@@ -3770,7 +3787,6 @@ int mlx5_devlink_eswitch_mode_set(struct devlink *devlink, u16 mode,
                 err = esw_offloads_start(esw, extack);
         } else if (mode == DEVLINK_ESWITCH_MODE_LEGACY) {
                 err = esw_offloads_stop(esw, extack);
-                mlx5_rescan_drivers(esw->dev);
         } else {
                 err = -EINVAL;
         }
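The ordering described above can be sketched as a tiny standalone C
model (all names below are illustrative stand-ins, not mlx5 symbols):
the IB device is torn down before the mode flips and re-created only
afterwards, so nothing ever observes an IB device that was built for
the old mode while the new mode is already in effect.

#include <stdio.h>

enum eswitch_mode { MODE_LEGACY, MODE_OFFLOADS };

struct model_dev {
        enum eswitch_mode mode;
        int ib_dev_present;             /* stand-in for the IB auxiliary device */
};

static void model_mode_change(struct model_dev *dev, enum eswitch_mode mode)
{
        dev->ib_dev_present = 0;        /* 1. remove the IB device built for the old mode */
        dev->mode = mode;               /* 2. update the E-switch mode */
        dev->ib_dev_present = 1;        /* 3. re-add the IB device against the new mode */
}

int main(void)
{
        struct model_dev dev = { .mode = MODE_LEGACY, .ib_dev_present = 1 };

        model_mode_change(&dev, MODE_OFFLOADS);
        printf("mode=%d ib_dev_present=%d\n", dev.mode, dev.ib_dev_present);
        return 0;
}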
From patchwork Thu Nov 7 19:43:47 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13867018
X-Patchwork-Delegate: kuba@kernel.org
Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn" CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , Carolina Jubran , Cosmin Ratiu , Tariq Toukan Subject: [PATCH net-next 02/12] net/mlx5: Simplify QoS normalization by removing error handling Date: Thu, 7 Nov 2024 21:43:47 +0200 Message-ID: <20241107194357.683732-3-tariqt@nvidia.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20241107194357.683732-1-tariqt@nvidia.com> References: <20241107194357.683732-1-tariqt@nvidia.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-NV-OnPremToCloud: ExternallySecured X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: BN3PEPF0000B374:EE_|SA3PR12MB7781:EE_ X-MS-Office365-Filtering-Correlation-Id: c06ef88c-4fc8-496b-54e5-08dcff64b45d X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0;ARA:13230040|1800799024|36860700013|376014|82310400026; X-Microsoft-Antispam-Message-Info: O4F6RBfZMtZGYik5r/EXnAsGiNKd5i3Om7ISHLcc4NLCskRn1aSHUZpcLTj0c63SaNOLxsNCQCV/yWT4WhGYi/a6Pt3vq4FdN+I98QZeT5Cp10dFJ3qvLTwGC1UftbpXC9xQKbluPXgJpYc7ziw7sMaf01L44D8KEkWQbnxN4+b5Qj9uP84PUeCh2iQ7QxnqRYaybq7IrfU9gcgPhiS6EmEK9LdNH/PRQ4HTf0biuU+kA8aAoh7UqivcsqWRn4Jgp2Ze5IaaDefcyl4k/RvIyX9oDDctCnDK8R6fw0CxsrYKDDSc7otb8qvVwoP0YlUXxT2AfeyNNdEM3YCD5WXwhrqaG29bbBlqqsW0YO8o/beFU4hJDwLHZ4+CB/5bdm5p9Khn+q0pIvHfLMKhQW/gOr4UQiNXqr02AsoNzyWTFPlZmzJCpPXdA2j51pd6iX9/ap1RL4pMGLxVjxBcgdRSj8m2Nnmow8QLo9Q4KosYRPoaZuOCAzpYqe5By072gSvi9FOSFJPLTkMS58FN6sJsFf9aM1K8b01Dqk2SfAYSkbBxJXNwr6QsliSAjq6Ujcjaq7PqJOXMa1QF4wWiDa4nyKitbyIeEJzHBN1w9ndh0pzZQ0olODtx6FacvsXD0Xasa8ilDRWSv0Ipe42yrtYLv9B0YO6egNw71Zbxlx9lRbO2I654dbvDaoq9EjNP7Hc9C85IkJ96vSpDpTOkncvjYZKbiHg5gI92SHrC37pbOk2JiMMOG8e2D3MGSYAz6znMbUrmXHFV0vPByyrWC2LuYTYxYDViK19FJ/335RvD/mSFpx+TIFeGQblcn+uq8GIHgWUVRbzWbmdhiROiU3l0RNuAxUrl62OUYB7F2bhBkqY6VFgSi+vxpL7md/q0juiFsPRcWhRvxKWDemxApwNQwDjO7NFYjTPDiltYH2KNxTtBVyo8CGn4tamtT6c8ZOsbP6XdFmZtIveEyTzDmkPUTTOSX+8DwOtvJYYaV4jSplwZXI/pX4uTm4XeTI46+TIpu0cGk6rilswU5J8LkfviPg0OI4CvixrN25QNynStGp9XtORlg3E9lqouIqi1FwfQzrPCosSCN8DJIoD9TYHb2x9G515s7rqi+dipwdOw1UYjqH1kf84LD4z0r5j4jj4Aat48oin6ZOGjVoic3Nq/cs0zF+WJaDFaHjCqw+v5O6QoYMo6w1wa8hjWzkezCWv/gDqaWN+ngdZkFLTH2MQgNvt3Ijcw+hFlx0lyVwY8UghWgrXZwu4fLQGTbNWbwI8OT45/04rE9E7BCq2SlX3D/L87FzwwoErZjol0267kKxQx838zcXPpQ/G1NfiqV8s3gNNv9UzPqAdOYGUvBMC1NddTv0t0wOQOPaqJOyYCx3TKqpePU4zck3Y41Kfe5l4Evn9F5dRagszYBh944m8gAA== X-Forefront-Antispam-Report: CIP:216.228.117.161;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:mail.nvidia.com;PTR:dc6edge2.nvidia.com;CAT:NONE;SFS:(13230040)(1800799024)(36860700013)(376014)(82310400026);DIR:OUT;SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Nov 2024 19:45:16.5251 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: c06ef88c-4fc8-496b-54e5-08dcff64b45d X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a;Ip=[216.228.117.161];Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: BN3PEPF0000B374.namprd21.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA3PR12MB7781 X-Patchwork-Delegate: kuba@kernel.org From: Carolina Jubran This change updates esw_qos_normalize_min_rate to not return errors, significantly simplifying the code. 
Signed-off-by: Carolina Jubran
Reviewed-by: Cosmin Ratiu
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 72 +++++--------------
 1 file changed, 17 insertions(+), 55 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index 940e1c2d1e39..0c371f27c693 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -208,64 +208,49 @@ static u32 esw_qos_calc_bw_share(u32 min_rate, u32 divider, u32 fw_max)
         return min_t(u32, max_t(u32, DIV_ROUND_UP(min_rate, divider), MLX5_MIN_BW_SHARE), fw_max);
 }

-static int esw_qos_update_sched_node_bw_share(struct mlx5_esw_sched_node *node,
-                                              u32 divider,
-                                              struct netlink_ext_ack *extack)
+static void esw_qos_update_sched_node_bw_share(struct mlx5_esw_sched_node *node,
+                                               u32 divider,
+                                               struct netlink_ext_ack *extack)
 {
         u32 fw_max_bw_share = MLX5_CAP_QOS(node->esw->dev, max_tsar_bw_share);
         u32 bw_share;
-        int err;

         bw_share = esw_qos_calc_bw_share(node->min_rate, divider, fw_max_bw_share);

         if (bw_share == node->bw_share)
-                return 0;
-
-        err = esw_qos_sched_elem_config(node, node->max_rate, bw_share, extack);
-        if (err)
-                return err;
+                return;

+        esw_qos_sched_elem_config(node, node->max_rate, bw_share, extack);
         node->bw_share = bw_share;
-
-        return err;
 }

-static int esw_qos_normalize_min_rate(struct mlx5_eswitch *esw,
-                                      struct mlx5_esw_sched_node *parent,
-                                      struct netlink_ext_ack *extack)
+static void esw_qos_normalize_min_rate(struct mlx5_eswitch *esw,
+                                       struct mlx5_esw_sched_node *parent,
+                                       struct netlink_ext_ack *extack)
 {
         struct list_head *nodes = parent ? &parent->children : &esw->qos.domain->nodes;
         u32 divider = esw_qos_calculate_min_rate_divider(esw, parent);
         struct mlx5_esw_sched_node *node;

         list_for_each_entry(node, nodes, entry) {
-                int err;
-
                 if (node->esw != esw || node->ix == esw->qos.root_tsar_ix)
                         continue;

-                err = esw_qos_update_sched_node_bw_share(node, divider, extack);
-                if (err)
-                        return err;
+                esw_qos_update_sched_node_bw_share(node, divider, extack);

                 if (list_empty(&node->children))
                         continue;

-                err = esw_qos_normalize_min_rate(node->esw, node, extack);
-                if (err)
-                        return err;
+                esw_qos_normalize_min_rate(node->esw, node, extack);
         }
-
-        return 0;
 }

 static int esw_qos_set_vport_min_rate(struct mlx5_vport *vport,
                                       u32 min_rate, struct netlink_ext_ack *extack)
 {
         struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node;
-        u32 fw_max_bw_share, previous_min_rate;
         bool min_rate_supported;
-        int err;
+        u32 fw_max_bw_share;

         esw_assert_qos_lock_held(vport_node->esw);
         fw_max_bw_share = MLX5_CAP_QOS(vport->dev, max_tsar_bw_share);
@@ -276,13 +261,10 @@ static int esw_qos_set_vport_min_rate(struct mlx5_vport *vport,
         if (min_rate == vport_node->min_rate)
                 return 0;

-        previous_min_rate = vport_node->min_rate;
         vport_node->min_rate = min_rate;
-        err = esw_qos_normalize_min_rate(vport_node->parent->esw, vport_node->parent, extack);
-        if (err)
-                vport_node->min_rate = previous_min_rate;
+        esw_qos_normalize_min_rate(vport_node->parent->esw, vport_node->parent, extack);

-        return err;
+        return 0;
 }

 static int esw_qos_set_vport_max_rate(struct mlx5_vport *vport,
@@ -316,8 +298,6 @@ static int esw_qos_set_node_min_rate(struct mlx5_esw_sched_node *node,
                                      u32 min_rate, struct netlink_ext_ack *extack)
 {
         struct mlx5_eswitch *esw = node->esw;
-        u32 previous_min_rate;
-        int err;

         if (!MLX5_CAP_QOS(esw->dev, esw_bw_share) ||
             MLX5_CAP_QOS(esw->dev, max_tsar_bw_share) < MLX5_MIN_BW_SHARE)
@@ -326,19 +306,10 @@ static int esw_qos_set_node_min_rate(struct mlx5_esw_sched_node *node,
         if (min_rate == node->min_rate)
                 return 0;

-        previous_min_rate = node->min_rate;
         node->min_rate = min_rate;
-        err = esw_qos_normalize_min_rate(esw, NULL, extack);
-        if (err) {
-                NL_SET_ERR_MSG_MOD(extack, "E-Switch node min rate setting failed");
-
-                /* Attempt restoring previous configuration */
-                node->min_rate = previous_min_rate;
-                if (esw_qos_normalize_min_rate(esw, NULL, extack))
-                        NL_SET_ERR_MSG_MOD(extack, "E-Switch BW share restore failed");
-        }
+        esw_qos_normalize_min_rate(esw, NULL, extack);

-        return err;
+        return 0;
 }

 static int esw_qos_set_node_max_rate(struct mlx5_esw_sched_node *node,
@@ -552,17 +523,11 @@ __esw_qos_create_vports_sched_node(struct mlx5_eswitch *esw, struct mlx5_esw_sch
                 goto err_alloc_node;
         }

-        err = esw_qos_normalize_min_rate(esw, NULL, extack);
-        if (err) {
-                NL_SET_ERR_MSG_MOD(extack, "E-Switch nodes normalization failed");
-                goto err_min_rate;
-        }
+        esw_qos_normalize_min_rate(esw, NULL, extack);
         trace_mlx5_esw_node_qos_create(esw->dev, node, node->ix);

         return node;

-err_min_rate:
-        __esw_qos_free_node(node);
 err_alloc_node:
         if (mlx5_destroy_scheduling_element_cmd(esw->dev,
                                                 SCHEDULING_HIERARCHY_E_SWITCH,
@@ -609,10 +574,7 @@ static int __esw_qos_destroy_node(struct mlx5_esw_sched_node *node, struct netli
                 NL_SET_ERR_MSG_MOD(extack, "E-Switch destroy TSAR_ID failed");
         __esw_qos_free_node(node);

-        err = esw_qos_normalize_min_rate(esw, NULL, extack);
-        if (err)
-                NL_SET_ERR_MSG_MOD(extack, "E-Switch nodes normalization failed");
-
+        esw_qos_normalize_min_rate(esw, NULL, extack);
         return err;
 }
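The pattern the patch applies, a normalization helper that returns void
and callers that drop their rollback paths, can be sketched as a small
standalone C model (illustrative names only, not mlx5 code):

#include <stdio.h>

static int model_config_hw(int bw_share)
{
        /* stand-in for the firmware command; assume it normally succeeds */
        (void)bw_share;
        return 0;
}

static void model_normalize_min_rate(const int *min_rates, int n)
{
        for (int i = 0; i < n; i++) {
                int err = model_config_hw(min_rates[i]);

                /* a failure here is a software bug: warn, do not unwind */
                if (err)
                        fprintf(stderr, "normalization failed (err=%d)\n", err);
        }
}

int main(void)
{
        int rates[] = { 100, 200, 0 };

        /* the caller no longer checks a return value or restores old rates */
        model_normalize_min_rate(rates, 3);
        return 0;
}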
From patchwork Thu Nov 7 19:43:48 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13867019
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet,
    "Andrew Lunn"
Cc: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Carolina Jubran,
    Cosmin Ratiu, Tariq Toukan
Subject: [PATCH net-next 03/12] net/mlx5: Generalize max_rate and min_rate setting for nodes
Date: Thu, 7 Nov 2024 21:43:48 +0200
Message-ID: <20241107194357.683732-4-tariqt@nvidia.com>
In-Reply-To: <20241107194357.683732-1-tariqt@nvidia.com>
References: <20241107194357.683732-1-tariqt@nvidia.com>
X-Mailer: git-send-email 2.44.0
X-Mailing-List: netdev@vger.kernel.org

From: Carolina Jubran

Refactor the max_rate and min_rate setting functions to operate on
mlx5_esw_sched_node, allowing generalized handling of both vports and
nodes.
Signed-off-by: Carolina Jubran
Reviewed-by: Cosmin Ratiu
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 69 ++++---------------
 1 file changed, 13 insertions(+), 56 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index 0c371f27c693..82805bb20c76 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -245,69 +245,20 @@ static void esw_qos_normalize_min_rate(struct mlx5_eswitch *esw,
         }
 }

-static int esw_qos_set_vport_min_rate(struct mlx5_vport *vport,
-                                      u32 min_rate, struct netlink_ext_ack *extack)
-{
-        struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node;
-        bool min_rate_supported;
-        u32 fw_max_bw_share;
-
-        esw_assert_qos_lock_held(vport_node->esw);
-        fw_max_bw_share = MLX5_CAP_QOS(vport->dev, max_tsar_bw_share);
-        min_rate_supported = MLX5_CAP_QOS(vport->dev, esw_bw_share) &&
-                             fw_max_bw_share >= MLX5_MIN_BW_SHARE;
-        if (min_rate && !min_rate_supported)
-                return -EOPNOTSUPP;
-        if (min_rate == vport_node->min_rate)
-                return 0;
-
-        vport_node->min_rate = min_rate;
-        esw_qos_normalize_min_rate(vport_node->parent->esw, vport_node->parent, extack);
-
-        return 0;
-}
-
-static int esw_qos_set_vport_max_rate(struct mlx5_vport *vport,
-                                      u32 max_rate, struct netlink_ext_ack *extack)
-{
-        struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node;
-        u32 act_max_rate = max_rate;
-        bool max_rate_supported;
-        int err;
-
-        esw_assert_qos_lock_held(vport_node->esw);
-        max_rate_supported = MLX5_CAP_QOS(vport->dev, esw_rate_limit);
-
-        if (max_rate && !max_rate_supported)
-                return -EOPNOTSUPP;
-        if (max_rate == vport_node->max_rate)
-                return 0;
-
-        /* Use parent node limit if new max rate is 0. */
-        if (!max_rate)
-                act_max_rate = vport_node->parent->max_rate;
-
-        err = esw_qos_sched_elem_config(vport_node, act_max_rate, vport_node->bw_share, extack);
-        if (!err)
-                vport_node->max_rate = max_rate;
-
-        return err;
-}
-
 static int esw_qos_set_node_min_rate(struct mlx5_esw_sched_node *node,
                                      u32 min_rate, struct netlink_ext_ack *extack)
 {
         struct mlx5_eswitch *esw = node->esw;

-        if (!MLX5_CAP_QOS(esw->dev, esw_bw_share) ||
-            MLX5_CAP_QOS(esw->dev, max_tsar_bw_share) < MLX5_MIN_BW_SHARE)
+        if (min_rate && (!MLX5_CAP_QOS(esw->dev, esw_bw_share) ||
+                         MLX5_CAP_QOS(esw->dev, max_tsar_bw_share) < MLX5_MIN_BW_SHARE))
                 return -EOPNOTSUPP;

         if (min_rate == node->min_rate)
                 return 0;

         node->min_rate = min_rate;
-        esw_qos_normalize_min_rate(esw, NULL, extack);
+        esw_qos_normalize_min_rate(esw, node->parent, extack);

         return 0;
 }
@@ -321,11 +272,17 @@ static int esw_qos_set_node_max_rate(struct mlx5_esw_sched_node *node,
         if (node->max_rate == max_rate)
                 return 0;

+        /* Use parent node limit if new max rate is 0. */
+        if (!max_rate && node->parent)
+                max_rate = node->parent->max_rate;
+
         err = esw_qos_sched_elem_config(node, max_rate, node->bw_share, extack);
         if (err)
                 return err;

         node->max_rate = max_rate;
+        if (node->type != SCHED_NODE_TYPE_VPORTS_TSAR)
+                return 0;

         /* Any unlimited vports in the node should be set with the value of the node. */
         list_for_each_entry(vport_node, &node->children, entry) {
@@ -748,9 +705,9 @@ int mlx5_esw_qos_set_vport_rate(struct mlx5_vport *vport, u32 max_rate, u32 min_
         if (err)
                 goto unlock;

-        err = esw_qos_set_vport_min_rate(vport, min_rate, NULL);
+        err = esw_qos_set_node_min_rate(vport->qos.sched_node, min_rate, NULL);
         if (!err)
-                err = esw_qos_set_vport_max_rate(vport, max_rate, NULL);
+                err = esw_qos_set_node_max_rate(vport->qos.sched_node, max_rate, NULL);

 unlock:
         esw_qos_unlock(esw);
         return err;
@@ -947,7 +904,7 @@ int mlx5_esw_devlink_rate_leaf_tx_share_set(struct devlink_rate *rate_leaf, void
         if (err)
                 goto unlock;

-        err = esw_qos_set_vport_min_rate(vport, tx_share, extack);
+        err = esw_qos_set_node_min_rate(vport->qos.sched_node, tx_share, extack);
 unlock:
         esw_qos_unlock(esw);
         return err;
@@ -973,7 +930,7 @@ int mlx5_esw_devlink_rate_leaf_tx_max_set(struct devlink_rate *rate_leaf, void *
         if (err)
                 goto unlock;

-        err = esw_qos_set_vport_max_rate(vport, tx_max, extack);
+        err = esw_qos_set_node_max_rate(vport->qos.sched_node, tx_max, extack);
 unlock:
         esw_qos_unlock(esw);
         return err;
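A minimal standalone sketch of the idea, in plain C with illustrative
names (not mlx5 symbols): a single setter keyed on the scheduling node
serves both vport and group callers, instead of two near-duplicate
per-vport helpers.

#include <stdio.h>

struct model_sched_node {
        int min_rate;
        const char *kind;       /* "vport" or "group" */
};

static int model_set_node_min_rate(struct model_sched_node *node, int min_rate)
{
        if (min_rate == node->min_rate)
                return 0;
        node->min_rate = min_rate;
        printf("%s node min_rate -> %d\n", node->kind, min_rate);
        return 0;
}

int main(void)
{
        struct model_sched_node vport = { 0, "vport" };
        struct model_sched_node group = { 0, "group" };

        /* the same helper now serves both kinds of node */
        model_set_node_min_rate(&vport, 100);
        model_set_node_min_rate(&group, 500);
        return 0;
}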
From patchwork Thu Nov 7 19:43:49 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13867020
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet,
    "Andrew Lunn"
Cc: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Carolina Jubran,
    Cosmin Ratiu, Tariq Toukan
Subject: [PATCH net-next 04/12] net/mlx5: Refactor scheduling element configuration bitmasks
Date: Thu, 7 Nov 2024 21:43:49 +0200
Message-ID: <20241107194357.683732-5-tariqt@nvidia.com>
In-Reply-To: <20241107194357.683732-1-tariqt@nvidia.com>
References: <20241107194357.683732-1-tariqt@nvidia.com>
X-Mailer: git-send-email 2.44.0
X-Mailing-List: netdev@vger.kernel.org

From: Carolina Jubran

Refactor esw_qos_sched_elem_config to set bitmasks only when max_rate
or bw_share values change, allowing the function to configure nodes
with only one of these parameters. This enables more flexible usage for
nodes where only one parameter requires configuration.

Remove scattered assignments and checks to centralize them within this
function, removing the now redundant esw_qos_set_node_max_rate
entirely.

With this refactor, also remove the assignment of the vport scheduling
node max rate to the parent max rate for unlimited vports (where max
rate is set to zero), as firmware already handles this behavior.
Signed-off-by: Carolina Jubran
Reviewed-by: Cosmin Ratiu
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 80 ++++++-------------
 1 file changed, 24 insertions(+), 56 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index 82805bb20c76..c1e7b2425ebe 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -143,10 +143,21 @@ static int esw_qos_sched_elem_config(struct mlx5_esw_sched_node *node, u32 max_r
         if (!MLX5_CAP_GEN(dev, qos) || !MLX5_CAP_QOS(dev, esw_scheduling))
                 return -EOPNOTSUPP;

-        MLX5_SET(scheduling_context, sched_ctx, max_average_bw, max_rate);
-        MLX5_SET(scheduling_context, sched_ctx, bw_share, bw_share);
-        bitmask |= MODIFY_SCHEDULING_ELEMENT_IN_MODIFY_BITMASK_MAX_AVERAGE_BW;
-        bitmask |= MODIFY_SCHEDULING_ELEMENT_IN_MODIFY_BITMASK_BW_SHARE;
+        if (bw_share && (!MLX5_CAP_QOS(dev, esw_bw_share) ||
+                         MLX5_CAP_QOS(dev, max_tsar_bw_share) < MLX5_MIN_BW_SHARE))
+                return -EOPNOTSUPP;
+
+        if (node->max_rate == max_rate && node->bw_share == bw_share)
+                return 0;
+
+        if (node->max_rate != max_rate) {
+                MLX5_SET(scheduling_context, sched_ctx, max_average_bw, max_rate);
+                bitmask |= MODIFY_SCHEDULING_ELEMENT_IN_MODIFY_BITMASK_MAX_AVERAGE_BW;
+        }
+        if (node->bw_share != bw_share) {
+                MLX5_SET(scheduling_context, sched_ctx, bw_share, bw_share);
+                bitmask |= MODIFY_SCHEDULING_ELEMENT_IN_MODIFY_BITMASK_BW_SHARE;
+        }

         err = mlx5_modify_scheduling_element_cmd(dev,
                                                  SCHEDULING_HIERARCHY_E_SWITCH,
@@ -160,6 +171,8 @@ static int esw_qos_sched_elem_config(struct mlx5_esw_sched_node *node, u32 max_r
                 return err;
         }

+        node->max_rate = max_rate;
+        node->bw_share = bw_share;
         if (node->type == SCHED_NODE_TYPE_VPORTS_TSAR)
                 trace_mlx5_esw_node_qos_config(dev, node, node->ix, bw_share, max_rate);
         else if (node->type == SCHED_NODE_TYPE_VPORT)
@@ -217,11 +230,7 @@ static void esw_qos_update_sched_node_bw_share(struct mlx5_esw_sched_node *node,

         bw_share = esw_qos_calc_bw_share(node->min_rate, divider, fw_max_bw_share);

-        if (bw_share == node->bw_share)
-                return;
-
         esw_qos_sched_elem_config(node, node->max_rate, bw_share, extack);
-        node->bw_share = bw_share;
 }

 static void esw_qos_normalize_min_rate(struct mlx5_eswitch *esw,
@@ -250,10 +259,6 @@ static int esw_qos_set_node_min_rate(struct mlx5_esw_sched_node *node,
 {
         struct mlx5_eswitch *esw = node->esw;

-        if (min_rate && (!MLX5_CAP_QOS(esw->dev, esw_bw_share) ||
-                         MLX5_CAP_QOS(esw->dev, max_tsar_bw_share) < MLX5_MIN_BW_SHARE))
-                return -EOPNOTSUPP;
-
         if (min_rate == node->min_rate)
                 return 0;

@@ -263,41 +268,6 @@ static int esw_qos_set_node_min_rate(struct mlx5_esw_sched_node *node,
         return 0;
 }

-static int esw_qos_set_node_max_rate(struct mlx5_esw_sched_node *node,
-                                     u32 max_rate, struct netlink_ext_ack *extack)
-{
-        struct mlx5_esw_sched_node *vport_node;
-        int err;
-
-        if (node->max_rate == max_rate)
-                return 0;
-
-        /* Use parent node limit if new max rate is 0. */
-        if (!max_rate && node->parent)
-                max_rate = node->parent->max_rate;
-
-        err = esw_qos_sched_elem_config(node, max_rate, node->bw_share, extack);
-        if (err)
-                return err;
-
-        node->max_rate = max_rate;
-        if (node->type != SCHED_NODE_TYPE_VPORTS_TSAR)
-                return 0;
-
-        /* Any unlimited vports in the node should be set with the value of the node. */
-        list_for_each_entry(vport_node, &node->children, entry) {
-                if (vport_node->max_rate)
-                        continue;
-
-                err = esw_qos_sched_elem_config(vport_node, max_rate, vport_node->bw_share, extack);
-                if (err)
-                        NL_SET_ERR_MSG_MOD(extack,
-                                           "E-Switch vport implicit rate limit setting failed");
-        }
-
-        return err;
-}
-
 static int esw_qos_create_node_sched_elem(struct mlx5_core_dev *dev, u32 parent_element_id,
                                           u32 *tsar_ix)
 {
@@ -367,7 +337,6 @@ static int esw_qos_update_node_scheduling_element(struct mlx5_vport *vport,
                                                   struct netlink_ext_ack *extack)
 {
         struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node;
-        u32 max_rate;
         int err;

         err = mlx5_destroy_scheduling_element_cmd(curr_node->esw->dev,
@@ -378,9 +347,7 @@ static int esw_qos_update_node_scheduling_element(struct mlx5_vport *vport,
                 return err;
         }

-        /* Use new node max rate if vport max rate is unlimited. */
-        max_rate = vport_node->max_rate ? vport_node->max_rate : new_node->max_rate;
-        err = esw_qos_vport_create_sched_element(vport, new_node, max_rate,
+        err = esw_qos_vport_create_sched_element(vport, new_node, vport_node->max_rate,
                                                  vport_node->bw_share,
                                                  &vport_node->ix);
         if (err) {
@@ -393,8 +360,7 @@ static int esw_qos_update_node_scheduling_element(struct mlx5_vport *vport,
         return 0;

 err_sched:
-        max_rate = vport_node->max_rate ? vport_node->max_rate : curr_node->max_rate;
-        if (esw_qos_vport_create_sched_element(vport, curr_node, max_rate,
+        if (esw_qos_vport_create_sched_element(vport, curr_node, vport_node->max_rate,
                                                vport_node->bw_share,
                                                &vport_node->ix))
                 esw_warn(curr_node->esw->dev, "E-Switch vport node restore failed (vport=%d)\n",
@@ -707,7 +673,8 @@ int mlx5_esw_qos_set_vport_rate(struct mlx5_vport *vport, u32 max_rate, u32 min_

         err = esw_qos_set_node_min_rate(vport->qos.sched_node, min_rate, NULL);
         if (!err)
-                err = esw_qos_set_node_max_rate(vport->qos.sched_node, max_rate, NULL);
+                err = esw_qos_sched_elem_config(vport->qos.sched_node, max_rate,
+                                                vport->qos.sched_node->bw_share, NULL);
 unlock:
         esw_qos_unlock(esw);
         return err;
@@ -930,7 +897,8 @@ int mlx5_esw_devlink_rate_leaf_tx_max_set(struct devlink_rate *rate_leaf, void *
         if (err)
                 goto unlock;

-        err = esw_qos_set_node_max_rate(vport->qos.sched_node, tx_max, extack);
+        err = esw_qos_sched_elem_config(vport->qos.sched_node, tx_max,
+                                        vport->qos.sched_node->bw_share, extack);
 unlock:
         esw_qos_unlock(esw);
         return err;
@@ -965,7 +933,7 @@ int mlx5_esw_devlink_rate_node_tx_max_set(struct devlink_rate *rate_node, void *
                 return err;

         esw_qos_lock(esw);
-        err = esw_qos_set_node_max_rate(node, tx_max, extack);
+        err = esw_qos_sched_elem_config(node, tx_max, node->bw_share, extack);
         esw_qos_unlock(esw);
         return err;
 }
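A small standalone C model of the modify-bitmask pattern described
above (illustrative names, not the mlx5 API): only fields whose values
actually changed are written and flagged in the bitmask, so max_rate
and bw_share can be updated independently of each other.

#include <stdint.h>
#include <stdio.h>

#define MODEL_BIT_MAX_RATE  (1u << 0)
#define MODEL_BIT_BW_SHARE  (1u << 1)

struct model_node { uint32_t max_rate, bw_share; };

static int model_sched_elem_config(struct model_node *n, uint32_t max_rate,
                                   uint32_t bw_share)
{
        uint32_t bitmask = 0;

        if (n->max_rate == max_rate && n->bw_share == bw_share)
                return 0;                       /* nothing to modify */
        if (n->max_rate != max_rate)
                bitmask |= MODEL_BIT_MAX_RATE;
        if (n->bw_share != bw_share)
                bitmask |= MODEL_BIT_BW_SHARE;

        /* stand-in for the firmware modify command that takes the bitmask */
        printf("modify: bitmask=0x%x max_rate=%u bw_share=%u\n",
               bitmask, max_rate, bw_share);

        n->max_rate = max_rate;
        n->bw_share = bw_share;
        return 0;
}

int main(void)
{
        struct model_node n = { 0, 0 };

        model_sched_elem_config(&n, 1000, 0);   /* only max_rate is flagged */
        model_sched_elem_config(&n, 1000, 50);  /* only bw_share is flagged */
        return 0;
}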
14F5B217F4A for ; Thu, 7 Nov 2024 19:45:38 +0000 (UTC)
From: Tariq Toukan To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn" CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , Carolina Jubran , Cosmin Ratiu , Tariq Toukan Subject: [PATCH net-next 05/12] net/mlx5: Generalize scheduling element operations Date: Thu, 7 Nov 2024 21:43:50 +0200 Message-ID: <20241107194357.683732-6-tariqt@nvidia.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20241107194357.683732-1-tariqt@nvidia.com> References: <20241107194357.683732-1-tariqt@nvidia.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0
X-Patchwork-Delegate: kuba@kernel.org From: Carolina Jubran Introduce helper functions to create and destroy scheduling elements, allowing flexible configuration for different scheduling element types. The new helpers streamline the process by centralizing error handling and logging through esw_qos_sched_elem_warn, which now takes the operation type (create, destroy, or modify). The changes also adjust esw_qos_vport_enable and mlx5_esw_qos_vport_disable to use the new generalized create/destroy helpers. The destroy helpers now log errors with esw_warn without returning them; since the node is already destroyed, callers have no further action to take.
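For illustration only, here is a minimal, self-contained C sketch of the wrapper pattern described above: one warning helper parameterized by the operation name, shared by the create and destroy paths, with the destroy path only warning instead of returning an error. All identifiers below (sched_node, fw_create_elem, fw_destroy_elem, sched_elem_warn) are hypothetical stand-ins, not the driver's own symbols.

#include <stdio.h>

struct sched_node {
	int ix;
	const char *type;
};

/* Stand-ins for the firmware create/destroy commands. */
static int fw_create_elem(struct sched_node *node)
{
	node->ix = 1;
	return 0;
}

static int fw_destroy_elem(struct sched_node *node)
{
	(void)node;
	return 0;
}

/* One warning helper, parameterized by the operation name. */
static void sched_elem_warn(const struct sched_node *node, int err, const char *op)
{
	fprintf(stderr, "%s %s scheduling element failed (err=%d)\n",
		op, node->type, err);
}

static int sched_elem_create(struct sched_node *node)
{
	int err = fw_create_elem(node);

	if (err)
		sched_elem_warn(node, err, "create");
	return err;
}

static void sched_elem_destroy(struct sched_node *node)
{
	int err = fw_destroy_elem(node);

	/* Destroy only warns: the node is gone, callers cannot act on the error. */
	if (err)
		sched_elem_warn(node, err, "destroy");
}

int main(void)
{
	struct sched_node node = { .ix = 0, .type = "vport" };

	if (!sched_elem_create(&node))
		sched_elem_destroy(&node);
	return 0;
}

The design point mirrored here is that destroy callers have nothing useful to do with a failure, so the helper absorbs it and only logs.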
Signed-off-by: Carolina Jubran Reviewed-by: Cosmin Ratiu Signed-off-by: Tariq Toukan --- .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 157 +++++++++--------- 1 file changed, 76 insertions(+), 81 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c index c1e7b2425ebe..155400d36a1e 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c @@ -118,18 +118,49 @@ mlx5_esw_qos_vport_get_parent(const struct mlx5_vport *vport) return vport->qos.sched_node->parent; } -static void esw_qos_sched_elem_config_warn(struct mlx5_esw_sched_node *node, int err) +static void esw_qos_sched_elem_warn(struct mlx5_esw_sched_node *node, int err, const char *op) { if (node->vport) { esw_warn(node->esw->dev, - "E-Switch modify %s scheduling element failed (vport=%d,err=%d)\n", - sched_node_type_str[node->type], node->vport->vport, err); + "E-Switch %s %s scheduling element failed (vport=%d,err=%d)\n", + op, sched_node_type_str[node->type], node->vport->vport, err); return; } esw_warn(node->esw->dev, - "E-Switch modify %s scheduling element failed (err=%d)\n", - sched_node_type_str[node->type], err); + "E-Switch %s %s scheduling element failed (err=%d)\n", + op, sched_node_type_str[node->type], err); +} + +static int esw_qos_node_create_sched_element(struct mlx5_esw_sched_node *node, void *ctx, + struct netlink_ext_ack *extack) +{ + int err; + + err = mlx5_create_scheduling_element_cmd(node->esw->dev, SCHEDULING_HIERARCHY_E_SWITCH, ctx, + &node->ix); + if (err) { + esw_qos_sched_elem_warn(node, err, "create"); + NL_SET_ERR_MSG_MOD(extack, "E-Switch create scheduling element failed"); + } + + return err; +} + +static int esw_qos_node_destroy_sched_element(struct mlx5_esw_sched_node *node, + struct netlink_ext_ack *extack) +{ + int err; + + err = mlx5_destroy_scheduling_element_cmd(node->esw->dev, + SCHEDULING_HIERARCHY_E_SWITCH, + node->ix); + if (err) { + esw_qos_sched_elem_warn(node, err, "destroy"); + NL_SET_ERR_MSG_MOD(extack, "E-Switch destroying scheduling element failed."); + } + + return err; } static int esw_qos_sched_elem_config(struct mlx5_esw_sched_node *node, u32 max_rate, u32 bw_share, @@ -165,7 +196,7 @@ static int esw_qos_sched_elem_config(struct mlx5_esw_sched_node *node, u32 max_r node->ix, bitmask); if (err) { - esw_qos_sched_elem_config_warn(node, err); + esw_qos_sched_elem_warn(node, err, "modify"); NL_SET_ERR_MSG_MOD(extack, "E-Switch modify scheduling element failed"); return err; @@ -295,14 +326,12 @@ static int esw_qos_create_node_sched_elem(struct mlx5_core_dev *dev, u32 parent_ tsar_ix); } -static int -esw_qos_vport_create_sched_element(struct mlx5_vport *vport, struct mlx5_esw_sched_node *parent, - u32 max_rate, u32 bw_share, u32 *sched_elem_ix) +static int esw_qos_vport_create_sched_element(struct mlx5_esw_sched_node *vport_node, u32 bw_share, + struct netlink_ext_ack *extack) { u32 sched_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {}; - struct mlx5_core_dev *dev = parent->esw->dev; + struct mlx5_core_dev *dev = vport_node->esw->dev; void *attr; - int err; if (!mlx5_qos_element_type_supported(dev, SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT, @@ -312,23 +341,12 @@ esw_qos_vport_create_sched_element(struct mlx5_vport *vport, struct mlx5_esw_sch MLX5_SET(scheduling_context, sched_ctx, element_type, SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT); attr = MLX5_ADDR_OF(scheduling_context, sched_ctx, element_attributes); - MLX5_SET(vport_element, attr, vport_number, 
vport->vport); - MLX5_SET(scheduling_context, sched_ctx, parent_element_id, parent->ix); - MLX5_SET(scheduling_context, sched_ctx, max_average_bw, max_rate); + MLX5_SET(vport_element, attr, vport_number, vport_node->vport->vport); + MLX5_SET(scheduling_context, sched_ctx, parent_element_id, vport_node->parent->ix); + MLX5_SET(scheduling_context, sched_ctx, max_average_bw, vport_node->max_rate); MLX5_SET(scheduling_context, sched_ctx, bw_share, bw_share); - err = mlx5_create_scheduling_element_cmd(dev, - SCHEDULING_HIERARCHY_E_SWITCH, - sched_ctx, - sched_elem_ix); - if (err) { - esw_warn(dev, - "E-Switch create vport scheduling element failed (vport=%d,err=%d)\n", - vport->vport, err); - return err; - } - - return 0; + return esw_qos_node_create_sched_element(vport_node, sched_ctx, extack); } static int esw_qos_update_node_scheduling_element(struct mlx5_vport *vport, @@ -339,30 +357,22 @@ static int esw_qos_update_node_scheduling_element(struct mlx5_vport *vport, struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node; int err; - err = mlx5_destroy_scheduling_element_cmd(curr_node->esw->dev, - SCHEDULING_HIERARCHY_E_SWITCH, - vport_node->ix); - if (err) { - NL_SET_ERR_MSG_MOD(extack, "E-Switch destroy vport scheduling element failed"); + err = esw_qos_node_destroy_sched_element(vport_node, extack); + if (err) return err; - } - err = esw_qos_vport_create_sched_element(vport, new_node, vport_node->max_rate, - vport_node->bw_share, - &vport_node->ix); + esw_qos_node_set_parent(vport_node, new_node); + err = esw_qos_vport_create_sched_element(vport_node, vport_node->bw_share, extack); if (err) { NL_SET_ERR_MSG_MOD(extack, "E-Switch vport node set failed."); goto err_sched; } - esw_qos_node_set_parent(vport->qos.sched_node, new_node); - return 0; err_sched: - if (esw_qos_vport_create_sched_element(vport, curr_node, vport_node->max_rate, - vport_node->bw_share, - &vport_node->ix)) + esw_qos_node_set_parent(vport_node, curr_node); + if (esw_qos_vport_create_sched_element(vport_node, vport_node->bw_share, NULL)) esw_warn(curr_node->esw->dev, "E-Switch vport node restore failed (vport=%d)\n", vport->vport); @@ -425,6 +435,12 @@ static void __esw_qos_free_node(struct mlx5_esw_sched_node *node) kfree(node); } +static void esw_qos_destroy_node(struct mlx5_esw_sched_node *node, struct netlink_ext_ack *extack) +{ + esw_qos_node_destroy_sched_element(node, extack); + __esw_qos_free_node(node); +} + static struct mlx5_esw_sched_node * __esw_qos_create_vports_sched_node(struct mlx5_eswitch *esw, struct mlx5_esw_sched_node *parent, struct netlink_ext_ack *extack) @@ -483,23 +499,13 @@ esw_qos_create_vports_sched_node(struct mlx5_eswitch *esw, struct netlink_ext_ac return node; } -static int __esw_qos_destroy_node(struct mlx5_esw_sched_node *node, struct netlink_ext_ack *extack) +static void __esw_qos_destroy_node(struct mlx5_esw_sched_node *node, struct netlink_ext_ack *extack) { struct mlx5_eswitch *esw = node->esw; - int err; trace_mlx5_esw_node_qos_destroy(esw->dev, node, node->ix); - - err = mlx5_destroy_scheduling_element_cmd(esw->dev, - SCHEDULING_HIERARCHY_E_SWITCH, - node->ix); - if (err) - NL_SET_ERR_MSG_MOD(extack, "E-Switch destroy TSAR_ID failed"); - __esw_qos_free_node(node); - + esw_qos_destroy_node(node, extack); esw_qos_normalize_min_rate(esw, NULL, extack); - - return err; } static int esw_qos_create(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack) @@ -584,11 +590,11 @@ static void esw_qos_put(struct mlx5_eswitch *esw) esw_qos_destroy(esw); } -static int 
esw_qos_vport_enable(struct mlx5_vport *vport, - u32 max_rate, u32 bw_share, struct netlink_ext_ack *extack) +static int esw_qos_vport_enable(struct mlx5_vport *vport, u32 max_rate, u32 bw_share, + struct netlink_ext_ack *extack) { struct mlx5_eswitch *esw = vport->dev->priv.eswitch; - u32 sched_elem_ix; + struct mlx5_esw_sched_node *sched_node; int err; esw_assert_qos_lock_held(esw); @@ -599,29 +605,28 @@ static int esw_qos_vport_enable(struct mlx5_vport *vport, if (err) return err; - err = esw_qos_vport_create_sched_element(vport, esw->qos.node0, max_rate, bw_share, - &sched_elem_ix); - if (err) - goto err_out; - - vport->qos.sched_node = __esw_qos_alloc_node(esw, sched_elem_ix, SCHED_NODE_TYPE_VPORT, - esw->qos.node0); - if (!vport->qos.sched_node) { + sched_node = __esw_qos_alloc_node(esw, 0, SCHED_NODE_TYPE_VPORT, esw->qos.node0); + if (!sched_node) { err = -ENOMEM; goto err_alloc; } - vport->qos.sched_node->vport = vport; + sched_node->max_rate = max_rate; + sched_node->min_rate = 0; + sched_node->bw_share = bw_share; + sched_node->vport = vport; + err = esw_qos_vport_create_sched_element(sched_node, 0, extack); + if (err) + goto err_vport_create; trace_mlx5_esw_vport_qos_create(vport->dev, vport, bw_share, max_rate); + vport->qos.sched_node = sched_node; return 0; +err_vport_create: + __esw_qos_free_node(sched_node); err_alloc: - if (mlx5_destroy_scheduling_element_cmd(esw->dev, - SCHEDULING_HIERARCHY_E_SWITCH, sched_elem_ix)) - esw_warn(esw->dev, "E-Switch destroy vport scheduling element failed.\n"); -err_out: esw_qos_put(esw); return err; @@ -632,7 +637,6 @@ void mlx5_esw_qos_vport_disable(struct mlx5_vport *vport) struct mlx5_eswitch *esw = vport->dev->priv.eswitch; struct mlx5_esw_sched_node *vport_node; struct mlx5_core_dev *dev; - int err; lockdep_assert_held(&esw->state_lock); esw_qos_lock(esw); @@ -645,15 +649,7 @@ void mlx5_esw_qos_vport_disable(struct mlx5_vport *vport) dev = vport_node->esw->dev; trace_mlx5_esw_vport_qos_destroy(dev, vport); - err = mlx5_destroy_scheduling_element_cmd(dev, - SCHEDULING_HIERARCHY_E_SWITCH, - vport_node->ix); - if (err) - esw_warn(dev, - "E-Switch destroy vport scheduling element failed (vport=%d,err=%d)\n", - vport->vport, err); - - __esw_qos_free_node(vport_node); + esw_qos_destroy_node(vport_node, NULL); memset(&vport->qos, 0, sizeof(vport->qos)); esw_qos_put(esw); @@ -974,13 +970,12 @@ int mlx5_esw_devlink_rate_node_del(struct devlink_rate *rate_node, void *priv, { struct mlx5_esw_sched_node *node = priv; struct mlx5_eswitch *esw = node->esw; - int err; esw_qos_lock(esw); - err = __esw_qos_destroy_node(node, extack); + __esw_qos_destroy_node(node, extack); esw_qos_put(esw); esw_qos_unlock(esw); - return err; + return 0; } int mlx5_esw_qos_vport_update_node(struct mlx5_vport *vport, From patchwork Thu Nov 7 19:43:51 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 13867022 X-Patchwork-Delegate: kuba@kernel.org Received: from NAM02-DM3-obe.outbound.protection.outlook.com (mail-dm3nam02on2043.outbound.protection.outlook.com [40.107.95.43]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 908E82178E3 for ; Thu, 7 Nov 2024 19:45:39 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=fail smtp.client-ip=40.107.95.43 ARC-Seal: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1731008741; cv=fail; 
From: Tariq Toukan To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn" CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , Carolina Jubran , Cosmin Ratiu , Tariq Toukan Subject: [PATCH net-next 06/12] net/mlx5: Integrate esw_qos_vport_enable logic into rate operations Date: Thu, 7 Nov 2024 21:43:51 +0200 Message-ID: <20241107194357.683732-7-tariqt@nvidia.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20241107194357.683732-1-tariqt@nvidia.com> References: <20241107194357.683732-1-tariqt@nvidia.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0
X-Patchwork-Delegate: kuba@kernel.org From: Carolina Jubran Fold the esw_qos_vport_enable function into operations for configuring maximum and minimum rates, simplifying QoS logic. This change consolidates enabling and updating the scheduling element configuration, streamlining how vport QoS is initialized and adjusted.
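For illustration only, here is a minimal, self-contained C sketch of the "enable on first use" behavior described above, where the max/min rate setters create the vport scheduling node themselves when none exists yet and otherwise just update it. All identifiers below (vport_qos_node, vport_qos_enable, vport_set_max_rate, vport_set_min_rate) are hypothetical stand-ins, not the driver's functions.

#include <stdio.h>
#include <stdlib.h>

struct vport_qos_node {
	unsigned int max_rate;
	unsigned int min_rate;
};

struct vport {
	struct vport_qos_node *node;
};

/* Stand-in for enabling vport QoS: allocate the scheduling node with the
 * requested initial rates.
 */
static int vport_qos_enable(struct vport *vport, unsigned int max_rate,
			    unsigned int min_rate)
{
	vport->node = calloc(1, sizeof(*vport->node));
	if (!vport->node)
		return -1;
	vport->node->max_rate = max_rate;
	vport->node->min_rate = min_rate;
	return 0;
}

static int vport_set_max_rate(struct vport *vport, unsigned int max_rate)
{
	if (!vport->node)			/* first use: enable QoS with this rate */
		return vport_qos_enable(vport, max_rate, 0);
	vport->node->max_rate = max_rate;	/* already enabled: just update */
	return 0;
}

static int vport_set_min_rate(struct vport *vport, unsigned int min_rate)
{
	if (!vport->node)
		return vport_qos_enable(vport, 0, min_rate);
	vport->node->min_rate = min_rate;
	return 0;
}

int main(void)
{
	struct vport v = { 0 };

	vport_set_min_rate(&v, 100);	/* enables QoS implicitly */
	vport_set_max_rate(&v, 1000);	/* updates the existing node */
	printf("min=%u max=%u\n", v.node->min_rate, v.node->max_rate);
	free(v.node);
	return 0;
}

Usage-wise, callers no longer need a separate enable step before setting a rate; the first rate operation performs it, which is the consolidation the patch describes.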
Signed-off-by: Carolina Jubran Reviewed-by: Cosmin Ratiu Signed-off-by: Tariq Toukan --- .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 87 +++++++++---------- 1 file changed, 39 insertions(+), 48 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c index 155400d36a1e..35e493924c09 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c @@ -590,22 +590,21 @@ static void esw_qos_put(struct mlx5_eswitch *esw) esw_qos_destroy(esw); } -static int esw_qos_vport_enable(struct mlx5_vport *vport, u32 max_rate, u32 bw_share, - struct netlink_ext_ack *extack) +static int esw_qos_vport_enable(struct mlx5_vport *vport, struct mlx5_esw_sched_node *parent, + u32 max_rate, u32 bw_share, struct netlink_ext_ack *extack) { struct mlx5_eswitch *esw = vport->dev->priv.eswitch; struct mlx5_esw_sched_node *sched_node; int err; esw_assert_qos_lock_held(esw); - if (vport->qos.sched_node) - return 0; err = esw_qos_get(esw, extack); if (err) return err; - sched_node = __esw_qos_alloc_node(esw, 0, SCHED_NODE_TYPE_VPORT, esw->qos.node0); + parent = parent ?: esw->qos.node0; + sched_node = __esw_qos_alloc_node(parent->esw, 0, SCHED_NODE_TYPE_VPORT, parent); if (!sched_node) { err = -ENOMEM; goto err_alloc; @@ -657,21 +656,42 @@ void mlx5_esw_qos_vport_disable(struct mlx5_vport *vport) esw_qos_unlock(esw); } +static int mlx5_esw_qos_set_vport_max_rate(struct mlx5_vport *vport, u32 max_rate, + struct netlink_ext_ack *extack) +{ + struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node; + + esw_assert_qos_lock_held(vport->dev->priv.eswitch); + + if (!vport_node) + return esw_qos_vport_enable(vport, NULL, max_rate, 0, extack); + else + return esw_qos_sched_elem_config(vport_node, max_rate, vport_node->bw_share, + extack); +} + +static int mlx5_esw_qos_set_vport_min_rate(struct mlx5_vport *vport, u32 min_rate, + struct netlink_ext_ack *extack) +{ + struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node; + + esw_assert_qos_lock_held(vport->dev->priv.eswitch); + + if (!vport_node) + return esw_qos_vport_enable(vport, NULL, 0, min_rate, extack); + else + return esw_qos_set_node_min_rate(vport_node, min_rate, extack); +} + int mlx5_esw_qos_set_vport_rate(struct mlx5_vport *vport, u32 max_rate, u32 min_rate) { struct mlx5_eswitch *esw = vport->dev->priv.eswitch; int err; esw_qos_lock(esw); - err = esw_qos_vport_enable(vport, 0, 0, NULL); - if (err) - goto unlock; - - err = esw_qos_set_node_min_rate(vport->qos.sched_node, min_rate, NULL); + err = mlx5_esw_qos_set_vport_min_rate(vport, min_rate, NULL); if (!err) - err = esw_qos_sched_elem_config(vport->qos.sched_node, max_rate, - vport->qos.sched_node->bw_share, NULL); -unlock: + err = mlx5_esw_qos_set_vport_max_rate(vport, max_rate, NULL); esw_qos_unlock(esw); return err; } @@ -757,10 +777,8 @@ static int mlx5_esw_qos_link_speed_verify(struct mlx5_core_dev *mdev, int mlx5_esw_qos_modify_vport_rate(struct mlx5_eswitch *esw, u16 vport_num, u32 rate_mbps) { - u32 ctx[MLX5_ST_SZ_DW(scheduling_context)] = {}; struct mlx5_vport *vport; u32 link_speed_max; - u32 bitmask; int err; vport = mlx5_eswitch_get_vport(esw, vport_num); @@ -779,20 +797,7 @@ int mlx5_esw_qos_modify_vport_rate(struct mlx5_eswitch *esw, u16 vport_num, u32 } esw_qos_lock(esw); - if (!vport->qos.sched_node) { - /* Eswitch QoS wasn't enabled yet. Enable it and vport QoS. 
*/ - err = esw_qos_vport_enable(vport, rate_mbps, 0, NULL); - } else { - struct mlx5_core_dev *dev = vport->qos.sched_node->parent->esw->dev; - - MLX5_SET(scheduling_context, ctx, max_average_bw, rate_mbps); - bitmask = MODIFY_SCHEDULING_ELEMENT_IN_MODIFY_BITMASK_MAX_AVERAGE_BW; - err = mlx5_modify_scheduling_element_cmd(dev, - SCHEDULING_HIERARCHY_E_SWITCH, - ctx, - vport->qos.sched_node->ix, - bitmask); - } + err = mlx5_esw_qos_set_vport_max_rate(vport, rate_mbps, NULL); esw_qos_unlock(esw); return err; @@ -863,12 +868,7 @@ int mlx5_esw_devlink_rate_leaf_tx_share_set(struct devlink_rate *rate_leaf, void return err; esw_qos_lock(esw); - err = esw_qos_vport_enable(vport, 0, 0, extack); - if (err) - goto unlock; - - err = esw_qos_set_node_min_rate(vport->qos.sched_node, tx_share, extack); -unlock: + err = mlx5_esw_qos_set_vport_min_rate(vport, tx_share, extack); esw_qos_unlock(esw); return err; } @@ -889,13 +889,7 @@ int mlx5_esw_devlink_rate_leaf_tx_max_set(struct devlink_rate *rate_leaf, void * return err; esw_qos_lock(esw); - err = esw_qos_vport_enable(vport, 0, 0, extack); - if (err) - goto unlock; - - err = esw_qos_sched_elem_config(vport->qos.sched_node, tx_max, - vport->qos.sched_node->bw_share, extack); -unlock: + err = mlx5_esw_qos_set_vport_max_rate(vport, tx_max, extack); esw_qos_unlock(esw); return err; } @@ -991,13 +985,10 @@ int mlx5_esw_qos_vport_update_node(struct mlx5_vport *vport, } esw_qos_lock(esw); - if (!vport->qos.sched_node && !node) - goto unlock; - - err = esw_qos_vport_enable(vport, 0, 0, extack); - if (!err) + if (!vport->qos.sched_node && node) + err = esw_qos_vport_enable(vport, node, 0, 0, extack); + else if (vport->qos.sched_node) err = esw_qos_vport_update_node(vport, node, extack); -unlock: esw_qos_unlock(esw); return err; } From patchwork Thu Nov 7 19:43:52 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 13867023 X-Patchwork-Delegate: kuba@kernel.org Received: from NAM10-DM6-obe.outbound.protection.outlook.com (mail-dm6nam10on2061.outbound.protection.outlook.com [40.107.93.61]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5A4942178EA for ; Thu, 7 Nov 2024 19:45:43 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=fail smtp.client-ip=40.107.93.61 ARC-Seal: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1731008745; cv=fail; b=T0uZWS19Cua9eoQVC7Dky4QQOVX6SaMeDxo2POKIbq4sOj/uCcMywdze0gRv9rfic+GwHXtoFSuhCaCNi443F+pljNYhE7WQyWQMiajGU/vQvfkt0yw89UUAT1QFE2mQNuzn5uZNL1Yr0K7Hc5KZvDwcLqRsIeiXsaNJZNc2G7A= ARC-Message-Signature: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1731008745; c=relaxed/simple; bh=TvnoeQ72HxJUBzsscZBchBk6DiXMjKmBYP0lBjelASw=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=u/ZAaxOsOK+HsTUNe9fIFwiW8m8FI8cPDy8NQKawKAoUaUYVXNk7g7hJA2RD8EcMfEFFw6wU1zh7LMKgQgLowTlVYlkd8w4N3y/xHWWers8/kAR72GUeBfFiPYUc4/R5l+WYZJm0GUPzaALHFiAxGq6rD0r96jGZsoyGPnQKDxs= ARC-Authentication-Results: i=2; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=nvidia.com; spf=fail smtp.mailfrom=nvidia.com; dkim=pass (2048-bit key) header.d=Nvidia.com header.i=@Nvidia.com header.b=LBI3PxsS; arc=fail smtp.client-ip=40.107.93.61 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=nvidia.com Authentication-Results: 
smtp.subspace.kernel.org; spf=fail smtp.mailfrom=nvidia.com
From: Tariq Toukan To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn" CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , Carolina Jubran , Cosmin Ratiu , Tariq Toukan Subject: [PATCH net-next 07/12] net/mlx5: Make vport QoS enablement more flexible for future extensions Date: Thu, 7 Nov 2024 21:43:52 +0200 Message-ID: <20241107194357.683732-8-tariqt@nvidia.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20241107194357.683732-1-tariqt@nvidia.com> References: <20241107194357.683732-1-tariqt@nvidia.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a;Ip=[216.228.117.161];Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: BN3PEPF0000B36F.namprd21.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: LV3PR12MB9119 X-Patchwork-Delegate: kuba@kernel.org From: Carolina Jubran Refactor esw_qos_vport_enable to support more generic configurations, allowing it to be reused for new vport node types in future patches. This refactor includes a new way to change the vport parent node by disabling the current setup and re-enabling it with the new parent. This change sets the foundation for adapting configuration based on the parent type in future patches. Signed-off-by: Carolina Jubran Reviewed-by: Cosmin Ratiu Signed-off-by: Tariq Toukan --- .../mellanox/mlx5/core/esw/devlink_port.c | 2 +- .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 193 ++++++++---------- .../net/ethernet/mellanox/mlx5/core/esw/qos.h | 1 + .../net/ethernet/mellanox/mlx5/core/eswitch.c | 6 +- .../net/ethernet/mellanox/mlx5/core/eswitch.h | 5 +- 5 files changed, 96 insertions(+), 111 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c index d0f38818363f..982fe3714683 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c @@ -195,7 +195,7 @@ void mlx5_esw_offloads_devlink_port_unregister(struct mlx5_vport *vport) return; dl_port = vport->dl_port; - mlx5_esw_qos_vport_update_node(vport, NULL, NULL); + mlx5_esw_qos_vport_update_parent(vport, NULL, NULL); devl_rate_leaf_destroy(&dl_port->dl_port); devl_port_unregister(&dl_port->dl_port); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c index 35e493924c09..8b7c843446e1 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c @@ -101,6 +101,12 @@ esw_qos_node_set_parent(struct mlx5_esw_sched_node *node, struct mlx5_esw_sched_ node->esw = parent->esw; } +void mlx5_esw_qos_vport_qos_free(struct mlx5_vport *vport) +{ + kfree(vport->qos.sched_node); + memset(&vport->qos, 0, sizeof(vport->qos)); +} + u32 mlx5_esw_qos_vport_get_sched_elem_ix(const struct mlx5_vport *vport) { if (!vport->qos.sched_node) @@ -326,7 +332,7 @@ static int esw_qos_create_node_sched_elem(struct mlx5_core_dev *dev, u32 parent_ tsar_ix); } -static int esw_qos_vport_create_sched_element(struct mlx5_esw_sched_node *vport_node, u32 bw_share, +static int esw_qos_vport_create_sched_element(struct mlx5_esw_sched_node *vport_node, struct netlink_ext_ack *extack) { u32 sched_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {}; @@ -344,69 +350,10 @@ static int esw_qos_vport_create_sched_element(struct mlx5_esw_sched_node *vport_ MLX5_SET(vport_element, attr, vport_number, vport_node->vport->vport); MLX5_SET(scheduling_context, sched_ctx, parent_element_id, vport_node->parent->ix); MLX5_SET(scheduling_context, sched_ctx, max_average_bw, vport_node->max_rate); - MLX5_SET(scheduling_context, sched_ctx, bw_share, bw_share); return esw_qos_node_create_sched_element(vport_node, sched_ctx, extack); } -static int esw_qos_update_node_scheduling_element(struct mlx5_vport *vport, - struct mlx5_esw_sched_node *curr_node, - struct mlx5_esw_sched_node *new_node, - 
struct netlink_ext_ack *extack) -{ - struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node; - int err; - - err = esw_qos_node_destroy_sched_element(vport_node, extack); - if (err) - return err; - - esw_qos_node_set_parent(vport_node, new_node); - err = esw_qos_vport_create_sched_element(vport_node, vport_node->bw_share, extack); - if (err) { - NL_SET_ERR_MSG_MOD(extack, "E-Switch vport node set failed."); - goto err_sched; - } - - return 0; - -err_sched: - esw_qos_node_set_parent(vport_node, curr_node); - if (esw_qos_vport_create_sched_element(vport_node, vport_node->bw_share, NULL)) - esw_warn(curr_node->esw->dev, "E-Switch vport node restore failed (vport=%d)\n", - vport->vport); - - return err; -} - -static int esw_qos_vport_update_node(struct mlx5_vport *vport, - struct mlx5_esw_sched_node *node, - struct netlink_ext_ack *extack) -{ - struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node; - struct mlx5_eswitch *esw = vport->dev->priv.eswitch; - struct mlx5_esw_sched_node *new_node, *curr_node; - int err; - - esw_assert_qos_lock_held(esw); - curr_node = vport_node->parent; - new_node = node ?: esw->qos.node0; - if (curr_node == new_node) - return 0; - - err = esw_qos_update_node_scheduling_element(vport, curr_node, new_node, extack); - if (err) - return err; - - /* Recalculate bw share weights of old and new nodes */ - if (vport_node->bw_share || new_node->bw_share) { - esw_qos_normalize_min_rate(curr_node->esw, curr_node, extack); - esw_qos_normalize_min_rate(new_node->esw, new_node, extack); - } - - return 0; -} - static struct mlx5_esw_sched_node * __esw_qos_alloc_node(struct mlx5_eswitch *esw, u32 tsar_ix, enum sched_node_type type, struct mlx5_esw_sched_node *parent) @@ -590,43 +537,62 @@ static void esw_qos_put(struct mlx5_eswitch *esw) esw_qos_destroy(esw); } +static void esw_qos_vport_disable(struct mlx5_vport *vport, struct netlink_ext_ack *extack) +{ + struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node; + struct mlx5_esw_sched_node *parent = vport_node->parent; + + esw_qos_node_destroy_sched_element(vport_node, extack); + + vport_node->bw_share = 0; + list_del_init(&vport_node->entry); + esw_qos_normalize_min_rate(parent->esw, parent, extack); + + trace_mlx5_esw_vport_qos_destroy(vport_node->esw->dev, vport); +} + static int esw_qos_vport_enable(struct mlx5_vport *vport, struct mlx5_esw_sched_node *parent, - u32 max_rate, u32 bw_share, struct netlink_ext_ack *extack) + struct netlink_ext_ack *extack) +{ + int err; + + esw_assert_qos_lock_held(vport->dev->priv.eswitch); + + esw_qos_node_set_parent(vport->qos.sched_node, parent); + err = esw_qos_vport_create_sched_element(vport->qos.sched_node, extack); + if (err) + return err; + + esw_qos_normalize_min_rate(parent->esw, parent, extack); + + return 0; +} + +static int mlx5_esw_qos_vport_enable(struct mlx5_vport *vport, enum sched_node_type type, + struct mlx5_esw_sched_node *parent, u32 max_rate, + u32 min_rate, struct netlink_ext_ack *extack) { struct mlx5_eswitch *esw = vport->dev->priv.eswitch; struct mlx5_esw_sched_node *sched_node; int err; esw_assert_qos_lock_held(esw); - err = esw_qos_get(esw, extack); if (err) return err; parent = parent ?: esw->qos.node0; - sched_node = __esw_qos_alloc_node(parent->esw, 0, SCHED_NODE_TYPE_VPORT, parent); - if (!sched_node) { - err = -ENOMEM; - goto err_alloc; - } + sched_node = __esw_qos_alloc_node(parent->esw, 0, type, parent); + if (!sched_node) + return -ENOMEM; sched_node->max_rate = max_rate; - sched_node->min_rate = 0; - sched_node->bw_share = 
bw_share; + sched_node->min_rate = min_rate; sched_node->vport = vport; - err = esw_qos_vport_create_sched_element(sched_node, 0, extack); - if (err) - goto err_vport_create; - - trace_mlx5_esw_vport_qos_create(vport->dev, vport, bw_share, max_rate); vport->qos.sched_node = sched_node; - - return 0; - -err_vport_create: - __esw_qos_free_node(sched_node); -err_alloc: - esw_qos_put(esw); + err = esw_qos_vport_enable(vport, parent, extack); + if (err) + esw_qos_put(esw); return err; } @@ -634,23 +600,18 @@ static int esw_qos_vport_enable(struct mlx5_vport *vport, struct mlx5_esw_sched_ void mlx5_esw_qos_vport_disable(struct mlx5_vport *vport) { struct mlx5_eswitch *esw = vport->dev->priv.eswitch; - struct mlx5_esw_sched_node *vport_node; - struct mlx5_core_dev *dev; + struct mlx5_esw_sched_node *parent; lockdep_assert_held(&esw->state_lock); esw_qos_lock(esw); - vport_node = vport->qos.sched_node; - if (!vport_node) + if (!vport->qos.sched_node) goto unlock; - WARN(vport_node->parent != esw->qos.node0, - "Disabling QoS on port before detaching it from node"); - - dev = vport_node->esw->dev; - trace_mlx5_esw_vport_qos_destroy(dev, vport); - esw_qos_destroy_node(vport_node, NULL); - memset(&vport->qos, 0, sizeof(vport->qos)); + parent = vport->qos.sched_node->parent; + WARN(parent != esw->qos.node0, "Disabling QoS on port before detaching it from node"); + esw_qos_vport_disable(vport, NULL); + mlx5_esw_qos_vport_qos_free(vport); esw_qos_put(esw); unlock: esw_qos_unlock(esw); @@ -664,7 +625,8 @@ static int mlx5_esw_qos_set_vport_max_rate(struct mlx5_vport *vport, u32 max_rat esw_assert_qos_lock_held(vport->dev->priv.eswitch); if (!vport_node) - return esw_qos_vport_enable(vport, NULL, max_rate, 0, extack); + return mlx5_esw_qos_vport_enable(vport, SCHED_NODE_TYPE_VPORT, NULL, max_rate, 0, + extack); else return esw_qos_sched_elem_config(vport_node, max_rate, vport_node->bw_share, extack); @@ -678,7 +640,8 @@ static int mlx5_esw_qos_set_vport_min_rate(struct mlx5_vport *vport, u32 min_rat esw_assert_qos_lock_held(vport->dev->priv.eswitch); if (!vport_node) - return esw_qos_vport_enable(vport, NULL, 0, min_rate, extack); + return mlx5_esw_qos_vport_enable(vport, SCHED_NODE_TYPE_VPORT, NULL, 0, min_rate, + extack); else return esw_qos_set_node_min_rate(vport_node, min_rate, extack); } @@ -711,6 +674,31 @@ bool mlx5_esw_qos_get_vport_rate(struct mlx5_vport *vport, u32 *max_rate, u32 *m return enabled; } +static int esw_qos_vport_update_parent(struct mlx5_vport *vport, struct mlx5_esw_sched_node *parent, + struct netlink_ext_ack *extack) +{ + struct mlx5_eswitch *esw = vport->dev->priv.eswitch; + struct mlx5_esw_sched_node *curr_parent; + int err; + + esw_assert_qos_lock_held(esw); + curr_parent = vport->qos.sched_node->parent; + parent = parent ?: esw->qos.node0; + if (curr_parent == parent) + return 0; + + esw_qos_vport_disable(vport, extack); + + err = esw_qos_vport_enable(vport, parent, extack); + if (err) { + if (esw_qos_vport_enable(vport, curr_parent, NULL)) + esw_warn(parent->esw->dev, "vport restore QoS failed (vport=%d)\n", + vport->vport); + } + + return err; +} + static u32 mlx5_esw_qos_lag_link_speed_get_locked(struct mlx5_core_dev *mdev) { struct ethtool_link_ksettings lksettings; @@ -972,23 +960,22 @@ int mlx5_esw_devlink_rate_node_del(struct devlink_rate *rate_node, void *priv, return 0; } -int mlx5_esw_qos_vport_update_node(struct mlx5_vport *vport, - struct mlx5_esw_sched_node *node, - struct netlink_ext_ack *extack) +int mlx5_esw_qos_vport_update_parent(struct mlx5_vport *vport, 
struct mlx5_esw_sched_node *parent, + struct netlink_ext_ack *extack) { struct mlx5_eswitch *esw = vport->dev->priv.eswitch; int err = 0; - if (node && node->esw != esw) { + if (parent && parent->esw != esw) { NL_SET_ERR_MSG_MOD(extack, "Cross E-Switch scheduling is not supported"); return -EOPNOTSUPP; } esw_qos_lock(esw); - if (!vport->qos.sched_node && node) - err = esw_qos_vport_enable(vport, node, 0, 0, extack); + if (!vport->qos.sched_node && parent) + err = mlx5_esw_qos_vport_enable(vport, SCHED_NODE_TYPE_VPORT, parent, 0, 0, extack); else if (vport->qos.sched_node) - err = esw_qos_vport_update_node(vport, node, extack); + err = esw_qos_vport_update_parent(vport, parent, extack); esw_qos_unlock(esw); return err; } @@ -1002,8 +989,8 @@ int mlx5_esw_devlink_rate_parent_set(struct devlink_rate *devlink_rate, struct mlx5_vport *vport = priv; if (!parent) - return mlx5_esw_qos_vport_update_node(vport, NULL, extack); + return mlx5_esw_qos_vport_update_parent(vport, NULL, extack); node = parent_priv; - return mlx5_esw_qos_vport_update_node(vport, node, extack); + return mlx5_esw_qos_vport_update_parent(vport, node, extack); } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h index 61a6fdd5c267..6eb8f6a648c8 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h @@ -13,6 +13,7 @@ int mlx5_esw_qos_set_vport_rate(struct mlx5_vport *evport, u32 max_rate, u32 min bool mlx5_esw_qos_get_vport_rate(struct mlx5_vport *vport, u32 *max_rate, u32 *min_rate); void mlx5_esw_qos_vport_disable(struct mlx5_vport *vport); +void mlx5_esw_qos_vport_qos_free(struct mlx5_vport *vport); u32 mlx5_esw_qos_vport_get_sched_elem_ix(const struct mlx5_vport *vport); struct mlx5_esw_sched_node *mlx5_esw_qos_vport_get_parent(const struct mlx5_vport *vport); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c index d0dab8f4e1a3..7fb8a3381f84 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c @@ -1061,8 +1061,7 @@ static void mlx5_eswitch_clear_vf_vports_info(struct mlx5_eswitch *esw) unsigned long i; mlx5_esw_for_each_vf_vport(esw, i, vport, esw->esw_funcs.num_vfs) { - kfree(vport->qos.sched_node); - memset(&vport->qos, 0, sizeof(vport->qos)); + mlx5_esw_qos_vport_qos_free(vport); memset(&vport->info, 0, sizeof(vport->info)); vport->info.link_state = MLX5_VPORT_ADMIN_STATE_AUTO; } @@ -1074,8 +1073,7 @@ static void mlx5_eswitch_clear_ec_vf_vports_info(struct mlx5_eswitch *esw) unsigned long i; mlx5_esw_for_each_ec_vf_vport(esw, i, vport, esw->esw_funcs.num_ec_vfs) { - kfree(vport->qos.sched_node); - memset(&vport->qos, 0, sizeof(vport->qos)); + mlx5_esw_qos_vport_qos_free(vport); memset(&vport->info, 0, sizeof(vport->info)); vport->info.link_state = MLX5_VPORT_ADMIN_STATE_AUTO; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h index 14dd42d44e6f..a83d41121db6 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h @@ -427,9 +427,8 @@ int mlx5_eswitch_set_vport_trust(struct mlx5_eswitch *esw, u16 vport_num, bool setting); int mlx5_eswitch_set_vport_rate(struct mlx5_eswitch *esw, u16 vport, u32 max_rate, u32 min_rate); -int mlx5_esw_qos_vport_update_node(struct mlx5_vport *vport, - struct mlx5_esw_sched_node *node, - struct netlink_ext_ack 
*extack); +int mlx5_esw_qos_vport_update_parent(struct mlx5_vport *vport, struct mlx5_esw_sched_node *node, + struct netlink_ext_ack *extack); int mlx5_eswitch_set_vepa(struct mlx5_eswitch *esw, u8 setting); int mlx5_eswitch_get_vepa(struct mlx5_eswitch *esw, u8 *setting); int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw,
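A note on the update-parent flow added above: esw_qos_vport_update_parent() detaches the vport from its current parent, attaches it under the new one, and tries to restore the previous parent if the attach fails. The following is a rough standalone sketch of that control flow only; the struct names and helpers are invented stand-ins, not driver code.

#include <stdio.h>

struct sched_node { const char *name; };
struct toy_vport { int id; struct sched_node *parent; };

static int toy_enable(struct toy_vport *v, struct sched_node *parent)
{
	if (!parent)
		return -1;          /* pretend the attach failed */
	v->parent = parent;
	return 0;
}

static void toy_disable(struct toy_vport *v)
{
	v->parent = NULL;
}

/* Same shape as the driver's update-parent path: disable, enable under the
 * new parent, and fall back to the old parent on failure. */
static int toy_update_parent(struct toy_vport *v, struct sched_node *new_parent)
{
	struct sched_node *curr = v->parent;
	int err;

	if (curr == new_parent)
		return 0;

	toy_disable(v);
	err = toy_enable(v, new_parent);
	if (err && toy_enable(v, curr))
		fprintf(stderr, "vport %d: QoS restore failed\n", v->id);
	return err;
}

int main(void)
{
	struct sched_node node0 = { "node0" }, group = { "group_a" };
	struct toy_vport vport = { 3, &node0 };

	printf("move to group_a: %d (parent=%s)\n",
	       toy_update_parent(&vport, &group), vport.parent->name);
	printf("move to NULL:    %d (parent=%s)\n",
	       toy_update_parent(&vport, NULL), vport.parent->name);
	return 0;
}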
From patchwork Thu Nov 7 19:43:53 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13867024
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, Andrew Lunn
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Dragos Tatulea, Tariq Toukan
Subject: [PATCH net-next 08/12] net/mlx5e: SHAMPO, Simplify UMR allocation for headers
Date: Thu, 7 Nov 2024 21:43:53 +0200
Message-ID: <20241107194357.683732-9-tariqt@nvidia.com>
In-Reply-To: <20241107194357.683732-1-tariqt@nvidia.com>
References: <20241107194357.683732-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Dragos Tatulea

Allocating page fragments for header data split is currently more complicated than it should be.
That's because the number of KSM entries allocated is not aligned to the number of headers per page. This leads to having leftovers in the next allocation which require additional accounting and needlessly complicated code. This patch aligns (down) the number of KSM entries in the UMR WQE to the number of headers per page by: 1) Aligning the max number of entries allocated per UMR WQE (max_ksm_entries) to MLX5E_SHAMPO_WQ_HEADER_PER_PAGE. 2) Aligning the total number of free headers to MLX5E_SHAMPO_WQ_HEADER_PER_PAGE. ... and then it drops the extra accounting code from mlx5e_build_shampo_hd_umr(). Although the number of entries allocated per UMR WQE is slightly smaller due to aligning down, no performance impact was observed. Signed-off-by: Dragos Tatulea Signed-off-by: Tariq Toukan --- drivers/net/ethernet/mellanox/mlx5/core/en.h | 1 - .../net/ethernet/mellanox/mlx5/core/en_rx.c | 29 ++++++++----------- 2 files changed, 12 insertions(+), 18 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h index 58f3df784ded..4449a57ba5b2 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h @@ -633,7 +633,6 @@ struct mlx5e_shampo_hd { u16 pi; u16 ci; __be32 key; - u64 last_addr; }; struct mlx5e_hw_gro_data { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c index d81083f4f316..e044e5d11f05 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c @@ -648,30 +648,26 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq, u16 ksm_entries, u16 index) { struct mlx5e_shampo_hd *shampo = rq->mpwqe.shampo; - u16 entries, pi, header_offset, err, wqe_bbs, new_entries; + u16 pi, header_offset, err, wqe_bbs; u32 lkey = rq->mdev->mlx5e_res.hw_objs.mkey; u16 page_index = shampo->curr_page_index; struct mlx5e_frag_page *frag_page; - u64 addr = shampo->last_addr; struct mlx5e_dma_info *dma_info; struct mlx5e_umr_wqe *umr_wqe; int headroom, i; + u64 addr = 0; headroom = rq->buff.headroom; - new_entries = ksm_entries - (shampo->pi & (MLX5_UMR_KSM_NUM_ENTRIES_ALIGNMENT - 1)); - entries = ALIGN(ksm_entries, MLX5_UMR_KSM_NUM_ENTRIES_ALIGNMENT); - wqe_bbs = MLX5E_KSM_UMR_WQEBBS(entries); + wqe_bbs = MLX5E_KSM_UMR_WQEBBS(ksm_entries); pi = mlx5e_icosq_get_next_pi(sq, wqe_bbs); umr_wqe = mlx5_wq_cyc_get_wqe(&sq->wq, pi); - build_ksm_umr(sq, umr_wqe, shampo->key, index, entries); + build_ksm_umr(sq, umr_wqe, shampo->key, index, ksm_entries); frag_page = &shampo->pages[page_index]; - for (i = 0; i < entries; i++, index++) { + WARN_ON_ONCE(ksm_entries & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1)); + for (i = 0; i < ksm_entries; i++, index++) { dma_info = &shampo->info[index]; - if (i >= ksm_entries || (index < shampo->pi && shampo->pi - index < - MLX5_UMR_KSM_NUM_ENTRIES_ALIGNMENT)) - goto update_ksm; header_offset = (index & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1)) << MLX5E_SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE; if (!(header_offset & (PAGE_SIZE - 1))) { @@ -691,7 +687,6 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq, dma_info->frag_page = frag_page; } -update_ksm: umr_wqe->inline_ksms[i] = (struct mlx5_ksm) { .key = cpu_to_be32(lkey), .va = cpu_to_be64(dma_info->addr + headroom), @@ -701,12 +696,11 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq, sq->db.wqe_info[pi] = (struct mlx5e_icosq_wqe_info) { .wqe_type = MLX5E_ICOSQ_WQE_SHAMPO_HD_UMR, .num_wqebbs = wqe_bbs, - 
.shampo.len = new_entries, + .shampo.len = ksm_entries, }; - shampo->pi = (shampo->pi + new_entries) & (shampo->hd_per_wq - 1); + shampo->pi = (shampo->pi + ksm_entries) & (shampo->hd_per_wq - 1); shampo->curr_page_index = page_index; - shampo->last_addr = addr; sq->pc += wqe_bbs; sq->doorbell_cseg = &umr_wqe->ctrl; @@ -731,7 +725,8 @@ static int mlx5e_alloc_rx_hd_mpwqe(struct mlx5e_rq *rq) struct mlx5e_icosq *sq = rq->icosq; int i, err, max_ksm_entries, len; - max_ksm_entries = MLX5E_MAX_KSM_PER_WQE(rq->mdev); + max_ksm_entries = ALIGN_DOWN(MLX5E_MAX_KSM_PER_WQE(rq->mdev), + MLX5E_SHAMPO_WQ_HEADER_PER_PAGE); ksm_entries = bitmap_find_window(shampo->bitmap, shampo->hd_per_wqe, shampo->hd_per_wq, shampo->pi); @@ -739,8 +734,8 @@ static int mlx5e_alloc_rx_hd_mpwqe(struct mlx5e_rq *rq) if (!ksm_entries) return 0; - ksm_entries += (shampo->pi & (MLX5_UMR_KSM_NUM_ENTRIES_ALIGNMENT - 1)); - index = ALIGN_DOWN(shampo->pi, MLX5_UMR_KSM_NUM_ENTRIES_ALIGNMENT); + /* pi is aligned to MLX5E_SHAMPO_WQ_HEADER_PER_PAGE */ + index = shampo->pi; entries_before = shampo->hd_per_wq - index; if (unlikely(entries_before < ksm_entries))
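For readers unfamiliar with the alignment trick in this patch, here is a small standalone C sketch (not driver code; the sizes are assumptions, e.g. 4 KiB pages and the 512 B header slot implied by a log size of 9) showing how rounding the per-UMR KSM budget down to a multiple of headers-per-page makes every UMR cover whole header pages:

#include <stdio.h>

#define ALIGN_DOWN(x, a)        ((x) - ((x) % (a)))       /* simplified, any divisor */
#define LOG_HDR_ENTRY_SIZE      9                         /* 512 B header slot (assumed) */
#define PAGE_SIZE_ASSUMED       4096
#define HEADERS_PER_PAGE        (PAGE_SIZE_ASSUMED >> LOG_HDR_ENTRY_SIZE)   /* 8 */

int main(void)
{
	int raw_max_ksm = 70;   /* hypothetical raw per-WQE KSM capacity */
	int max_ksm_entries = ALIGN_DOWN(raw_max_ksm, HEADERS_PER_PAGE);

	/* Each UMR now describes a whole number of header pages, so no
	 * partially filled page has to be carried over to the next UMR. */
	printf("raw %d -> aligned %d (= %d full pages of %d headers)\n",
	       raw_max_ksm, max_ksm_entries,
	       max_ksm_entries / HEADERS_PER_PAGE, HEADERS_PER_PAGE);
	return 0;
}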
From patchwork Thu Nov 7 19:43:54 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13867025
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, Andrew Lunn
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Dragos Tatulea, Tariq Toukan
Subject: [PATCH net-next 09/12] net/mlx5e: SHAMPO, Fix page_index calculation inconsistency
Date: Thu, 7 Nov 2024 21:43:54 +0200
Message-ID: <20241107194357.683732-10-tariqt@nvidia.com>
In-Reply-To: <20241107194357.683732-1-tariqt@nvidia.com>
References: <20241107194357.683732-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Dragos Tatulea

When calculating the index for the next frag page slot, the divisor is incorrect: it should be the number of pages per queue not the number of headers per
queue. This is currently harmless because frag pages are not used directly, but they are intermediated through the info array. But it needs to be fixed as an upcoming patch will get rid of the info array. This patch introduces a new pages per queue variable and plugs it in the formula. Now that this variable exists, additional code can be simplified in the SHAMPO initialization code. Signed-off-by: Dragos Tatulea Signed-off-by: Tariq Toukan --- drivers/net/ethernet/mellanox/mlx5/core/en.h | 1 + drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 8 +++----- drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 2 +- 3 files changed, 5 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h index 4449a57ba5b2..b4abb094f01a 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h @@ -629,6 +629,7 @@ struct mlx5e_shampo_hd { u16 curr_page_index; u32 hd_per_wq; u16 hd_per_wqe; + u16 pages_per_wq; unsigned long *bitmap; u16 pi; u16 ci; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index 59d7a0e28f24..3ca1ef1f39a5 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -767,8 +767,6 @@ static int mlx5_rq_shampo_alloc(struct mlx5_core_dev *mdev, u32 *pool_size, int node) { - void *wqc = MLX5_ADDR_OF(rqc, rqp->rqc, wq); - int wq_size; int err; if (!test_bit(MLX5E_RQ_STATE_SHAMPO, &rq->state)) @@ -793,9 +791,9 @@ static int mlx5_rq_shampo_alloc(struct mlx5_core_dev *mdev, cpu_to_be32(rq->mpwqe.shampo->mkey); rq->mpwqe.shampo->hd_per_wqe = mlx5e_shampo_hd_per_wqe(mdev, params, rqp); - wq_size = BIT(MLX5_GET(wq, wqc, log_wq_sz)); - *pool_size += (rq->mpwqe.shampo->hd_per_wqe * wq_size) / - MLX5E_SHAMPO_WQ_HEADER_PER_PAGE; + rq->mpwqe.shampo->pages_per_wq = + rq->mpwqe.shampo->hd_per_wq / MLX5E_SHAMPO_WQ_HEADER_PER_PAGE; + *pool_size += rq->mpwqe.shampo->pages_per_wq; return 0; err_hw_gro_data: diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c index e044e5d11f05..76a975667c77 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c @@ -671,7 +671,7 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq, header_offset = (index & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1)) << MLX5E_SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE; if (!(header_offset & (PAGE_SIZE - 1))) { - page_index = (page_index + 1) & (shampo->hd_per_wq - 1); + page_index = (page_index + 1) & (shampo->pages_per_wq - 1); frag_page = &shampo->pages[page_index]; err = mlx5e_page_alloc_fragmented(rq, frag_page);
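To make the divisor issue concrete: with the new pages_per_wq field, the page backing a header is header_index / headers-per-page, and the ring of pages wraps at pages_per_wq rather than hd_per_wq. A standalone C sketch with made-up sizes (not values read from the device):

#include <stdio.h>

int main(void)
{
	unsigned int hd_per_wq = 64;        /* header entries in the queue (assumed) */
	unsigned int headers_per_page = 8;  /* 4 KiB page / 512 B header slot (assumed) */
	unsigned int pages_per_wq = hd_per_wq / headers_per_page;   /* 8 pages */

	/* The frag page that backs a header is header_index / headers_per_page;
	 * wrapping the page ring must therefore use pages_per_wq, not hd_per_wq,
	 * or the index would run far past the end of the pages[] array. */
	for (unsigned int header_index = 0; header_index < hd_per_wq; header_index += 20)
		printf("header %2u -> page %u of %u\n",
		       header_index, header_index / headers_per_page, pages_per_wq);
	return 0;
}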
From patchwork Thu Nov 7 19:43:55 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13867026
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, Andrew Lunn
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Dragos Tatulea, Tariq Toukan
Subject: [PATCH net-next 10/12] net/mlx5e: SHAMPO, Change frag page setup order during allocation
Date: Thu, 7 Nov 2024 21:43:55 +0200
Message-ID: <20241107194357.683732-11-tariqt@nvidia.com>
In-Reply-To: <20241107194357.683732-1-tariqt@nvidia.com>
References: <20241107194357.683732-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
From: Dragos Tatulea

Now that the UMR allocation has been simplified, it is no longer possible to have a leftover page from a previous call to mlx5e_build_shampo_hd_umr(). This patch simplifies the code by switching the order of operations: first take the frag page and then increment the index. This is more straightforward and it also paves the way for dropping the info array.
Signed-off-by: Dragos Tatulea Signed-off-by: Tariq Toukan --- drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c index 76a975667c77..637069c1b988 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c @@ -651,7 +651,7 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq, u16 pi, header_offset, err, wqe_bbs; u32 lkey = rq->mdev->mlx5e_res.hw_objs.mkey; u16 page_index = shampo->curr_page_index; - struct mlx5e_frag_page *frag_page; + struct mlx5e_frag_page *frag_page = NULL; struct mlx5e_dma_info *dma_info; struct mlx5e_umr_wqe *umr_wqe; int headroom, i; @@ -663,16 +663,14 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq, umr_wqe = mlx5_wq_cyc_get_wqe(&sq->wq, pi); build_ksm_umr(sq, umr_wqe, shampo->key, index, ksm_entries); - frag_page = &shampo->pages[page_index]; - WARN_ON_ONCE(ksm_entries & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1)); for (i = 0; i < ksm_entries; i++, index++) { dma_info = &shampo->info[index]; header_offset = (index & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1)) << MLX5E_SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE; if (!(header_offset & (PAGE_SIZE - 1))) { - page_index = (page_index + 1) & (shampo->pages_per_wq - 1); frag_page = &shampo->pages[page_index]; + page_index = (page_index + 1) & (shampo->pages_per_wq - 1); err = mlx5e_page_alloc_fragmented(rq, frag_page); if (unlikely(err))
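The reordering above boils down to a common ring-buffer idiom: consume the slot at the current index, then advance the index for the next user. A minimal standalone sketch of that ordering (illustrative sizes, not driver code):

#include <stdio.h>

#define PAGES_PER_WQ 8   /* illustrative power-of-two ring size */

int main(void)
{
	const char *pages[PAGES_PER_WQ] = { "p0", "p1", "p2", "p3", "p4", "p5", "p6", "p7" };
	unsigned int page_index = 0;

	for (int alloc = 0; alloc < 3; alloc++) {
		/* take the current slot first ... */
		const char *frag_page = pages[page_index];

		/* ... then advance the ring index for the next allocation */
		page_index = (page_index + 1) & (PAGES_PER_WQ - 1);
		printf("allocation %d uses %s, next index is %u\n", alloc, frag_page, page_index);
	}
	return 0;
}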
From patchwork Thu Nov 7 19:43:56 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13867028
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, Andrew Lunn
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Dragos Tatulea, Tariq Toukan
Subject: [PATCH net-next 11/12] net/mlx5e: SHAMPO, Drop info array
Date: Thu, 7 Nov 2024 21:43:56 +0200
Message-ID: <20241107194357.683732-12-tariqt@nvidia.com>
In-Reply-To: <20241107194357.683732-1-tariqt@nvidia.com>
References: <20241107194357.683732-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
From: Dragos Tatulea

The info array is used to store a pointer to the dma address of the header and to the frag page. However, this array is not really required:
- The frag page can be calculated from the header index: frag page index = header index / headers per page.
- The dma address can be calculated through a formula: dma page address + header offset.
This series gets rid of the info array and uses the above formulas instead. The current_page_index was used in conjunction with the info array to store page fragment indices. This variable is dropped as well. There was no performance regression observed.

Signed-off-by: Dragos Tatulea Signed-off-by: Tariq Toukan --- drivers/net/ethernet/mellanox/mlx5/core/en.h | 3 +- .../net/ethernet/mellanox/mlx5/core/en_main.c | 7 +- .../net/ethernet/mellanox/mlx5/core/en_rx.c | 76 ++++++++++--------- 3 files changed, 42 insertions(+), 44 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h index b4abb094f01a..979fc56205e1 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h @@ -83,6 +83,7 @@ struct page_pool; #define MLX5E_SHAMPO_LOG_HEADER_ENTRY_SIZE (8) #define MLX5E_SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE (9) #define MLX5E_SHAMPO_WQ_HEADER_PER_PAGE (PAGE_SIZE >> MLX5E_SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE) +#define MLX5E_SHAMPO_LOG_WQ_HEADER_PER_PAGE (PAGE_SHIFT - MLX5E_SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE) #define MLX5E_SHAMPO_WQ_BASE_HEAD_ENTRY_SIZE (64) #define MLX5E_SHAMPO_WQ_RESRV_SIZE (64 * 1024) #define MLX5E_SHAMPO_WQ_BASE_RESRV_SIZE (4096) @@ -624,9 +625,7 @@ struct mlx5e_dma_info { struct mlx5e_shampo_hd { u32 mkey; - struct mlx5e_dma_info *info; struct mlx5e_frag_page *pages; - u16 curr_page_index; u32 hd_per_wq; u16 hd_per_wqe; u16 pages_per_wq; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index 3ca1ef1f39a5..2e27e9d6b820 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -350,19 +350,15 @@ static int mlx5e_rq_shampo_hd_info_alloc(struct mlx5e_rq *rq, int node) shampo->bitmap = bitmap_zalloc_node(shampo->hd_per_wq, GFP_KERNEL, node); - shampo->info = kvzalloc_node(array_size(shampo->hd_per_wq, - sizeof(*shampo->info)), - GFP_KERNEL, node); shampo->pages = kvzalloc_node(array_size(shampo->hd_per_wq, sizeof(*shampo->pages)), GFP_KERNEL, node); - if (!shampo->bitmap || !shampo->info || !shampo->pages) + if (!shampo->bitmap || !shampo->pages) goto err_nomem; return 0; err_nomem: - kvfree(shampo->info); kvfree(shampo->bitmap); kvfree(shampo->pages); @@ -372,7 +368,6 @@ static int mlx5e_rq_shampo_hd_info_alloc(struct mlx5e_rq *rq, int node) static void mlx5e_rq_shampo_hd_info_free(struct mlx5e_rq *rq) { kvfree(rq->mpwqe.shampo->bitmap); - kvfree(rq->mpwqe.shampo->info); kvfree(rq->mpwqe.shampo->pages); } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c index 637069c1b988..3de575875586 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c @@ -643,6 +643,21 @@ static void build_ksm_umr(struct mlx5e_icosq *sq, struct mlx5e_umr_wqe *umr_wqe, umr_wqe->uctrl.mkey_mask = cpu_to_be64(MLX5_MKEY_MASK_FREE); } +static struct mlx5e_frag_page *mlx5e_shampo_hd_to_frag_page(struct mlx5e_rq *rq, int header_index) +{ +
BUILD_BUG_ON(MLX5E_SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE > PAGE_SHIFT); + + return &rq->mpwqe.shampo->pages[header_index >> MLX5E_SHAMPO_LOG_WQ_HEADER_PER_PAGE]; +} + +static u64 mlx5e_shampo_hd_offset(int header_index) +{ + return (header_index & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1)) << + MLX5E_SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE; +} + +static void mlx5e_free_rx_shampo_hd_entry(struct mlx5e_rq *rq, u16 header_index); + static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq, struct mlx5e_icosq *sq, u16 ksm_entries, u16 index) @@ -650,9 +665,6 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq, struct mlx5e_shampo_hd *shampo = rq->mpwqe.shampo; u16 pi, header_offset, err, wqe_bbs; u32 lkey = rq->mdev->mlx5e_res.hw_objs.mkey; - u16 page_index = shampo->curr_page_index; - struct mlx5e_frag_page *frag_page = NULL; - struct mlx5e_dma_info *dma_info; struct mlx5e_umr_wqe *umr_wqe; int headroom, i; u64 addr = 0; @@ -665,29 +677,20 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq, WARN_ON_ONCE(ksm_entries & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1)); for (i = 0; i < ksm_entries; i++, index++) { - dma_info = &shampo->info[index]; - header_offset = (index & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1)) << - MLX5E_SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE; - if (!(header_offset & (PAGE_SIZE - 1))) { - frag_page = &shampo->pages[page_index]; - page_index = (page_index + 1) & (shampo->pages_per_wq - 1); + header_offset = mlx5e_shampo_hd_offset(index); + if (!header_offset) { + struct mlx5e_frag_page *frag_page = mlx5e_shampo_hd_to_frag_page(rq, index); err = mlx5e_page_alloc_fragmented(rq, frag_page); if (unlikely(err)) goto err_unmap; addr = page_pool_get_dma_addr(frag_page->page); - - dma_info->addr = addr; - dma_info->frag_page = frag_page; - } else { - dma_info->addr = addr + header_offset; - dma_info->frag_page = frag_page; } umr_wqe->inline_ksms[i] = (struct mlx5_ksm) { .key = cpu_to_be32(lkey), - .va = cpu_to_be64(dma_info->addr + headroom), + .va = cpu_to_be64(addr + header_offset + headroom), }; } @@ -698,20 +701,22 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq, }; shampo->pi = (shampo->pi + ksm_entries) & (shampo->hd_per_wq - 1); - shampo->curr_page_index = page_index; sq->pc += wqe_bbs; sq->doorbell_cseg = &umr_wqe->ctrl; return 0; err_unmap: - while (--i >= 0) { - dma_info = &shampo->info[--index]; - if (!(i & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1))) { - dma_info->addr = ALIGN_DOWN(dma_info->addr, PAGE_SIZE); - mlx5e_page_release_fragmented(rq, dma_info->frag_page); + while (--i) { + --index; + header_offset = mlx5e_shampo_hd_offset(index); + if (!header_offset) { + struct mlx5e_frag_page *frag_page = mlx5e_shampo_hd_to_frag_page(rq, index); + + mlx5e_page_release_fragmented(rq, frag_page); } } + rq->stats->buff_alloc_err++; return err; } @@ -844,13 +849,11 @@ static void mlx5e_free_rx_shampo_hd_entry(struct mlx5e_rq *rq, u16 header_index) { struct mlx5e_shampo_hd *shampo = rq->mpwqe.shampo; - u64 addr = shampo->info[header_index].addr; if (((header_index + 1) & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1)) == 0) { - struct mlx5e_dma_info *dma_info = &shampo->info[header_index]; + struct mlx5e_frag_page *frag_page = mlx5e_shampo_hd_to_frag_page(rq, header_index); - dma_info->addr = ALIGN_DOWN(addr, PAGE_SIZE); - mlx5e_page_release_fragmented(rq, dma_info->frag_page); + mlx5e_page_release_fragmented(rq, frag_page); } clear_bit(header_index, shampo->bitmap); } @@ -1204,10 +1207,10 @@ static void mlx5e_lro_update_hdr(struct sk_buff *skb, struct mlx5_cqe64 *cqe, static void 
*mlx5e_shampo_get_packet_hd(struct mlx5e_rq *rq, u16 header_index) { - struct mlx5e_dma_info *last_head = &rq->mpwqe.shampo->info[header_index]; - u16 head_offset = (last_head->addr & (PAGE_SIZE - 1)) + rq->buff.headroom; + struct mlx5e_frag_page *frag_page = mlx5e_shampo_hd_to_frag_page(rq, header_index); + u16 head_offset = mlx5e_shampo_hd_offset(header_index) + rq->buff.headroom; - return page_address(last_head->frag_page->page) + head_offset; + return page_address(frag_page->page) + head_offset; } static void mlx5e_shampo_update_ipv4_udp_hdr(struct mlx5e_rq *rq, struct iphdr *ipv4) @@ -2178,29 +2181,30 @@ static struct sk_buff * mlx5e_skb_from_cqe_shampo(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, struct mlx5_cqe64 *cqe, u16 header_index) { - struct mlx5e_dma_info *head = &rq->mpwqe.shampo->info[header_index]; - u16 head_offset = head->addr & (PAGE_SIZE - 1); + struct mlx5e_frag_page *frag_page = mlx5e_shampo_hd_to_frag_page(rq, header_index); + dma_addr_t page_dma_addr = page_pool_get_dma_addr(frag_page->page); + u16 head_offset = mlx5e_shampo_hd_offset(header_index); + dma_addr_t dma_addr = page_dma_addr + head_offset; u16 head_size = cqe->shampo.header_size; u16 rx_headroom = rq->buff.headroom; struct sk_buff *skb = NULL; void *hdr, *data; u32 frag_size; - hdr = page_address(head->frag_page->page) + head_offset; + hdr = page_address(frag_page->page) + head_offset; data = hdr + rx_headroom; frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + head_size); if (likely(frag_size <= BIT(MLX5E_SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE))) { /* build SKB around header */ - dma_sync_single_range_for_cpu(rq->pdev, head->addr, 0, frag_size, rq->buff.map_dir); + dma_sync_single_range_for_cpu(rq->pdev, dma_addr, 0, frag_size, rq->buff.map_dir); net_prefetchw(hdr); net_prefetch(data); skb = mlx5e_build_linear_skb(rq, hdr, frag_size, rx_headroom, head_size, 0); - if (unlikely(!skb)) return NULL; - head->frag_page->frags++; + frag_page->frags++; } else { /* allocate SKB and copy header for large header */ rq->stats->gro_large_hds++; @@ -2212,7 +2216,7 @@ mlx5e_skb_from_cqe_shampo(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, } net_prefetchw(skb->data); - mlx5e_copy_skb_header(rq, skb, head->frag_page->page, head->addr, + mlx5e_copy_skb_header(rq, skb, frag_page->page, dma_addr, head_offset + rx_headroom, rx_headroom, head_size); /* skb linear part was allocated with headlen and aligned to long */
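The two formulas from the commit message map directly onto the new helpers mlx5e_shampo_hd_to_frag_page() and mlx5e_shampo_hd_offset(). A standalone sketch of the arithmetic, assuming 4 KiB pages and the 512 B header slot implied by MLX5E_SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE = 9; the defines below are local stand-ins, not the driver's:

#include <stdio.h>

#define LOG_MAX_HEADER_ENTRY_SIZE  9
#define PAGE_SHIFT_ASSUMED         12
#define HEADERS_PER_PAGE           (1u << (PAGE_SHIFT_ASSUMED - LOG_MAX_HEADER_ENTRY_SIZE))
#define LOG_HEADERS_PER_PAGE       (PAGE_SHIFT_ASSUMED - LOG_MAX_HEADER_ENTRY_SIZE)

static unsigned int hd_to_frag_page_index(unsigned int header_index)
{
	return header_index >> LOG_HEADERS_PER_PAGE;   /* header index / headers per page */
}

static unsigned int hd_offset(unsigned int header_index)
{
	return (header_index & (HEADERS_PER_PAGE - 1)) << LOG_MAX_HEADER_ENTRY_SIZE;
}

int main(void)
{
	/* With these two formulas the per-header info[] entry (dma address plus
	 * frag page pointer) is redundant: page dma address + offset recovers it. */
	for (unsigned int header_index = 0; header_index < 20; header_index += 7)
		printf("header %2u -> page %u, offset %u bytes\n",
		       header_index, hd_to_frag_page_index(header_index), hd_offset(header_index));
	return 0;
}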
From: Tariq Toukan
To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn"
CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , Dragos Tatulea , Tariq Toukan
Subject: [PATCH net-next 12/12] net/mlx5e: SHAMPO, Rework header allocation loop
Date: Thu, 7 Nov 2024 21:43:57 +0200
Message-ID: <20241107194357.683732-13-tariqt@nvidia.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20241107194357.683732-1-tariqt@nvidia.com>
References: <20241107194357.683732-1-tariqt@nvidia.com>
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
From: Dragos Tatulea

The current loop code was based on the assumption that there can be
page leftovers from previous function calls.

This patch changes the allocation loop to make it clearer how pages
get allocated every MLX5E_SHAMPO_WQ_HEADER_PER_PAGE headers.

This change has no functional implications.
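[Editor's note: as an illustration of the per-page batching described above, here is a
minimal stand-alone sketch, not part of the patch. It mimics how one page is obtained
for every MLX5E_SHAMPO_WQ_HEADER_PER_PAGE header entries and how each header index maps
to a page and an offset, mirroring the shift/mask arithmetic of the
mlx5e_shampo_hd_to_frag_page()/mlx5e_shampo_hd_offset() helpers added earlier in the
series. The constant values and the hd_to_page()/hd_offset() names below are assumed
for the example only and are not the driver's definitions.]

/* User-space sketch; assumed (illustrative) constants, not the driver's. */
#include <stdio.h>

#define SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE 8  /* assume 256-byte header slots */
#define PAGE_SHIFT 12                       /* assume 4 KiB pages */
#define SHAMPO_LOG_WQ_HEADER_PER_PAGE (PAGE_SHIFT - SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE)
#define SHAMPO_WQ_HEADER_PER_PAGE (1 << SHAMPO_LOG_WQ_HEADER_PER_PAGE)

/* Which backing page holds this header entry. */
static unsigned int hd_to_page(unsigned int header_index)
{
	return header_index >> SHAMPO_LOG_WQ_HEADER_PER_PAGE;
}

/* Byte offset of the header entry inside that page. */
static unsigned int hd_offset(unsigned int header_index)
{
	return (header_index & (SHAMPO_WQ_HEADER_PER_PAGE - 1)) <<
		SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE;
}

int main(void)
{
	/* ksm_entries is a multiple of the per-page header count, as the WARN_ON_ONCE enforces. */
	unsigned int ksm_entries = 2 * SHAMPO_WQ_HEADER_PER_PAGE;
	unsigned int index = 0;

	/* Mirrors the reworked loop: one page "allocation" per inner batch of headers. */
	while (index < ksm_entries) {
		printf("allocate page %u\n", hd_to_page(index));
		for (int j = 0; j < SHAMPO_WQ_HEADER_PER_PAGE; j++, index++)
			printf("  header %3u -> page %u, offset %4u\n",
			       index, hd_to_page(index), hd_offset(index));
	}
	return 0;
}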
Signed-off-by: Dragos Tatulea
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/en_rx.c | 32 ++++++++++---------
 1 file changed, 17 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 3de575875586..1963bc5adb18 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -666,8 +666,7 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq,
 	u16 pi, header_offset, err, wqe_bbs;
 	u32 lkey = rq->mdev->mlx5e_res.hw_objs.mkey;
 	struct mlx5e_umr_wqe *umr_wqe;
-	int headroom, i;
-	u64 addr = 0;
+	int headroom, i = 0;
 	headroom = rq->buff.headroom;
 	wqe_bbs = MLX5E_KSM_UMR_WQEBBS(ksm_entries);
@@ -676,22 +675,25 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq,
 	build_ksm_umr(sq, umr_wqe, shampo->key, index, ksm_entries);
 	WARN_ON_ONCE(ksm_entries & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1));
-	for (i = 0; i < ksm_entries; i++, index++) {
-		header_offset = mlx5e_shampo_hd_offset(index);
-		if (!header_offset) {
-			struct mlx5e_frag_page *frag_page = mlx5e_shampo_hd_to_frag_page(rq, index);
+	while (i < ksm_entries) {
+		struct mlx5e_frag_page *frag_page = mlx5e_shampo_hd_to_frag_page(rq, index);
+		u64 addr;
+
+		err = mlx5e_page_alloc_fragmented(rq, frag_page);
+		if (unlikely(err))
+			goto err_unmap;
 
-			err = mlx5e_page_alloc_fragmented(rq, frag_page);
-			if (unlikely(err))
-				goto err_unmap;
-			addr = page_pool_get_dma_addr(frag_page->page);
-		}
+		addr = page_pool_get_dma_addr(frag_page->page);
 
-		umr_wqe->inline_ksms[i] = (struct mlx5_ksm) {
-			.key = cpu_to_be32(lkey),
-			.va = cpu_to_be64(addr + header_offset + headroom),
-		};
+		for (int j = 0; j < MLX5E_SHAMPO_WQ_HEADER_PER_PAGE; j++) {
+			header_offset = mlx5e_shampo_hd_offset(index++);
+
+			umr_wqe->inline_ksms[i++] = (struct mlx5_ksm) {
+				.key = cpu_to_be32(lkey),
+				.va = cpu_to_be64(addr + header_offset + headroom),
+			};
+		}
 	}
 
 	sq->db.wqe_info[pi] = (struct mlx5e_icosq_wqe_info) {