From patchwork Tue Dec 3 20:29:14 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13892906
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, "Andrew Lunn"
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Cosmin Ratiu, Tariq Toukan
Subject: [PATCH mlx5-next V4 01/11] net/mlx5: ifc: Reorganize mlx5_ifc_flow_table_context_bits
Date: Tue, 3 Dec 2024 22:29:14 +0200
Message-ID: <20241203202924.228440-2-tariqt@nvidia.com>
In-Reply-To: <20241203202924.228440-1-tariqt@nvidia.com>
References: <20241203202924.228440-1-tariqt@nvidia.com>
X-Mailer: git-send-email 2.44.0
X-Mailing-List: netdev@vger.kernel.org

From: Cosmin Ratiu

The nested union at the end is not in the same style as the rest of the
code, so un-nest it to make the style uniformly applied again.

Signed-off-by: Cosmin Ratiu
Reviewed-by: Saeed Mahameed
Signed-off-by: Tariq Toukan
---
 include/linux/mlx5/mlx5_ifc.h | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 4fbbcf35498b..f3650f989e68 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -6324,6 +6324,20 @@ struct mlx5_ifc_modify_other_hca_cap_in_bits {
 	struct mlx5_ifc_other_hca_cap_bits other_capability;
 };
 
+struct mlx5_ifc_sw_owner_icm_root_params_bits {
+	u8 sw_owner_icm_root_1[0x40];
+
+	u8 sw_owner_icm_root_0[0x40];
+};
+
+struct mlx5_ifc_rtc_params_bits {
+	u8 rtc_id_0[0x20];
+
+	u8 rtc_id_1[0x20];
+
+	u8 reserved_at_40[0x40];
+};
+
 struct mlx5_ifc_flow_table_context_bits {
 	u8 reformat_en[0x1];
 	u8 decap_en[0x1];
@@ -6342,20 +6356,10 @@ struct mlx5_ifc_flow_table_context_bits {
 	u8 lag_master_next_table_id[0x18];
 
 	u8 reserved_at_60[0x60];
 
-	union {
-		struct {
-			u8 sw_owner_icm_root_1[0x40];
-
-			u8 sw_owner_icm_root_0[0x40];
-		} sws;
-		struct {
-			u8 rtc_id_0[0x20];
-			u8 rtc_id_1[0x20];
-
-			u8 reserved_at_100[0x40];
-
-		} hws;
+	union {
+		struct mlx5_ifc_sw_owner_icm_root_params_bits sws;
+		struct mlx5_ifc_rtc_params_bits hws;
 	};
 };

From patchwork Tue Dec 3 20:29:15 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13892907
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, "Andrew Lunn"
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Yevgeny Kliteynik, Tariq Toukan
Subject: [PATCH mlx5-next V4 02/11] net/mlx5: Add ConnectX-8 device to ifc
Date: Tue, 3 Dec 2024 22:29:15 +0200
Message-ID: <20241203202924.228440-3-tariqt@nvidia.com>
In-Reply-To: <20241203202924.228440-1-tariqt@nvidia.com>
References: <20241203202924.228440-1-tariqt@nvidia.com>
X-Mailer: git-send-email 2.44.0
X-Mailing-List: netdev@vger.kernel.org

From: Yevgeny Kliteynik

In preparation for ConnectX-8 SWS support, add enum for the new device type.

Signed-off-by: Yevgeny Kliteynik
Signed-off-by: Tariq Toukan
---
 include/linux/mlx5/mlx5_ifc.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index f3650f989e68..bd9b1833408e 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -1590,6 +1590,7 @@ enum {
 	MLX5_STEERING_FORMAT_CONNECTX_5 = 0,
 	MLX5_STEERING_FORMAT_CONNECTX_6DX = 1,
 	MLX5_STEERING_FORMAT_CONNECTX_7 = 2,
+	MLX5_STEERING_FORMAT_CONNECTX_8 = 3,
 };
 
 struct mlx5_ifc_cmd_hca_cap_bits {
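
For illustration only (not part of the patch): a steering format value like this is the kind of capability a caller reads through MLX5_CAP_GEN(dev, steering_format_version) before enabling software steering on a device. The helper below is a hypothetical sketch of such a gate, including the new ConnectX-8 value.

	/* Hypothetical sketch: gate SWS support on the reported steering format. */
	static bool dr_domain_supports_sws(struct mlx5_core_dev *mdev)
	{
		switch (MLX5_CAP_GEN(mdev, steering_format_version)) {
		case MLX5_STEERING_FORMAT_CONNECTX_5:
		case MLX5_STEERING_FORMAT_CONNECTX_6DX:
		case MLX5_STEERING_FORMAT_CONNECTX_7:
		case MLX5_STEERING_FORMAT_CONNECTX_8:
			return true;
		default:
			return false;
		}
	}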

From patchwork Tue Dec 3 20:29:16 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13892908
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, "Andrew Lunn"
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Carolina Jubran, Cosmin Ratiu, Tariq Toukan
Subject: [PATCH mlx5-next V4 03/11] net/mlx5: Add support for new scheduling elements
Date: Tue, 3 Dec 2024 22:29:16 +0200
Message-ID: <20241203202924.228440-4-tariqt@nvidia.com>
In-Reply-To: <20241203202924.228440-1-tariqt@nvidia.com>
References: <20241203202924.228440-1-tariqt@nvidia.com>
X-Mailer: git-send-email 2.44.0
X-Mailing-List: netdev@vger.kernel.org

From: Carolina Jubran

Introduce new scheduling elements in the E-Switch QoS hierarchy to
enhance traffic management
capabilities.

This patch adds support for:

- Rate Limit scheduling elements: Enables bandwidth limitation across
  multiple nodes without a shared ancestor, providing a mechanism for
  more granular control of bandwidth allocation.

- Traffic Class Transmit Scheduling Arbiter (TSAR): Introduces the
  infrastructure for creating Traffic Class TSARs, allowing hierarchical
  arbitration based on traffic classes.

- Traffic Class Arbiter TSAR: Adds support for a TSAR capable of managing
  arbitration between multiple traffic classes, enabling improved
  bandwidth prioritization and traffic management.

No functional changes are introduced in this patch.

Signed-off-by: Carolina Jubran
Reviewed-by: Cosmin Ratiu
Signed-off-by: Tariq Toukan
---
 drivers/net/ethernet/mellanox/mlx5/core/rl.c |  4 ++++
 include/linux/mlx5/mlx5_ifc.h                | 14 +++++++++++---
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/rl.c b/drivers/net/ethernet/mellanox/mlx5/core/rl.c
index e393391966e0..39a209b9b684 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/rl.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/rl.c
@@ -56,6 +56,8 @@ bool mlx5_qos_tsar_type_supported(struct mlx5_core_dev *dev, int type, u8 hierar
 		return cap & TSAR_TYPE_CAP_MASK_ROUND_ROBIN;
 	case TSAR_ELEMENT_TSAR_TYPE_ETS:
 		return cap & TSAR_TYPE_CAP_MASK_ETS;
+	case TSAR_ELEMENT_TSAR_TYPE_TC_ARB:
+		return cap & TSAR_TYPE_CAP_MASK_TC_ARB;
 	}
 
 	return false;
@@ -87,6 +89,8 @@ bool mlx5_qos_element_type_supported(struct mlx5_core_dev *dev, int type, u8 hie
 		return cap & ELEMENT_TYPE_CAP_MASK_PARA_VPORT_TC;
 	case SCHEDULING_CONTEXT_ELEMENT_TYPE_QUEUE_GROUP:
 		return cap & ELEMENT_TYPE_CAP_MASK_QUEUE_GROUP;
+	case SCHEDULING_CONTEXT_ELEMENT_TYPE_RATE_LIMIT:
+		return cap & ELEMENT_TYPE_CAP_MASK_RATE_LIMIT;
 	}
 
 	return false;
diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index bd9b1833408e..8b202521b774 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -1103,7 +1103,8 @@ struct mlx5_ifc_qos_cap_bits {
 
 	u8 packet_pacing_min_rate[0x20];
 
-	u8 reserved_at_80[0x10];
+	u8 reserved_at_80[0xb];
+	u8 log_esw_max_rate_limit[0x5];
 	u8 packet_pacing_rate_table_size[0x10];
 
 	u8 esw_element_type[0x10];
@@ -4104,6 +4105,7 @@ enum {
 	SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT_TC = 0x2,
 	SCHEDULING_CONTEXT_ELEMENT_TYPE_PARA_VPORT_TC = 0x3,
 	SCHEDULING_CONTEXT_ELEMENT_TYPE_QUEUE_GROUP = 0x4,
+	SCHEDULING_CONTEXT_ELEMENT_TYPE_RATE_LIMIT = 0x5,
 };
 
 enum {
@@ -4112,22 +4114,26 @@ enum {
 	ELEMENT_TYPE_CAP_MASK_VPORT_TC = 1 << 2,
 	ELEMENT_TYPE_CAP_MASK_PARA_VPORT_TC = 1 << 3,
 	ELEMENT_TYPE_CAP_MASK_QUEUE_GROUP = 1 << 4,
+	ELEMENT_TYPE_CAP_MASK_RATE_LIMIT = 1 << 5,
 };
 
 enum {
 	TSAR_ELEMENT_TSAR_TYPE_DWRR = 0x0,
 	TSAR_ELEMENT_TSAR_TYPE_ROUND_ROBIN = 0x1,
 	TSAR_ELEMENT_TSAR_TYPE_ETS = 0x2,
+	TSAR_ELEMENT_TSAR_TYPE_TC_ARB = 0x3,
 };
 
 enum {
 	TSAR_TYPE_CAP_MASK_DWRR = 1 << 0,
 	TSAR_TYPE_CAP_MASK_ROUND_ROBIN = 1 << 1,
 	TSAR_TYPE_CAP_MASK_ETS = 1 << 2,
+	TSAR_TYPE_CAP_MASK_TC_ARB = 1 << 3,
 };
 
 struct mlx5_ifc_tsar_element_bits {
-	u8 reserved_at_0[0x8];
+	u8 traffic_class[0x4];
+	u8 reserved_at_4[0x4];
 	u8 tsar_type[0x8];
 
 	u8 reserved_at_10[0x10];
 };
@@ -4164,7 +4170,9 @@ struct mlx5_ifc_scheduling_context_bits {
 
 	u8 max_average_bw[0x20];
 
-	u8 reserved_at_e0[0x120];
+	u8 max_bw_obj_id[0x20];
+
+	u8 reserved_at_100[0x100];
 };
 
 struct mlx5_ifc_rqtc_bits {
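
For illustration only (not part of the series): the two helpers extended in rl.c above are the natural probe points for the new element and TSAR types. A hypothetical caller setting up an E-Switch traffic-class hierarchy might check them as sketched below; the wrapper names and the use of SCHEDULING_HIERARCHY_E_SWITCH are assumptions of this sketch.

	/* Hypothetical capability probes, for illustration only. */
	static bool esw_qos_supports_tc_arbitration(struct mlx5_core_dev *dev)
	{
		return mlx5_qos_tsar_type_supported(dev, TSAR_ELEMENT_TSAR_TYPE_TC_ARB,
						    SCHEDULING_HIERARCHY_E_SWITCH);
	}

	static bool esw_qos_supports_rate_limit_elements(struct mlx5_core_dev *dev)
	{
		return mlx5_qos_element_type_supported(dev,
						       SCHEDULING_CONTEXT_ELEMENT_TYPE_RATE_LIMIT,
						       SCHEDULING_HIERARCHY_E_SWITCH);
	}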

From patchwork Tue Dec 3 20:29:17 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13892909
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, "Andrew Lunn"
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Cosmin Ratiu, Tariq Toukan
Subject: [PATCH net-next V4 04/11] net/mlx5: qos: Add ifc support for cross-esw scheduling
Date: Tue, 3 Dec 2024 22:29:17 +0200
Message-ID: <20241203202924.228440-5-tariqt@nvidia.com>
In-Reply-To: <20241203202924.228440-1-tariqt@nvidia.com>
References: <20241203202924.228440-1-tariqt@nvidia.com>
X-Mailer: git-send-email 2.44.0
X-Mailing-List: netdev@vger.kernel.org

From: Cosmin Ratiu

This adds the capability bit and the vport element fields related to
cross-esw scheduling.

Signed-off-by: Cosmin Ratiu
Signed-off-by: Tariq Toukan
---
 include/linux/mlx5/mlx5_ifc.h | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 8b202521b774..5451ff1d4356 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -1095,7 +1095,9 @@ struct mlx5_ifc_qos_cap_bits {
 	u8 log_esw_max_sched_depth[0x4];
 	u8 reserved_at_10[0x10];
 
-	u8 reserved_at_20[0xb];
+	u8 reserved_at_20[0x9];
+	u8 esw_cross_esw_sched[0x1];
+	u8 reserved_at_2a[0x1];
 	u8 log_max_qos_nic_queue_group[0x5];
 	u8 reserved_at_30[0x10];
 
@@ -4139,13 +4141,16 @@ struct mlx5_ifc_tsar_element_bits {
 };
 
 struct mlx5_ifc_vport_element_bits {
-	u8 reserved_at_0[0x10];
+	u8 reserved_at_0[0x4];
+	u8 eswitch_owner_vhca_id_valid[0x1];
+	u8 eswitch_owner_vhca_id[0xb];
 	u8 vport_number[0x10];
 };
 
 struct mlx5_ifc_vport_tc_element_bits {
 	u8 traffic_class[0x4];
-	u8 reserved_at_4[0xc];
+	u8 eswitch_owner_vhca_id_valid[0x1];
+	u8 eswitch_owner_vhca_id[0xb];
 	u8 vport_number[0x10];
 };
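
For illustration only (not part of the patch): a hypothetical sketch of how the new cross-eswitch fields could be programmed into a vport scheduling element through the usual MLX5_SET accessors. The function and variable names are made up, and the use of the scheduling context's element_attributes area is an assumption of the sketch.

	/* Hypothetical sketch: mark a vport element as owned by another eswitch. */
	static void esw_qos_fill_cross_esw_vport(void *sched_ctx, u16 vport_num,
						 u16 owner_vhca_id)
	{
		void *attr = MLX5_ADDR_OF(scheduling_context, sched_ctx,
					  element_attributes);

		MLX5_SET(vport_element, attr, vport_number, vport_num);
		MLX5_SET(vport_element, attr, eswitch_owner_vhca_id, owner_vhca_id);
		MLX5_SET(vport_element, attr, eswitch_owner_vhca_id_valid, 1);
	}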

From patchwork Tue Dec 3 20:29:18 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13892911
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, "Andrew Lunn"
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Itamar Gozlan, Yevgeny Kliteynik, Tariq Toukan
Subject: [PATCH net-next V4 05/11] net/mlx5: DR, Expand SWS STE callbacks and consolidate common structs
Date: Tue, 3 Dec 2024 22:29:18 +0200
Message-ID: <20241203202924.228440-6-tariqt@nvidia.com>
In-Reply-To: <20241203202924.228440-1-tariqt@nvidia.com>
References: <20241203202924.228440-1-tariqt@nvidia.com>
X-Mailer: git-send-email 2.44.0
X-Mailing-List: netdev@vger.kernel.org

From: Itamar Gozlan

Expand SWS STE callbacks to support ConnectX-8 hardware.
Move common enums and structures to a shared header file.

Signed-off-by: Itamar Gozlan
Signed-off-by: Yevgeny Kliteynik
Signed-off-by: Tariq Toukan
---
 .../mellanox/mlx5/core/steering/sws/dr_ste.c  |   4 +-
 .../mellanox/mlx5/core/steering/sws/dr_ste.h  |  18 +-
 .../mlx5/core/steering/sws/dr_ste_v0.c        |   6 +-
 .../mlx5/core/steering/sws/dr_ste_v1.c        | 207 ++++--------------
 .../mlx5/core/steering/sws/dr_ste_v1.h        | 147 ++++++++++++-
 .../mlx5/core/steering/sws/dr_ste_v2.c        | 169 +-------------
 .../mlx5/core/steering/sws/dr_ste_v2.h        | 168 ++++++++++++++
 7 files changed, 377 insertions(+), 342 deletions(-)
 create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v2.h

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste.c
index e94fbb015efa..01ba8eae2983 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste.c
@@ -555,7 +555,7 @@ void mlx5dr_ste_set_actions_tx(struct mlx5dr_ste_ctx *ste_ctx,
 			       struct mlx5dr_ste_actions_attr *attr,
 			       u32 *added_stes)
 {
-	ste_ctx->set_actions_tx(dmn, action_type_set, ste_ctx->actions_caps,
+	ste_ctx->set_actions_tx(ste_ctx, dmn, action_type_set, ste_ctx->actions_caps,
 				hw_ste_arr, attr, added_stes);
 }
 
@@ -566,7 +566,7 @@ void mlx5dr_ste_set_actions_rx(struct mlx5dr_ste_ctx *ste_ctx,
 			       struct mlx5dr_ste_actions_attr *attr,
 			       u32 *added_stes)
 {
-	ste_ctx->set_actions_rx(dmn, action_type_set, ste_ctx->actions_caps,
+	ste_ctx->set_actions_rx(ste_ctx, dmn, action_type_set, ste_ctx->actions_caps,
 				hw_ste_arr, attr, added_stes);
 }
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste.h
index 54a6619c3ecb..b6ec8d30d990 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste.h
@@ -160,13 +160,15 @@ struct mlx5dr_ste_ctx {
 
 	/* Actions */
 	u32 actions_caps;
-	void (*set_actions_rx)(struct mlx5dr_domain *dmn,
+	void (*set_actions_rx)(struct mlx5dr_ste_ctx *ste_ctx,
+			       struct mlx5dr_domain *dmn,
 			       u8 *action_type_set,
 			       u32 actions_caps,
 			       u8 *hw_ste_arr,
 			       struct mlx5dr_ste_actions_attr *attr,
 			       u32 *added_stes);
-	void (*set_actions_tx)(struct mlx5dr_domain *dmn,
+	void (*set_actions_tx)(struct mlx5dr_ste_ctx *ste_ctx,
+			       struct mlx5dr_domain *dmn,
 			       u8 *action_type_set,
 			       u32 actions_caps,
 			       u8 *hw_ste_arr,
@@ -197,7 +199,17 @@ struct mlx5dr_ste_ctx {
 			       u16 *used_hw_action_num);
 	int (*alloc_modify_hdr_chunk)(struct mlx5dr_action *action);
 	void (*dealloc_modify_hdr_chunk)(struct mlx5dr_action *action);
-
+	/* Actions bit set */
+	void (*set_encap)(u8 *hw_ste_p, u8 *d_action,
+			  u32 reformat_id, int size);
+	void (*set_push_vlan)(u8 *ste, u8 *d_action,
+			      u32 vlan_hdr);
+	void (*set_pop_vlan)(u8 *hw_ste_p, u8 *s_action,
+			     u8 vlans_num);
+	void (*set_rx_decap)(u8 *hw_ste_p, u8 *s_action);
+	void (*set_encap_l3)(u8 *hw_ste_p, u8 *frst_s_action,
+			     u8 *scnd_d_action, u32 reformat_id,
+			     int size);
 	/* Send */
 	void (*prepare_for_postsend)(u8 *hw_ste_p, u32 ste_size);
 };
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v0.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v0.c
index e9f6c7ed7a7b..42536bee55e2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v0.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v0.c
@@ -406,7 +406,8 @@ static void dr_ste_v0_arr_init_next(u8 **last_ste,
 }
 
 static void
-dr_ste_v0_set_actions_tx(struct mlx5dr_domain *dmn,
+dr_ste_v0_set_actions_tx(struct mlx5dr_ste_ctx *ste_ctx,
+			 struct mlx5dr_domain *dmn,
 			 u8 *action_type_set,
 			 u32 actions_caps,
 			 u8 *last_ste,
@@ -476,7 +477,8 @@ dr_ste_v0_set_actions_tx(struct mlx5dr_domain *dmn,
 }
 
 static void
-dr_ste_v0_set_actions_rx(struct mlx5dr_domain *dmn,
+dr_ste_v0_set_actions_rx(struct mlx5dr_ste_ctx *ste_ctx,
+			 struct mlx5dr_domain *dmn,
 			 u8 *action_type_set,
 			 u32 actions_caps,
 			 u8 *last_ste,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v1.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v1.c
index 1d49704b9542..7f83d77c43ef 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v1.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v1.c
@@ -5,136 +5,6 @@
 #include "mlx5_ifc_dr_ste_v1.h"
 #include "dr_ste_v1.h"
 
-#define DR_STE_CALC_DFNR_TYPE(lookup_type, inner) \
-	((inner) ? DR_STE_V1_LU_TYPE_##lookup_type##_I : \
-		   DR_STE_V1_LU_TYPE_##lookup_type##_O)
-
-enum dr_ste_v1_entry_format {
-	DR_STE_V1_TYPE_BWC_BYTE = 0x0,
-	DR_STE_V1_TYPE_BWC_DW = 0x1,
-	DR_STE_V1_TYPE_MATCH = 0x2,
-	DR_STE_V1_TYPE_MATCH_RANGES = 0x7,
-};
-
-/* Lookup type is built from 2B: [ Definer mode 1B ][ Definer index 1B ] */
-enum {
-	DR_STE_V1_LU_TYPE_NOP = 0x0000,
-	DR_STE_V1_LU_TYPE_ETHL2_TNL = 0x0002,
-	DR_STE_V1_LU_TYPE_IBL3_EXT = 0x0102,
-	DR_STE_V1_LU_TYPE_ETHL2_O = 0x0003,
-	DR_STE_V1_LU_TYPE_IBL4 = 0x0103,
-	DR_STE_V1_LU_TYPE_ETHL2_I = 0x0004,
-	DR_STE_V1_LU_TYPE_SRC_QP_GVMI = 0x0104,
-	DR_STE_V1_LU_TYPE_ETHL2_SRC_O = 0x0005,
-	DR_STE_V1_LU_TYPE_ETHL2_HEADERS_O = 0x0105,
-	DR_STE_V1_LU_TYPE_ETHL2_SRC_I = 0x0006,
-	DR_STE_V1_LU_TYPE_ETHL2_HEADERS_I = 0x0106,
-	DR_STE_V1_LU_TYPE_ETHL3_IPV4_5_TUPLE_O = 0x0007,
-	DR_STE_V1_LU_TYPE_IPV6_DES_O = 0x0107,
-	DR_STE_V1_LU_TYPE_ETHL3_IPV4_5_TUPLE_I = 0x0008,
-	DR_STE_V1_LU_TYPE_IPV6_DES_I = 0x0108,
-	DR_STE_V1_LU_TYPE_ETHL4_O = 0x0009,
-	DR_STE_V1_LU_TYPE_IPV6_SRC_O = 0x0109,
-	DR_STE_V1_LU_TYPE_ETHL4_I = 0x000a,
-	DR_STE_V1_LU_TYPE_IPV6_SRC_I = 0x010a,
-	DR_STE_V1_LU_TYPE_ETHL2_SRC_DST_O = 0x000b,
-	DR_STE_V1_LU_TYPE_MPLS_O = 0x010b,
-	DR_STE_V1_LU_TYPE_ETHL2_SRC_DST_I = 0x000c,
-	DR_STE_V1_LU_TYPE_MPLS_I = 0x010c,
-	DR_STE_V1_LU_TYPE_ETHL3_IPV4_MISC_O = 0x000d,
-	DR_STE_V1_LU_TYPE_GRE = 0x010d,
-	DR_STE_V1_LU_TYPE_FLEX_PARSER_TNL_HEADER = 0x000e,
-	DR_STE_V1_LU_TYPE_GENERAL_PURPOSE = 0x010e,
-	DR_STE_V1_LU_TYPE_ETHL3_IPV4_MISC_I = 0x000f,
-	DR_STE_V1_LU_TYPE_STEERING_REGISTERS_0 = 0x010f,
-	DR_STE_V1_LU_TYPE_STEERING_REGISTERS_1 = 0x0110,
-	DR_STE_V1_LU_TYPE_FLEX_PARSER_OK = 0x0011,
-	DR_STE_V1_LU_TYPE_FLEX_PARSER_0 = 0x0111,
-	DR_STE_V1_LU_TYPE_FLEX_PARSER_1 = 0x0112,
-	DR_STE_V1_LU_TYPE_ETHL4_MISC_O = 0x0113,
-	DR_STE_V1_LU_TYPE_ETHL4_MISC_I = 0x0114,
-	DR_STE_V1_LU_TYPE_INVALID = 0x00ff,
-	DR_STE_V1_LU_TYPE_DONT_CARE = MLX5DR_STE_LU_TYPE_DONT_CARE,
-};
-
-enum dr_ste_v1_header_anchors {
-	DR_STE_HEADER_ANCHOR_START_OUTER = 0x00,
-	DR_STE_HEADER_ANCHOR_1ST_VLAN = 0x02,
-	DR_STE_HEADER_ANCHOR_IPV6_IPV4 = 0x07,
-	DR_STE_HEADER_ANCHOR_INNER_MAC = 0x13,
-	DR_STE_HEADER_ANCHOR_INNER_IPV6_IPV4 = 0x19,
-};
-
-enum dr_ste_v1_action_size {
-	DR_STE_ACTION_SINGLE_SZ = 4,
-	DR_STE_ACTION_DOUBLE_SZ = 8,
-	DR_STE_ACTION_TRIPLE_SZ = 12,
-};
-
-enum dr_ste_v1_action_insert_ptr_attr {
-	DR_STE_V1_ACTION_INSERT_PTR_ATTR_NONE = 0, /* Regular push header (e.g. push vlan) */
-	DR_STE_V1_ACTION_INSERT_PTR_ATTR_ENCAP = 1, /* Encapsulation / Tunneling */
-	DR_STE_V1_ACTION_INSERT_PTR_ATTR_ESP = 2, /* IPsec */
-};
-
-enum dr_ste_v1_action_id {
-	DR_STE_V1_ACTION_ID_NOP = 0x00,
-	DR_STE_V1_ACTION_ID_COPY = 0x05,
-	DR_STE_V1_ACTION_ID_SET = 0x06,
-	DR_STE_V1_ACTION_ID_ADD = 0x07,
-	DR_STE_V1_ACTION_ID_REMOVE_BY_SIZE = 0x08,
-	DR_STE_V1_ACTION_ID_REMOVE_HEADER_TO_HEADER = 0x09,
-	DR_STE_V1_ACTION_ID_INSERT_INLINE = 0x0a,
-	DR_STE_V1_ACTION_ID_INSERT_POINTER = 0x0b,
-	DR_STE_V1_ACTION_ID_FLOW_TAG = 0x0c,
-	DR_STE_V1_ACTION_ID_QUEUE_ID_SEL = 0x0d,
-	DR_STE_V1_ACTION_ID_ACCELERATED_LIST = 0x0e,
-	DR_STE_V1_ACTION_ID_MODIFY_LIST = 0x0f,
-	DR_STE_V1_ACTION_ID_ASO = 0x12,
-	DR_STE_V1_ACTION_ID_TRAILER = 0x13,
-	DR_STE_V1_ACTION_ID_COUNTER_ID = 0x14,
-	DR_STE_V1_ACTION_ID_MAX = 0x21,
-	/* use for special cases */
-	DR_STE_V1_ACTION_ID_SPECIAL_ENCAP_L3 = 0x22,
-};
-
-enum {
-	DR_STE_V1_ACTION_MDFY_FLD_L2_OUT_0 = 0x00,
-	DR_STE_V1_ACTION_MDFY_FLD_L2_OUT_1 = 0x01,
-	DR_STE_V1_ACTION_MDFY_FLD_L2_OUT_2 = 0x02,
-	DR_STE_V1_ACTION_MDFY_FLD_SRC_L2_OUT_0 = 0x08,
-	DR_STE_V1_ACTION_MDFY_FLD_SRC_L2_OUT_1 = 0x09,
-	DR_STE_V1_ACTION_MDFY_FLD_L3_OUT_0 = 0x0e,
-	DR_STE_V1_ACTION_MDFY_FLD_L4_OUT_0 = 0x18,
-	DR_STE_V1_ACTION_MDFY_FLD_L4_OUT_1 = 0x19,
-	DR_STE_V1_ACTION_MDFY_FLD_IPV4_OUT_0 = 0x40,
-	DR_STE_V1_ACTION_MDFY_FLD_IPV4_OUT_1 = 0x41,
-	DR_STE_V1_ACTION_MDFY_FLD_IPV6_DST_OUT_0 = 0x44,
-	DR_STE_V1_ACTION_MDFY_FLD_IPV6_DST_OUT_1 = 0x45,
-	DR_STE_V1_ACTION_MDFY_FLD_IPV6_DST_OUT_2 = 0x46,
-	DR_STE_V1_ACTION_MDFY_FLD_IPV6_DST_OUT_3 = 0x47,
-	DR_STE_V1_ACTION_MDFY_FLD_IPV6_SRC_OUT_0 = 0x4c,
-	DR_STE_V1_ACTION_MDFY_FLD_IPV6_SRC_OUT_1 = 0x4d,
-	DR_STE_V1_ACTION_MDFY_FLD_IPV6_SRC_OUT_2 = 0x4e,
-	DR_STE_V1_ACTION_MDFY_FLD_IPV6_SRC_OUT_3 = 0x4f,
-	DR_STE_V1_ACTION_MDFY_FLD_TCP_MISC_0 = 0x5e,
-	DR_STE_V1_ACTION_MDFY_FLD_TCP_MISC_1 = 0x5f,
-	DR_STE_V1_ACTION_MDFY_FLD_CFG_HDR_0_0 = 0x6f,
-	DR_STE_V1_ACTION_MDFY_FLD_CFG_HDR_0_1 = 0x70,
-	DR_STE_V1_ACTION_MDFY_FLD_METADATA_2_CQE = 0x7b,
-	DR_STE_V1_ACTION_MDFY_FLD_GNRL_PURPOSE = 0x7c,
-	DR_STE_V1_ACTION_MDFY_FLD_REGISTER_2_0 = 0x8c,
-	DR_STE_V1_ACTION_MDFY_FLD_REGISTER_2_1 = 0x8d,
-	DR_STE_V1_ACTION_MDFY_FLD_REGISTER_1_0 = 0x8e,
-	DR_STE_V1_ACTION_MDFY_FLD_REGISTER_1_1 = 0x8f,
-	DR_STE_V1_ACTION_MDFY_FLD_REGISTER_0_0 = 0x90,
-	DR_STE_V1_ACTION_MDFY_FLD_REGISTER_0_1 = 0x91,
-};
-
-enum dr_ste_v1_aso_ctx_type {
-	DR_STE_V1_ASO_CTX_TYPE_POLICERS = 0x2,
-};
-
 static const struct mlx5dr_ste_action_modify_field dr_ste_v1_action_modify_field_arr[] = {
 	[MLX5_ACTION_IN_FIELD_OUT_SMAC_47_16] = {
 		.hw_field = DR_STE_V1_ACTION_MDFY_FLD_SRC_L2_OUT_0, .start = 0, .end = 31,
@@ -379,13 +249,12 @@ static void dr_ste_v1_set_counter_id(u8 *hw_ste_p, u32 ctr_id)
 	MLX5_SET(ste_match_bwc_v1, hw_ste_p, counter_id, ctr_id);
 }
 
-static void dr_ste_v1_set_reparse(u8 *hw_ste_p)
+void dr_ste_v1_set_reparse(u8 *hw_ste_p)
 {
 	MLX5_SET(ste_match_bwc_v1, hw_ste_p, reparse, 1);
 }
 
-static void dr_ste_v1_set_encap(u8 *hw_ste_p, u8 *d_action,
-				u32 reformat_id, int size)
+void dr_ste_v1_set_encap(u8 *hw_ste_p, u8 *d_action, u32 reformat_id, int size)
 {
 	MLX5_SET(ste_double_action_insert_with_ptr_v1, d_action, action_id,
 		 DR_STE_V1_ACTION_ID_INSERT_POINTER);
@@ -432,8 +301,7 @@ static void dr_ste_v1_set_remove_hdr(u8 *hw_ste_p, u8 *s_action,
 	dr_ste_v1_set_reparse(hw_ste_p);
 }
 
-static void dr_ste_v1_set_push_vlan(u8 *hw_ste_p, u8 *d_action,
-				    u32 vlan_hdr)
+void dr_ste_v1_set_push_vlan(u8 *hw_ste_p, u8 *d_action, u32 vlan_hdr)
 {
 	MLX5_SET(ste_double_action_insert_with_inline_v1,
d_action, action_id, DR_STE_V1_ACTION_ID_INSERT_INLINE); @@ -446,7 +314,7 @@ static void dr_ste_v1_set_push_vlan(u8 *hw_ste_p, u8 *d_action, dr_ste_v1_set_reparse(hw_ste_p); } -static void dr_ste_v1_set_pop_vlan(u8 *hw_ste_p, u8 *s_action, u8 vlans_num) +void dr_ste_v1_set_pop_vlan(u8 *hw_ste_p, u8 *s_action, u8 vlans_num) { MLX5_SET(ste_single_action_remove_header_size_v1, s_action, action_id, DR_STE_V1_ACTION_ID_REMOVE_BY_SIZE); @@ -459,11 +327,8 @@ static void dr_ste_v1_set_pop_vlan(u8 *hw_ste_p, u8 *s_action, u8 vlans_num) dr_ste_v1_set_reparse(hw_ste_p); } -static void dr_ste_v1_set_encap_l3(u8 *hw_ste_p, - u8 *frst_s_action, - u8 *scnd_d_action, - u32 reformat_id, - int size) +void dr_ste_v1_set_encap_l3(u8 *hw_ste_p, u8 *frst_s_action, u8 *scnd_d_action, + u32 reformat_id, int size) { /* Remove L2 headers */ MLX5_SET(ste_single_action_remove_header_v1, frst_s_action, action_id, @@ -483,7 +348,7 @@ static void dr_ste_v1_set_encap_l3(u8 *hw_ste_p, dr_ste_v1_set_reparse(hw_ste_p); } -static void dr_ste_v1_set_rx_decap(u8 *hw_ste_p, u8 *s_action) +void dr_ste_v1_set_rx_decap(u8 *hw_ste_p, u8 *s_action) { MLX5_SET(ste_single_action_remove_header_v1, s_action, action_id, DR_STE_V1_ACTION_ID_REMOVE_HEADER_TO_HEADER); @@ -620,7 +485,8 @@ static void dr_ste_v1_arr_init_next_match_range(u8 **last_ste, dr_ste_v1_set_entry_type(*last_ste, DR_STE_V1_TYPE_MATCH_RANGES); } -void dr_ste_v1_set_actions_tx(struct mlx5dr_domain *dmn, +void dr_ste_v1_set_actions_tx(struct mlx5dr_ste_ctx *ste_ctx, + struct mlx5dr_domain *dmn, u8 *action_type_set, u32 actions_caps, u8 *last_ste, @@ -640,7 +506,7 @@ void dr_ste_v1_set_actions_tx(struct mlx5dr_domain *dmn, last_ste, action); action_sz = DR_STE_ACTION_TRIPLE_SZ; } - dr_ste_v1_set_pop_vlan(last_ste, action, attr->vlans.count); + ste_ctx->set_pop_vlan(last_ste, action, attr->vlans.count); action_sz -= DR_STE_ACTION_SINGLE_SZ; action += DR_STE_ACTION_SINGLE_SZ; @@ -677,8 +543,8 @@ void dr_ste_v1_set_actions_tx(struct mlx5dr_domain *dmn, action_sz = DR_STE_ACTION_TRIPLE_SZ; allow_encap = true; } - dr_ste_v1_set_push_vlan(last_ste, action, - attr->vlans.headers[i]); + ste_ctx->set_push_vlan(last_ste, action, + attr->vlans.headers[i]); action_sz -= DR_STE_ACTION_DOUBLE_SZ; action += DR_STE_ACTION_DOUBLE_SZ; } @@ -691,9 +557,9 @@ void dr_ste_v1_set_actions_tx(struct mlx5dr_domain *dmn, action_sz = DR_STE_ACTION_TRIPLE_SZ; allow_encap = true; } - dr_ste_v1_set_encap(last_ste, action, - attr->reformat.id, - attr->reformat.size); + ste_ctx->set_encap(last_ste, action, + attr->reformat.id, + attr->reformat.size); action_sz -= DR_STE_ACTION_DOUBLE_SZ; action += DR_STE_ACTION_DOUBLE_SZ; } else if (action_type_set[DR_ACTION_TYP_L2_TO_TNL_L3]) { @@ -706,10 +572,10 @@ void dr_ste_v1_set_actions_tx(struct mlx5dr_domain *dmn, } d_action = action + DR_STE_ACTION_SINGLE_SZ; - dr_ste_v1_set_encap_l3(last_ste, - action, d_action, - attr->reformat.id, - attr->reformat.size); + ste_ctx->set_encap_l3(last_ste, + action, d_action, + attr->reformat.id, + attr->reformat.size); action_sz -= DR_STE_ACTION_TRIPLE_SZ; action += DR_STE_ACTION_TRIPLE_SZ; } else if (action_type_set[DR_ACTION_TYP_INSERT_HDR]) { @@ -776,7 +642,8 @@ void dr_ste_v1_set_actions_tx(struct mlx5dr_domain *dmn, dr_ste_v1_set_hit_addr(last_ste, attr->final_icm_addr, 1); } -void dr_ste_v1_set_actions_rx(struct mlx5dr_domain *dmn, +void dr_ste_v1_set_actions_rx(struct mlx5dr_ste_ctx *ste_ctx, + struct mlx5dr_domain *dmn, u8 *action_type_set, u32 actions_caps, u8 *last_ste, @@ -799,7 +666,7 @@ void 
dr_ste_v1_set_actions_rx(struct mlx5dr_domain *dmn, allow_modify_hdr = false; allow_ctr = false; } else if (action_type_set[DR_ACTION_TYP_TNL_L2_TO_L2]) { - dr_ste_v1_set_rx_decap(last_ste, action); + ste_ctx->set_rx_decap(last_ste, action); action_sz -= DR_STE_ACTION_SINGLE_SZ; action += DR_STE_ACTION_SINGLE_SZ; allow_modify_hdr = false; @@ -827,7 +694,7 @@ void dr_ste_v1_set_actions_rx(struct mlx5dr_domain *dmn, action_sz = DR_STE_ACTION_TRIPLE_SZ; } - dr_ste_v1_set_pop_vlan(last_ste, action, attr->vlans.count); + ste_ctx->set_pop_vlan(last_ste, action, attr->vlans.count); action_sz -= DR_STE_ACTION_SINGLE_SZ; action += DR_STE_ACTION_SINGLE_SZ; allow_ctr = false; @@ -868,8 +735,8 @@ void dr_ste_v1_set_actions_rx(struct mlx5dr_domain *dmn, last_ste, action); action_sz = DR_STE_ACTION_TRIPLE_SZ; } - dr_ste_v1_set_push_vlan(last_ste, action, - attr->vlans.headers[i]); + ste_ctx->set_push_vlan(last_ste, action, + attr->vlans.headers[i]); action_sz -= DR_STE_ACTION_DOUBLE_SZ; action += DR_STE_ACTION_DOUBLE_SZ; } @@ -895,9 +762,9 @@ void dr_ste_v1_set_actions_rx(struct mlx5dr_domain *dmn, action = MLX5_ADDR_OF(ste_mask_and_match_v1, last_ste, action); action_sz = DR_STE_ACTION_TRIPLE_SZ; } - dr_ste_v1_set_encap(last_ste, action, - attr->reformat.id, - attr->reformat.size); + ste_ctx->set_encap(last_ste, action, + attr->reformat.id, + attr->reformat.size); action_sz -= DR_STE_ACTION_DOUBLE_SZ; action += DR_STE_ACTION_DOUBLE_SZ; allow_modify_hdr = false; @@ -912,10 +779,10 @@ void dr_ste_v1_set_actions_rx(struct mlx5dr_domain *dmn, d_action = action + DR_STE_ACTION_SINGLE_SZ; - dr_ste_v1_set_encap_l3(last_ste, - action, d_action, - attr->reformat.id, - attr->reformat.size); + ste_ctx->set_encap_l3(last_ste, + action, d_action, + attr->reformat.id, + attr->reformat.size); action_sz -= DR_STE_ACTION_TRIPLE_SZ; allow_modify_hdr = false; } else if (action_type_set[DR_ACTION_TYP_INSERT_HDR]) { @@ -1027,9 +894,6 @@ void dr_ste_v1_set_action_copy(u8 *d_action, MLX5_SET(ste_double_action_copy_v1, d_action, source_right_shifter, src_shifter); } -#define DR_STE_DECAP_L3_ACTION_NUM 8 -#define DR_STE_L2_HDR_MAX_SZ 20 - int dr_ste_v1_set_action_decap_l3_list(void *data, u32 data_sz, u8 *hw_action, @@ -2330,7 +2194,12 @@ static struct mlx5dr_ste_ctx ste_ctx_v1 = { .set_action_decap_l3_list = &dr_ste_v1_set_action_decap_l3_list, .alloc_modify_hdr_chunk = &dr_ste_v1_alloc_modify_hdr_ptrn_arg, .dealloc_modify_hdr_chunk = &dr_ste_v1_free_modify_hdr_ptrn_arg, - + /* Actions bit set */ + .set_encap = &dr_ste_v1_set_encap, + .set_push_vlan = &dr_ste_v1_set_push_vlan, + .set_pop_vlan = &dr_ste_v1_set_pop_vlan, + .set_rx_decap = &dr_ste_v1_set_rx_decap, + .set_encap_l3 = &dr_ste_v1_set_encap_l3, /* Send */ .prepare_for_postsend = &dr_ste_v1_prepare_for_postsend, }; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v1.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v1.h index e2fc69867088..a8d9e308d339 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v1.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v1.h @@ -7,6 +7,138 @@ #include "dr_types.h" #include "dr_ste.h" +#define DR_STE_DECAP_L3_ACTION_NUM 8 +#define DR_STE_L2_HDR_MAX_SZ 20 +#define DR_STE_CALC_DFNR_TYPE(lookup_type, inner) \ + ((inner) ? 
DR_STE_V1_LU_TYPE_##lookup_type##_I : \ + DR_STE_V1_LU_TYPE_##lookup_type##_O) + +enum dr_ste_v1_entry_format { + DR_STE_V1_TYPE_BWC_BYTE = 0x0, + DR_STE_V1_TYPE_BWC_DW = 0x1, + DR_STE_V1_TYPE_MATCH = 0x2, + DR_STE_V1_TYPE_MATCH_RANGES = 0x7, +}; + +/* Lookup type is built from 2B: [ Definer mode 1B ][ Definer index 1B ] */ +enum { + DR_STE_V1_LU_TYPE_NOP = 0x0000, + DR_STE_V1_LU_TYPE_ETHL2_TNL = 0x0002, + DR_STE_V1_LU_TYPE_IBL3_EXT = 0x0102, + DR_STE_V1_LU_TYPE_ETHL2_O = 0x0003, + DR_STE_V1_LU_TYPE_IBL4 = 0x0103, + DR_STE_V1_LU_TYPE_ETHL2_I = 0x0004, + DR_STE_V1_LU_TYPE_SRC_QP_GVMI = 0x0104, + DR_STE_V1_LU_TYPE_ETHL2_SRC_O = 0x0005, + DR_STE_V1_LU_TYPE_ETHL2_HEADERS_O = 0x0105, + DR_STE_V1_LU_TYPE_ETHL2_SRC_I = 0x0006, + DR_STE_V1_LU_TYPE_ETHL2_HEADERS_I = 0x0106, + DR_STE_V1_LU_TYPE_ETHL3_IPV4_5_TUPLE_O = 0x0007, + DR_STE_V1_LU_TYPE_IPV6_DES_O = 0x0107, + DR_STE_V1_LU_TYPE_ETHL3_IPV4_5_TUPLE_I = 0x0008, + DR_STE_V1_LU_TYPE_IPV6_DES_I = 0x0108, + DR_STE_V1_LU_TYPE_ETHL4_O = 0x0009, + DR_STE_V1_LU_TYPE_IPV6_SRC_O = 0x0109, + DR_STE_V1_LU_TYPE_ETHL4_I = 0x000a, + DR_STE_V1_LU_TYPE_IPV6_SRC_I = 0x010a, + DR_STE_V1_LU_TYPE_ETHL2_SRC_DST_O = 0x000b, + DR_STE_V1_LU_TYPE_MPLS_O = 0x010b, + DR_STE_V1_LU_TYPE_ETHL2_SRC_DST_I = 0x000c, + DR_STE_V1_LU_TYPE_MPLS_I = 0x010c, + DR_STE_V1_LU_TYPE_ETHL3_IPV4_MISC_O = 0x000d, + DR_STE_V1_LU_TYPE_GRE = 0x010d, + DR_STE_V1_LU_TYPE_FLEX_PARSER_TNL_HEADER = 0x000e, + DR_STE_V1_LU_TYPE_GENERAL_PURPOSE = 0x010e, + DR_STE_V1_LU_TYPE_ETHL3_IPV4_MISC_I = 0x000f, + DR_STE_V1_LU_TYPE_STEERING_REGISTERS_0 = 0x010f, + DR_STE_V1_LU_TYPE_STEERING_REGISTERS_1 = 0x0110, + DR_STE_V1_LU_TYPE_FLEX_PARSER_OK = 0x0011, + DR_STE_V1_LU_TYPE_FLEX_PARSER_0 = 0x0111, + DR_STE_V1_LU_TYPE_FLEX_PARSER_1 = 0x0112, + DR_STE_V1_LU_TYPE_ETHL4_MISC_O = 0x0113, + DR_STE_V1_LU_TYPE_ETHL4_MISC_I = 0x0114, + DR_STE_V1_LU_TYPE_INVALID = 0x00ff, + DR_STE_V1_LU_TYPE_DONT_CARE = MLX5DR_STE_LU_TYPE_DONT_CARE, +}; + +enum dr_ste_v1_header_anchors { + DR_STE_HEADER_ANCHOR_START_OUTER = 0x00, + DR_STE_HEADER_ANCHOR_1ST_VLAN = 0x02, + DR_STE_HEADER_ANCHOR_IPV6_IPV4 = 0x07, + DR_STE_HEADER_ANCHOR_INNER_MAC = 0x13, + DR_STE_HEADER_ANCHOR_INNER_IPV6_IPV4 = 0x19, +}; + +enum dr_ste_v1_action_size { + DR_STE_ACTION_SINGLE_SZ = 4, + DR_STE_ACTION_DOUBLE_SZ = 8, + DR_STE_ACTION_TRIPLE_SZ = 12, +}; + +enum dr_ste_v1_action_insert_ptr_attr { + DR_STE_V1_ACTION_INSERT_PTR_ATTR_NONE = 0, /* Regular push header (e.g. 
push vlan) */ + DR_STE_V1_ACTION_INSERT_PTR_ATTR_ENCAP = 1, /* Encapsulation / Tunneling */ + DR_STE_V1_ACTION_INSERT_PTR_ATTR_ESP = 2, /* IPsec */ +}; + +enum dr_ste_v1_action_id { + DR_STE_V1_ACTION_ID_NOP = 0x00, + DR_STE_V1_ACTION_ID_COPY = 0x05, + DR_STE_V1_ACTION_ID_SET = 0x06, + DR_STE_V1_ACTION_ID_ADD = 0x07, + DR_STE_V1_ACTION_ID_REMOVE_BY_SIZE = 0x08, + DR_STE_V1_ACTION_ID_REMOVE_HEADER_TO_HEADER = 0x09, + DR_STE_V1_ACTION_ID_INSERT_INLINE = 0x0a, + DR_STE_V1_ACTION_ID_INSERT_POINTER = 0x0b, + DR_STE_V1_ACTION_ID_FLOW_TAG = 0x0c, + DR_STE_V1_ACTION_ID_QUEUE_ID_SEL = 0x0d, + DR_STE_V1_ACTION_ID_ACCELERATED_LIST = 0x0e, + DR_STE_V1_ACTION_ID_MODIFY_LIST = 0x0f, + DR_STE_V1_ACTION_ID_ASO = 0x12, + DR_STE_V1_ACTION_ID_TRAILER = 0x13, + DR_STE_V1_ACTION_ID_COUNTER_ID = 0x14, + DR_STE_V1_ACTION_ID_MAX = 0x21, + /* use for special cases */ + DR_STE_V1_ACTION_ID_SPECIAL_ENCAP_L3 = 0x22, +}; + +enum { + DR_STE_V1_ACTION_MDFY_FLD_L2_OUT_0 = 0x00, + DR_STE_V1_ACTION_MDFY_FLD_L2_OUT_1 = 0x01, + DR_STE_V1_ACTION_MDFY_FLD_L2_OUT_2 = 0x02, + DR_STE_V1_ACTION_MDFY_FLD_SRC_L2_OUT_0 = 0x08, + DR_STE_V1_ACTION_MDFY_FLD_SRC_L2_OUT_1 = 0x09, + DR_STE_V1_ACTION_MDFY_FLD_L3_OUT_0 = 0x0e, + DR_STE_V1_ACTION_MDFY_FLD_L4_OUT_0 = 0x18, + DR_STE_V1_ACTION_MDFY_FLD_L4_OUT_1 = 0x19, + DR_STE_V1_ACTION_MDFY_FLD_IPV4_OUT_0 = 0x40, + DR_STE_V1_ACTION_MDFY_FLD_IPV4_OUT_1 = 0x41, + DR_STE_V1_ACTION_MDFY_FLD_IPV6_DST_OUT_0 = 0x44, + DR_STE_V1_ACTION_MDFY_FLD_IPV6_DST_OUT_1 = 0x45, + DR_STE_V1_ACTION_MDFY_FLD_IPV6_DST_OUT_2 = 0x46, + DR_STE_V1_ACTION_MDFY_FLD_IPV6_DST_OUT_3 = 0x47, + DR_STE_V1_ACTION_MDFY_FLD_IPV6_SRC_OUT_0 = 0x4c, + DR_STE_V1_ACTION_MDFY_FLD_IPV6_SRC_OUT_1 = 0x4d, + DR_STE_V1_ACTION_MDFY_FLD_IPV6_SRC_OUT_2 = 0x4e, + DR_STE_V1_ACTION_MDFY_FLD_IPV6_SRC_OUT_3 = 0x4f, + DR_STE_V1_ACTION_MDFY_FLD_TCP_MISC_0 = 0x5e, + DR_STE_V1_ACTION_MDFY_FLD_TCP_MISC_1 = 0x5f, + DR_STE_V1_ACTION_MDFY_FLD_CFG_HDR_0_0 = 0x6f, + DR_STE_V1_ACTION_MDFY_FLD_CFG_HDR_0_1 = 0x70, + DR_STE_V1_ACTION_MDFY_FLD_METADATA_2_CQE = 0x7b, + DR_STE_V1_ACTION_MDFY_FLD_GNRL_PURPOSE = 0x7c, + DR_STE_V1_ACTION_MDFY_FLD_REGISTER_2_0 = 0x8c, + DR_STE_V1_ACTION_MDFY_FLD_REGISTER_2_1 = 0x8d, + DR_STE_V1_ACTION_MDFY_FLD_REGISTER_1_0 = 0x8e, + DR_STE_V1_ACTION_MDFY_FLD_REGISTER_1_1 = 0x8f, + DR_STE_V1_ACTION_MDFY_FLD_REGISTER_0_0 = 0x90, + DR_STE_V1_ACTION_MDFY_FLD_REGISTER_0_1 = 0x91, +}; + +enum dr_ste_v1_aso_ctx_type { + DR_STE_V1_ASO_CTX_TYPE_POLICERS = 0x2, +}; + bool dr_ste_v1_is_miss_addr_set(u8 *hw_ste_p); void dr_ste_v1_set_miss_addr(u8 *hw_ste_p, u64 miss_addr); u64 dr_ste_v1_get_miss_addr(u8 *hw_ste_p); @@ -17,11 +149,18 @@ u16 dr_ste_v1_get_next_lu_type(u8 *hw_ste_p); void dr_ste_v1_set_hit_addr(u8 *hw_ste_p, u64 icm_addr, u32 ht_size); void dr_ste_v1_init(u8 *hw_ste_p, u16 lu_type, bool is_rx, u16 gvmi); void dr_ste_v1_prepare_for_postsend(u8 *hw_ste_p, u32 ste_size); -void dr_ste_v1_set_actions_tx(struct mlx5dr_domain *dmn, u8 *action_type_set, - u32 actions_caps, u8 *last_ste, +void dr_ste_v1_set_reparse(u8 *hw_ste_p); +void dr_ste_v1_set_encap(u8 *hw_ste_p, u8 *d_action, u32 reformat_id, int size); +void dr_ste_v1_set_push_vlan(u8 *hw_ste_p, u8 *d_action, u32 vlan_hdr); +void dr_ste_v1_set_pop_vlan(u8 *hw_ste_p, u8 *s_action, u8 vlans_num); +void dr_ste_v1_set_encap_l3(u8 *hw_ste_p, u8 *frst_s_action, u8 *scnd_d_action, + u32 reformat_id, int size); +void dr_ste_v1_set_rx_decap(u8 *hw_ste_p, u8 *s_action); +void dr_ste_v1_set_actions_tx(struct mlx5dr_ste_ctx *ste_ctx, struct mlx5dr_domain *dmn, + u8 *action_type_set, u32 
actions_caps, u8 *last_ste, struct mlx5dr_ste_actions_attr *attr, u32 *added_stes); -void dr_ste_v1_set_actions_rx(struct mlx5dr_domain *dmn, u8 *action_type_set, - u32 actions_caps, u8 *last_ste, +void dr_ste_v1_set_actions_rx(struct mlx5dr_ste_ctx *ste_ctx, struct mlx5dr_domain *dmn, + u8 *action_type_set, u32 actions_caps, u8 *last_ste, struct mlx5dr_ste_actions_attr *attr, u32 *added_stes); void dr_ste_v1_set_action_set(u8 *d_action, u8 hw_field, u8 shifter, u8 length, u32 data); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v2.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v2.c index 808b013cf48c..0882dba0f64b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v2.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v2.c @@ -2,167 +2,7 @@ /* Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */ #include "dr_ste_v1.h" - -enum { - DR_STE_V2_ACTION_MDFY_FLD_L2_OUT_0 = 0x00, - DR_STE_V2_ACTION_MDFY_FLD_L2_OUT_1 = 0x01, - DR_STE_V2_ACTION_MDFY_FLD_L2_OUT_2 = 0x02, - DR_STE_V2_ACTION_MDFY_FLD_SRC_L2_OUT_0 = 0x08, - DR_STE_V2_ACTION_MDFY_FLD_SRC_L2_OUT_1 = 0x09, - DR_STE_V2_ACTION_MDFY_FLD_L3_OUT_0 = 0x0e, - DR_STE_V2_ACTION_MDFY_FLD_L4_OUT_0 = 0x18, - DR_STE_V2_ACTION_MDFY_FLD_L4_OUT_1 = 0x19, - DR_STE_V2_ACTION_MDFY_FLD_IPV4_OUT_0 = 0x40, - DR_STE_V2_ACTION_MDFY_FLD_IPV4_OUT_1 = 0x41, - DR_STE_V2_ACTION_MDFY_FLD_IPV6_DST_OUT_0 = 0x44, - DR_STE_V2_ACTION_MDFY_FLD_IPV6_DST_OUT_1 = 0x45, - DR_STE_V2_ACTION_MDFY_FLD_IPV6_DST_OUT_2 = 0x46, - DR_STE_V2_ACTION_MDFY_FLD_IPV6_DST_OUT_3 = 0x47, - DR_STE_V2_ACTION_MDFY_FLD_IPV6_SRC_OUT_0 = 0x4c, - DR_STE_V2_ACTION_MDFY_FLD_IPV6_SRC_OUT_1 = 0x4d, - DR_STE_V2_ACTION_MDFY_FLD_IPV6_SRC_OUT_2 = 0x4e, - DR_STE_V2_ACTION_MDFY_FLD_IPV6_SRC_OUT_3 = 0x4f, - DR_STE_V2_ACTION_MDFY_FLD_TCP_MISC_0 = 0x5e, - DR_STE_V2_ACTION_MDFY_FLD_TCP_MISC_1 = 0x5f, - DR_STE_V2_ACTION_MDFY_FLD_CFG_HDR_0_0 = 0x6f, - DR_STE_V2_ACTION_MDFY_FLD_CFG_HDR_0_1 = 0x70, - DR_STE_V2_ACTION_MDFY_FLD_METADATA_2_CQE = 0x7b, - DR_STE_V2_ACTION_MDFY_FLD_GNRL_PURPOSE = 0x7c, - DR_STE_V2_ACTION_MDFY_FLD_REGISTER_2_0 = 0x90, - DR_STE_V2_ACTION_MDFY_FLD_REGISTER_2_1 = 0x91, - DR_STE_V2_ACTION_MDFY_FLD_REGISTER_1_0 = 0x92, - DR_STE_V2_ACTION_MDFY_FLD_REGISTER_1_1 = 0x93, - DR_STE_V2_ACTION_MDFY_FLD_REGISTER_0_0 = 0x94, - DR_STE_V2_ACTION_MDFY_FLD_REGISTER_0_1 = 0x95, -}; - -static const struct mlx5dr_ste_action_modify_field dr_ste_v2_action_modify_field_arr[] = { - [MLX5_ACTION_IN_FIELD_OUT_SMAC_47_16] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_SRC_L2_OUT_0, .start = 0, .end = 31, - }, - [MLX5_ACTION_IN_FIELD_OUT_SMAC_15_0] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_SRC_L2_OUT_1, .start = 16, .end = 31, - }, - [MLX5_ACTION_IN_FIELD_OUT_ETHERTYPE] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L2_OUT_1, .start = 0, .end = 15, - }, - [MLX5_ACTION_IN_FIELD_OUT_DMAC_47_16] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L2_OUT_0, .start = 0, .end = 31, - }, - [MLX5_ACTION_IN_FIELD_OUT_DMAC_15_0] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L2_OUT_1, .start = 16, .end = 31, - }, - [MLX5_ACTION_IN_FIELD_OUT_IP_DSCP] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L3_OUT_0, .start = 18, .end = 23, - }, - [MLX5_ACTION_IN_FIELD_OUT_TCP_FLAGS] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L4_OUT_1, .start = 16, .end = 24, - .l4_type = DR_STE_ACTION_MDFY_TYPE_L4_TCP, - }, - [MLX5_ACTION_IN_FIELD_OUT_TCP_SPORT] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L4_OUT_0, .start = 16, .end = 31, - .l4_type = 
DR_STE_ACTION_MDFY_TYPE_L4_TCP, - }, - [MLX5_ACTION_IN_FIELD_OUT_TCP_DPORT] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L4_OUT_0, .start = 0, .end = 15, - .l4_type = DR_STE_ACTION_MDFY_TYPE_L4_TCP, - }, - [MLX5_ACTION_IN_FIELD_OUT_IP_TTL] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L3_OUT_0, .start = 8, .end = 15, - .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV4, - }, - [MLX5_ACTION_IN_FIELD_OUT_IPV6_HOPLIMIT] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L3_OUT_0, .start = 8, .end = 15, - .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV6, - }, - [MLX5_ACTION_IN_FIELD_OUT_UDP_SPORT] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L4_OUT_0, .start = 16, .end = 31, - .l4_type = DR_STE_ACTION_MDFY_TYPE_L4_UDP, - }, - [MLX5_ACTION_IN_FIELD_OUT_UDP_DPORT] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L4_OUT_0, .start = 0, .end = 15, - .l4_type = DR_STE_ACTION_MDFY_TYPE_L4_UDP, - }, - [MLX5_ACTION_IN_FIELD_OUT_SIPV6_127_96] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_IPV6_SRC_OUT_0, .start = 0, .end = 31, - .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV6, - }, - [MLX5_ACTION_IN_FIELD_OUT_SIPV6_95_64] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_IPV6_SRC_OUT_1, .start = 0, .end = 31, - .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV6, - }, - [MLX5_ACTION_IN_FIELD_OUT_SIPV6_63_32] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_IPV6_SRC_OUT_2, .start = 0, .end = 31, - .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV6, - }, - [MLX5_ACTION_IN_FIELD_OUT_SIPV6_31_0] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_IPV6_SRC_OUT_3, .start = 0, .end = 31, - .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV6, - }, - [MLX5_ACTION_IN_FIELD_OUT_DIPV6_127_96] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_IPV6_DST_OUT_0, .start = 0, .end = 31, - .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV6, - }, - [MLX5_ACTION_IN_FIELD_OUT_DIPV6_95_64] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_IPV6_DST_OUT_1, .start = 0, .end = 31, - .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV6, - }, - [MLX5_ACTION_IN_FIELD_OUT_DIPV6_63_32] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_IPV6_DST_OUT_2, .start = 0, .end = 31, - .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV6, - }, - [MLX5_ACTION_IN_FIELD_OUT_DIPV6_31_0] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_IPV6_DST_OUT_3, .start = 0, .end = 31, - .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV6, - }, - [MLX5_ACTION_IN_FIELD_OUT_SIPV4] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_IPV4_OUT_0, .start = 0, .end = 31, - .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV4, - }, - [MLX5_ACTION_IN_FIELD_OUT_DIPV4] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_IPV4_OUT_1, .start = 0, .end = 31, - .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV4, - }, - [MLX5_ACTION_IN_FIELD_METADATA_REG_A] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_GNRL_PURPOSE, .start = 0, .end = 31, - }, - [MLX5_ACTION_IN_FIELD_METADATA_REG_B] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_METADATA_2_CQE, .start = 0, .end = 31, - }, - [MLX5_ACTION_IN_FIELD_METADATA_REG_C_0] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_REGISTER_0_0, .start = 0, .end = 31, - }, - [MLX5_ACTION_IN_FIELD_METADATA_REG_C_1] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_REGISTER_0_1, .start = 0, .end = 31, - }, - [MLX5_ACTION_IN_FIELD_METADATA_REG_C_2] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_REGISTER_1_0, .start = 0, .end = 31, - }, - [MLX5_ACTION_IN_FIELD_METADATA_REG_C_3] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_REGISTER_1_1, .start = 0, .end = 31, - }, - [MLX5_ACTION_IN_FIELD_METADATA_REG_C_4] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_REGISTER_2_0, .start = 0, .end = 31, - }, - [MLX5_ACTION_IN_FIELD_METADATA_REG_C_5] = { - .hw_field = 
DR_STE_V2_ACTION_MDFY_FLD_REGISTER_2_1, .start = 0, .end = 31, - }, - [MLX5_ACTION_IN_FIELD_OUT_TCP_SEQ_NUM] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_TCP_MISC_0, .start = 0, .end = 31, - }, - [MLX5_ACTION_IN_FIELD_OUT_TCP_ACK_NUM] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_TCP_MISC_1, .start = 0, .end = 31, - }, - [MLX5_ACTION_IN_FIELD_OUT_FIRST_VID] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L2_OUT_2, .start = 0, .end = 15, - }, - [MLX5_ACTION_IN_FIELD_OUT_EMD_31_0] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_CFG_HDR_0_1, .start = 0, .end = 31, - }, - [MLX5_ACTION_IN_FIELD_OUT_EMD_47_32] = { - .hw_field = DR_STE_V2_ACTION_MDFY_FLD_CFG_HDR_0_0, .start = 0, .end = 15, - }, -}; +#include "dr_ste_v2.h" static struct mlx5dr_ste_ctx ste_ctx_v2 = { /* Builders */ @@ -223,7 +63,12 @@ static struct mlx5dr_ste_ctx ste_ctx_v2 = { .set_action_decap_l3_list = &dr_ste_v1_set_action_decap_l3_list, .alloc_modify_hdr_chunk = &dr_ste_v1_alloc_modify_hdr_ptrn_arg, .dealloc_modify_hdr_chunk = &dr_ste_v1_free_modify_hdr_ptrn_arg, - + /* Actions bit set */ + .set_encap = &dr_ste_v1_set_encap, + .set_push_vlan = &dr_ste_v1_set_push_vlan, + .set_pop_vlan = &dr_ste_v1_set_pop_vlan, + .set_rx_decap = &dr_ste_v1_set_rx_decap, + .set_encap_l3 = &dr_ste_v1_set_encap_l3, /* Send */ .prepare_for_postsend = &dr_ste_v1_prepare_for_postsend, }; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v2.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v2.h new file mode 100644 index 000000000000..d853fde49cfc --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v2.h @@ -0,0 +1,168 @@ +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ +/* Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */ + +#ifndef _DR_STE_V2_ +#define _DR_STE_V2_ + +enum { + DR_STE_V2_ACTION_MDFY_FLD_L2_OUT_0 = 0x00, + DR_STE_V2_ACTION_MDFY_FLD_L2_OUT_1 = 0x01, + DR_STE_V2_ACTION_MDFY_FLD_L2_OUT_2 = 0x02, + DR_STE_V2_ACTION_MDFY_FLD_SRC_L2_OUT_0 = 0x08, + DR_STE_V2_ACTION_MDFY_FLD_SRC_L2_OUT_1 = 0x09, + DR_STE_V2_ACTION_MDFY_FLD_L3_OUT_0 = 0x0e, + DR_STE_V2_ACTION_MDFY_FLD_L4_OUT_0 = 0x18, + DR_STE_V2_ACTION_MDFY_FLD_L4_OUT_1 = 0x19, + DR_STE_V2_ACTION_MDFY_FLD_IPV4_OUT_0 = 0x40, + DR_STE_V2_ACTION_MDFY_FLD_IPV4_OUT_1 = 0x41, + DR_STE_V2_ACTION_MDFY_FLD_IPV6_DST_OUT_0 = 0x44, + DR_STE_V2_ACTION_MDFY_FLD_IPV6_DST_OUT_1 = 0x45, + DR_STE_V2_ACTION_MDFY_FLD_IPV6_DST_OUT_2 = 0x46, + DR_STE_V2_ACTION_MDFY_FLD_IPV6_DST_OUT_3 = 0x47, + DR_STE_V2_ACTION_MDFY_FLD_IPV6_SRC_OUT_0 = 0x4c, + DR_STE_V2_ACTION_MDFY_FLD_IPV6_SRC_OUT_1 = 0x4d, + DR_STE_V2_ACTION_MDFY_FLD_IPV6_SRC_OUT_2 = 0x4e, + DR_STE_V2_ACTION_MDFY_FLD_IPV6_SRC_OUT_3 = 0x4f, + DR_STE_V2_ACTION_MDFY_FLD_TCP_MISC_0 = 0x5e, + DR_STE_V2_ACTION_MDFY_FLD_TCP_MISC_1 = 0x5f, + DR_STE_V2_ACTION_MDFY_FLD_CFG_HDR_0_0 = 0x6f, + DR_STE_V2_ACTION_MDFY_FLD_CFG_HDR_0_1 = 0x70, + DR_STE_V2_ACTION_MDFY_FLD_METADATA_2_CQE = 0x7b, + DR_STE_V2_ACTION_MDFY_FLD_GNRL_PURPOSE = 0x7c, + DR_STE_V2_ACTION_MDFY_FLD_REGISTER_2_0 = 0x90, + DR_STE_V2_ACTION_MDFY_FLD_REGISTER_2_1 = 0x91, + DR_STE_V2_ACTION_MDFY_FLD_REGISTER_1_0 = 0x92, + DR_STE_V2_ACTION_MDFY_FLD_REGISTER_1_1 = 0x93, + DR_STE_V2_ACTION_MDFY_FLD_REGISTER_0_0 = 0x94, + DR_STE_V2_ACTION_MDFY_FLD_REGISTER_0_1 = 0x95, +}; + +static const struct mlx5dr_ste_action_modify_field dr_ste_v2_action_modify_field_arr[] = { + [MLX5_ACTION_IN_FIELD_OUT_SMAC_47_16] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_SRC_L2_OUT_0, .start = 0, .end = 31, + }, + [MLX5_ACTION_IN_FIELD_OUT_SMAC_15_0] = { + 
.hw_field = DR_STE_V2_ACTION_MDFY_FLD_SRC_L2_OUT_1, .start = 16, .end = 31, + }, + [MLX5_ACTION_IN_FIELD_OUT_ETHERTYPE] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L2_OUT_1, .start = 0, .end = 15, + }, + [MLX5_ACTION_IN_FIELD_OUT_DMAC_47_16] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L2_OUT_0, .start = 0, .end = 31, + }, + [MLX5_ACTION_IN_FIELD_OUT_DMAC_15_0] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L2_OUT_1, .start = 16, .end = 31, + }, + [MLX5_ACTION_IN_FIELD_OUT_IP_DSCP] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L3_OUT_0, .start = 18, .end = 23, + }, + [MLX5_ACTION_IN_FIELD_OUT_TCP_FLAGS] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L4_OUT_1, .start = 16, .end = 24, + .l4_type = DR_STE_ACTION_MDFY_TYPE_L4_TCP, + }, + [MLX5_ACTION_IN_FIELD_OUT_TCP_SPORT] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L4_OUT_0, .start = 16, .end = 31, + .l4_type = DR_STE_ACTION_MDFY_TYPE_L4_TCP, + }, + [MLX5_ACTION_IN_FIELD_OUT_TCP_DPORT] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L4_OUT_0, .start = 0, .end = 15, + .l4_type = DR_STE_ACTION_MDFY_TYPE_L4_TCP, + }, + [MLX5_ACTION_IN_FIELD_OUT_IP_TTL] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L3_OUT_0, .start = 8, .end = 15, + .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV4, + }, + [MLX5_ACTION_IN_FIELD_OUT_IPV6_HOPLIMIT] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L3_OUT_0, .start = 8, .end = 15, + .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV6, + }, + [MLX5_ACTION_IN_FIELD_OUT_UDP_SPORT] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L4_OUT_0, .start = 16, .end = 31, + .l4_type = DR_STE_ACTION_MDFY_TYPE_L4_UDP, + }, + [MLX5_ACTION_IN_FIELD_OUT_UDP_DPORT] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L4_OUT_0, .start = 0, .end = 15, + .l4_type = DR_STE_ACTION_MDFY_TYPE_L4_UDP, + }, + [MLX5_ACTION_IN_FIELD_OUT_SIPV6_127_96] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_IPV6_SRC_OUT_0, .start = 0, .end = 31, + .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV6, + }, + [MLX5_ACTION_IN_FIELD_OUT_SIPV6_95_64] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_IPV6_SRC_OUT_1, .start = 0, .end = 31, + .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV6, + }, + [MLX5_ACTION_IN_FIELD_OUT_SIPV6_63_32] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_IPV6_SRC_OUT_2, .start = 0, .end = 31, + .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV6, + }, + [MLX5_ACTION_IN_FIELD_OUT_SIPV6_31_0] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_IPV6_SRC_OUT_3, .start = 0, .end = 31, + .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV6, + }, + [MLX5_ACTION_IN_FIELD_OUT_DIPV6_127_96] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_IPV6_DST_OUT_0, .start = 0, .end = 31, + .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV6, + }, + [MLX5_ACTION_IN_FIELD_OUT_DIPV6_95_64] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_IPV6_DST_OUT_1, .start = 0, .end = 31, + .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV6, + }, + [MLX5_ACTION_IN_FIELD_OUT_DIPV6_63_32] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_IPV6_DST_OUT_2, .start = 0, .end = 31, + .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV6, + }, + [MLX5_ACTION_IN_FIELD_OUT_DIPV6_31_0] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_IPV6_DST_OUT_3, .start = 0, .end = 31, + .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV6, + }, + [MLX5_ACTION_IN_FIELD_OUT_SIPV4] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_IPV4_OUT_0, .start = 0, .end = 31, + .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV4, + }, + [MLX5_ACTION_IN_FIELD_OUT_DIPV4] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_IPV4_OUT_1, .start = 0, .end = 31, + .l3_type = DR_STE_ACTION_MDFY_TYPE_L3_IPV4, + }, + [MLX5_ACTION_IN_FIELD_METADATA_REG_A] = { + .hw_field = 
DR_STE_V2_ACTION_MDFY_FLD_GNRL_PURPOSE, .start = 0, .end = 31, + }, + [MLX5_ACTION_IN_FIELD_METADATA_REG_B] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_METADATA_2_CQE, .start = 0, .end = 31, + }, + [MLX5_ACTION_IN_FIELD_METADATA_REG_C_0] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_REGISTER_0_0, .start = 0, .end = 31, + }, + [MLX5_ACTION_IN_FIELD_METADATA_REG_C_1] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_REGISTER_0_1, .start = 0, .end = 31, + }, + [MLX5_ACTION_IN_FIELD_METADATA_REG_C_2] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_REGISTER_1_0, .start = 0, .end = 31, + }, + [MLX5_ACTION_IN_FIELD_METADATA_REG_C_3] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_REGISTER_1_1, .start = 0, .end = 31, + }, + [MLX5_ACTION_IN_FIELD_METADATA_REG_C_4] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_REGISTER_2_0, .start = 0, .end = 31, + }, + [MLX5_ACTION_IN_FIELD_METADATA_REG_C_5] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_REGISTER_2_1, .start = 0, .end = 31, + }, + [MLX5_ACTION_IN_FIELD_OUT_TCP_SEQ_NUM] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_TCP_MISC_0, .start = 0, .end = 31, + }, + [MLX5_ACTION_IN_FIELD_OUT_TCP_ACK_NUM] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_TCP_MISC_1, .start = 0, .end = 31, + }, + [MLX5_ACTION_IN_FIELD_OUT_FIRST_VID] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_L2_OUT_2, .start = 0, .end = 15, + }, + [MLX5_ACTION_IN_FIELD_OUT_EMD_31_0] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_CFG_HDR_0_1, .start = 0, .end = 31, + }, + [MLX5_ACTION_IN_FIELD_OUT_EMD_47_32] = { + .hw_field = DR_STE_V2_ACTION_MDFY_FLD_CFG_HDR_0_0, .start = 0, .end = 15, + }, +}; + +#endif /* _DR_STE_V2_ */ From patchwork Tue Dec 3 20:29:19 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 13892910 X-Patchwork-Delegate: kuba@kernel.org Received: from NAM10-DM6-obe.outbound.protection.outlook.com (mail-dm6nam10on2054.outbound.protection.outlook.com [40.107.93.54]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2F70A207A0B; Tue, 3 Dec 2024 20:30:52 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=fail smtp.client-ip=40.107.93.54 ARC-Seal: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733257856; cv=fail; b=AnDka8F/Adj5C+tI2i5dheBm7k2K5kGINs1ec3lC6Vu3nI0Lky/ihKjZdVMDQAandwSU7g9AmC6g3hggh2WLI7rYZpdxOx6C2W55MjmLCmTh53xKFaXf6fwhV/HtkHjmy2ZwS/fZwOPJkAy7HeY7ZIXWIk7AkMExU1YDD6QzpgE= ARC-Message-Signature: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733257856; c=relaxed/simple; bh=QCSEl6bliieREXUr8/JO++VkxisTk0SeosFxUyuPj3M=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=I+Fu4arRV3tb7BVWT3foMPTWr0KpV+mZZpRYYCPsSQF4mV2Hs0FPQeVwe8Yuf0uVD9FCaW4U0a8S2E7+FzBVCwWC8ilRvN63V0vjOyGxCS/Hj3AtUUFdqDoS+bcvEbc2hHYVMapq2Tv2SQ2xNFzjVFw8+rGKw/p5mq9rI2FXxLg= ARC-Authentication-Results: i=2; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=nvidia.com; spf=fail smtp.mailfrom=nvidia.com; dkim=pass (2048-bit key) header.d=Nvidia.com header.i=@Nvidia.com header.b=HPeAD85O; arc=fail smtp.client-ip=40.107.93.54 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=nvidia.com Authentication-Results: smtp.subspace.kernel.org; spf=fail smtp.mailfrom=nvidia.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=Nvidia.com header.i=@Nvidia.com header.b="HPeAD85O" 
From: Tariq Toukan
To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn"
CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , , Itamar Gozlan , "Yevgeny Kliteynik" , Tariq Toukan
Subject: [PATCH net-next V4 06/11] net/mlx5: DR, Add support for ConnectX-8 steering
Date: Tue, 3 Dec 2024 22:29:19 +0200
Message-ID: <20241203202924.228440-7-tariqt@nvidia.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20241203202924.228440-1-tariqt@nvidia.com>
References: <20241203202924.228440-1-tariqt@nvidia.com>
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org
MIME-Version: 1.0
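The previous patch and the ConnectX-8 support in this one hinge on a single mechanism: the per-format action encoders (set_encap, set_push_vlan, set_pop_vlan, set_rx_decap, set_encap_l3) become callbacks in struct mlx5dr_ste_ctx, and set_actions_tx/set_actions_rx now receive the context so they can dispatch through it. A new format can then override only the encoders while reusing the STEv1 TX/RX action-building flow. The following is a minimal, self-contained C sketch of that dispatch pattern; the struct and callback names mirror the driver, but the bodies and values are illustrative placeholders, not the kernel implementation.

/* Simplified sketch of the callback-table dispatch used by the SWS STE code.
 * Names mirror the driver; bodies are placeholders, not the real encoders.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint8_t u8;
typedef uint32_t u32;

struct mlx5dr_ste_ctx {
	/* Format-specific encoders for individual action fields */
	void (*set_encap)(u8 *hw_ste_p, u8 *d_action, u32 reformat_id, int size);
	void (*set_push_vlan)(u8 *hw_ste_p, u8 *d_action, u32 vlan_hdr);
	/* Shared action-building flow; takes the context so it can dispatch */
	void (*set_actions_tx)(struct mlx5dr_ste_ctx *ste_ctx, u8 *hw_ste_arr);
};

/* "v1"-layout encoders (placeholder bodies) */
static void v1_set_encap(u8 *ste, u8 *act, u32 id, int size)
{
	printf("v1 encap: id=%u size_in_words=%d\n", (unsigned int)id, size / 2);
}

static void v1_set_push_vlan(u8 *ste, u8 *act, u32 vlan_hdr)
{
	printf("v1 push_vlan: hdr=0x%x\n", (unsigned int)vlan_hdr);
}

/* "v3"-layout encoders: only the field layout differs (placeholder bodies) */
static void v3_set_encap(u8 *ste, u8 *act, u32 id, int size)
{
	printf("v3 encap: id=%u size_in_words=%d\n", (unsigned int)id, size / 2);
}

static void v3_set_push_vlan(u8 *ste, u8 *act, u32 vlan_hdr)
{
	printf("v3 push_vlan: hdr=0x%x\n", (unsigned int)vlan_hdr);
}

/* One TX flow shared by every format: it calls through the context instead of
 * hard-coding the v1 helpers, which is what the new ste_ctx parameter enables.
 */
static void v1_set_actions_tx(struct mlx5dr_ste_ctx *ste_ctx, u8 *hw_ste_arr)
{
	u8 *action = hw_ste_arr;

	ste_ctx->set_push_vlan(hw_ste_arr, action, 0x81000064);
	ste_ctx->set_encap(hw_ste_arr, action, 7 /* reformat_id */, 16 /* bytes */);
}

static struct mlx5dr_ste_ctx ste_ctx_v1 = {
	.set_encap = v1_set_encap,
	.set_push_vlan = v1_set_push_vlan,
	.set_actions_tx = v1_set_actions_tx,
};

/* The v3 context reuses the v1 flow and overrides only the encoders */
static struct mlx5dr_ste_ctx ste_ctx_v3 = {
	.set_encap = v3_set_encap,
	.set_push_vlan = v3_set_push_vlan,
	.set_actions_tx = v1_set_actions_tx,
};

int main(void)
{
	u8 hw_ste[64] = { 0 };

	ste_ctx_v1.set_actions_tx(&ste_ctx_v1, hw_ste);
	ste_ctx_v3.set_actions_tx(&ste_ctx_v3, hw_ste);
	return 0;
}

In the patch below, ste_ctx_v3 takes exactly this shape: .set_actions_rx and .set_actions_tx point at dr_ste_v1_set_actions_rx/tx, while .set_encap, .set_push_vlan, .set_pop_vlan, .set_rx_decap and .set_encap_l3 point at the new dr_ste_v3_* encoders.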
X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB5976 X-Patchwork-Delegate: kuba@kernel.org From: Itamar Gozlan Add support for a new steering format version that is implemented by ConnectX-8. Except for several differences, the STEv3 is identical to STEv2, so for most callbacks STEv3 context struct will call STEv2 functions. Signed-off-by: Itamar Gozlan Signed-off-by: Yevgeny Kliteynik Signed-off-by: Tariq Toukan --- .../net/ethernet/mellanox/mlx5/core/Makefile | 1 + .../mlx5/core/steering/sws/dr_domain.c | 2 +- .../mellanox/mlx5/core/steering/sws/dr_ste.c | 2 + .../mellanox/mlx5/core/steering/sws/dr_ste.h | 1 + .../mlx5/core/steering/sws/dr_ste_v3.c | 221 ++++++++++++++++++ .../mlx5/core/steering/sws/mlx5_ifc_dr.h | 40 ++++ .../mellanox/mlx5/core/steering/sws/mlx5dr.h | 2 +- 7 files changed, 267 insertions(+), 2 deletions(-) create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v3.c diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Makefile b/drivers/net/ethernet/mellanox/mlx5/core/Makefile index be3d0876c521..f9db8b8374fa 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/Makefile +++ b/drivers/net/ethernet/mellanox/mlx5/core/Makefile @@ -123,6 +123,7 @@ mlx5_core-$(CONFIG_MLX5_SW_STEERING) += steering/sws/dr_domain.o \ steering/sws/dr_ste_v0.o \ steering/sws/dr_ste_v1.o \ steering/sws/dr_ste_v2.o \ + steering/sws/dr_ste_v3.o \ steering/sws/dr_cmd.o \ steering/sws/dr_fw.o \ steering/sws/dr_action.o \ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_domain.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_domain.c index 3d74109f8230..bd361ba6658c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_domain.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_domain.c @@ -8,7 +8,7 @@ #define DR_DOMAIN_SW_STEERING_SUPPORTED(dmn, dmn_type) \ ((dmn)->info.caps.dmn_type##_sw_owner || \ ((dmn)->info.caps.dmn_type##_sw_owner_v2 && \ - (dmn)->info.caps.sw_format_ver <= MLX5_STEERING_FORMAT_CONNECTX_7)) + (dmn)->info.caps.sw_format_ver <= MLX5_STEERING_FORMAT_CONNECTX_8)) bool mlx5dr_domain_is_support_ptrn_arg(struct mlx5dr_domain *dmn) { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste.c index 01ba8eae2983..c8b8ff80c7c7 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste.c @@ -1458,6 +1458,8 @@ struct mlx5dr_ste_ctx *mlx5dr_ste_get_ctx(u8 version) return mlx5dr_ste_get_ctx_v1(); else if (version == MLX5_STEERING_FORMAT_CONNECTX_7) return mlx5dr_ste_get_ctx_v2(); + else if (version == MLX5_STEERING_FORMAT_CONNECTX_8) + return mlx5dr_ste_get_ctx_v3(); return NULL; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste.h index b6ec8d30d990..5f409dc30aca 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste.h @@ -217,5 +217,6 @@ struct mlx5dr_ste_ctx { struct mlx5dr_ste_ctx *mlx5dr_ste_get_ctx_v0(void); struct mlx5dr_ste_ctx *mlx5dr_ste_get_ctx_v1(void); struct mlx5dr_ste_ctx *mlx5dr_ste_get_ctx_v2(void); +struct mlx5dr_ste_ctx *mlx5dr_ste_get_ctx_v3(void); #endif /* _DR_STE_ */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v3.c 
b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v3.c new file mode 100644 index 000000000000..cc60ce1d274e --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v3.c @@ -0,0 +1,221 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */ + +#include "dr_ste_v1.h" +#include "dr_ste_v2.h" + +static void dr_ste_v3_set_encap(u8 *hw_ste_p, u8 *d_action, + u32 reformat_id, int size) +{ + MLX5_SET(ste_double_action_insert_with_ptr_v3, d_action, action_id, + DR_STE_V1_ACTION_ID_INSERT_POINTER); + /* The hardware expects here size in words (2 byte) */ + MLX5_SET(ste_double_action_insert_with_ptr_v3, d_action, size, size / 2); + MLX5_SET(ste_double_action_insert_with_ptr_v3, d_action, pointer, reformat_id); + MLX5_SET(ste_double_action_insert_with_ptr_v3, d_action, attributes, + DR_STE_V1_ACTION_INSERT_PTR_ATTR_ENCAP); + dr_ste_v1_set_reparse(hw_ste_p); +} + +static void dr_ste_v3_set_push_vlan(u8 *ste, u8 *d_action, + u32 vlan_hdr) +{ + MLX5_SET(ste_double_action_insert_with_inline_v3, d_action, action_id, + DR_STE_V1_ACTION_ID_INSERT_INLINE); + /* The hardware expects here offset to vlan header in words (2 byte) */ + MLX5_SET(ste_double_action_insert_with_inline_v3, d_action, start_offset, + HDR_LEN_L2_MACS >> 1); + MLX5_SET(ste_double_action_insert_with_inline_v3, d_action, inline_data, vlan_hdr); + dr_ste_v1_set_reparse(ste); +} + +static void dr_ste_v3_set_pop_vlan(u8 *hw_ste_p, u8 *s_action, + u8 vlans_num) +{ + MLX5_SET(ste_single_action_remove_header_size_v3, s_action, + action_id, DR_STE_V1_ACTION_ID_REMOVE_BY_SIZE); + MLX5_SET(ste_single_action_remove_header_size_v3, s_action, + start_anchor, DR_STE_HEADER_ANCHOR_1ST_VLAN); + /* The hardware expects here size in words (2 byte) */ + MLX5_SET(ste_single_action_remove_header_size_v3, s_action, + remove_size, (HDR_LEN_L2_VLAN >> 1) * vlans_num); + + dr_ste_v1_set_reparse(hw_ste_p); +} + +static void dr_ste_v3_set_encap_l3(u8 *hw_ste_p, + u8 *frst_s_action, + u8 *scnd_d_action, + u32 reformat_id, + int size) +{ + /* Remove L2 headers */ + MLX5_SET(ste_single_action_remove_header_v3, frst_s_action, action_id, + DR_STE_V1_ACTION_ID_REMOVE_HEADER_TO_HEADER); + MLX5_SET(ste_single_action_remove_header_v3, frst_s_action, end_anchor, + DR_STE_HEADER_ANCHOR_IPV6_IPV4); + + /* Encapsulate with given reformat ID */ + MLX5_SET(ste_double_action_insert_with_ptr_v3, scnd_d_action, action_id, + DR_STE_V1_ACTION_ID_INSERT_POINTER); + /* The hardware expects here size in words (2 byte) */ + MLX5_SET(ste_double_action_insert_with_ptr_v3, scnd_d_action, size, size / 2); + MLX5_SET(ste_double_action_insert_with_ptr_v3, scnd_d_action, pointer, reformat_id); + MLX5_SET(ste_double_action_insert_with_ptr_v3, scnd_d_action, attributes, + DR_STE_V1_ACTION_INSERT_PTR_ATTR_ENCAP); + + dr_ste_v1_set_reparse(hw_ste_p); +} + +static void dr_ste_v3_set_rx_decap(u8 *hw_ste_p, u8 *s_action) +{ + MLX5_SET(ste_single_action_remove_header_v3, s_action, action_id, + DR_STE_V1_ACTION_ID_REMOVE_HEADER_TO_HEADER); + MLX5_SET(ste_single_action_remove_header_v3, s_action, decap, 1); + MLX5_SET(ste_single_action_remove_header_v3, s_action, vni_to_cqe, 1); + MLX5_SET(ste_single_action_remove_header_v3, s_action, end_anchor, + DR_STE_HEADER_ANCHOR_INNER_MAC); + + dr_ste_v1_set_reparse(hw_ste_p); +} + +static int +dr_ste_v3_set_action_decap_l3_list(void *data, u32 data_sz, + u8 *hw_action, u32 hw_action_sz, + uint16_t *used_hw_action_num) +{ + u8 
padded_data[DR_STE_L2_HDR_MAX_SZ] = {}; + void *data_ptr = padded_data; + u16 used_actions = 0; + u32 inline_data_sz; + u32 i; + + if (hw_action_sz / DR_STE_ACTION_DOUBLE_SZ < DR_STE_DECAP_L3_ACTION_NUM) + return -EINVAL; + + inline_data_sz = + MLX5_FLD_SZ_BYTES(ste_double_action_insert_with_inline_v3, inline_data); + + /* Add an alignment padding */ + memcpy(padded_data + data_sz % inline_data_sz, data, data_sz); + + /* Remove L2L3 outer headers */ + MLX5_SET(ste_single_action_remove_header_v3, hw_action, action_id, + DR_STE_V1_ACTION_ID_REMOVE_HEADER_TO_HEADER); + MLX5_SET(ste_single_action_remove_header_v3, hw_action, decap, 1); + MLX5_SET(ste_single_action_remove_header_v3, hw_action, vni_to_cqe, 1); + MLX5_SET(ste_single_action_remove_header_v3, hw_action, end_anchor, + DR_STE_HEADER_ANCHOR_INNER_IPV6_IPV4); + hw_action += DR_STE_ACTION_DOUBLE_SZ; + used_actions++; /* Remove and NOP are a single double action */ + + /* Point to the last dword of the header */ + data_ptr += (data_sz / inline_data_sz) * inline_data_sz; + + /* Add the new header using inline action 4Byte at a time, the header + * is added in reversed order to the beginning of the packet to avoid + * incorrect parsing by the HW. Since header is 14B or 18B an extra + * two bytes are padded and later removed. + */ + for (i = 0; i < data_sz / inline_data_sz + 1; i++) { + void *addr_inline; + + MLX5_SET(ste_double_action_insert_with_inline_v3, hw_action, action_id, + DR_STE_V1_ACTION_ID_INSERT_INLINE); + /* The hardware expects here offset to words (2 bytes) */ + MLX5_SET(ste_double_action_insert_with_inline_v3, hw_action, start_offset, 0); + + /* Copy bytes one by one to avoid endianness problem */ + addr_inline = MLX5_ADDR_OF(ste_double_action_insert_with_inline_v3, + hw_action, inline_data); + memcpy(addr_inline, data_ptr - i * inline_data_sz, inline_data_sz); + hw_action += DR_STE_ACTION_DOUBLE_SZ; + used_actions++; + } + + /* Remove first 2 extra bytes */ + MLX5_SET(ste_single_action_remove_header_size_v3, hw_action, action_id, + DR_STE_V1_ACTION_ID_REMOVE_BY_SIZE); + MLX5_SET(ste_single_action_remove_header_size_v3, hw_action, start_offset, 0); + /* The hardware expects here size in words (2 bytes) */ + MLX5_SET(ste_single_action_remove_header_size_v3, hw_action, remove_size, 1); + used_actions++; + + *used_hw_action_num = used_actions; + + return 0; +} + +static struct mlx5dr_ste_ctx ste_ctx_v3 = { + /* Builders */ + .build_eth_l2_src_dst_init = &dr_ste_v1_build_eth_l2_src_dst_init, + .build_eth_l3_ipv6_src_init = &dr_ste_v1_build_eth_l3_ipv6_src_init, + .build_eth_l3_ipv6_dst_init = &dr_ste_v1_build_eth_l3_ipv6_dst_init, + .build_eth_l3_ipv4_5_tuple_init = &dr_ste_v1_build_eth_l3_ipv4_5_tuple_init, + .build_eth_l2_src_init = &dr_ste_v1_build_eth_l2_src_init, + .build_eth_l2_dst_init = &dr_ste_v1_build_eth_l2_dst_init, + .build_eth_l2_tnl_init = &dr_ste_v1_build_eth_l2_tnl_init, + .build_eth_l3_ipv4_misc_init = &dr_ste_v1_build_eth_l3_ipv4_misc_init, + .build_eth_ipv6_l3_l4_init = &dr_ste_v1_build_eth_ipv6_l3_l4_init, + .build_mpls_init = &dr_ste_v1_build_mpls_init, + .build_tnl_gre_init = &dr_ste_v1_build_tnl_gre_init, + .build_tnl_mpls_init = &dr_ste_v1_build_tnl_mpls_init, + .build_tnl_mpls_over_udp_init = &dr_ste_v1_build_tnl_mpls_over_udp_init, + .build_tnl_mpls_over_gre_init = &dr_ste_v1_build_tnl_mpls_over_gre_init, + .build_icmp_init = &dr_ste_v1_build_icmp_init, + .build_general_purpose_init = &dr_ste_v1_build_general_purpose_init, + .build_eth_l4_misc_init = &dr_ste_v1_build_eth_l4_misc_init, + 
.build_tnl_vxlan_gpe_init = &dr_ste_v1_build_flex_parser_tnl_vxlan_gpe_init, + .build_tnl_geneve_init = &dr_ste_v1_build_flex_parser_tnl_geneve_init, + .build_tnl_geneve_tlv_opt_init = &dr_ste_v1_build_flex_parser_tnl_geneve_tlv_opt_init, + .build_tnl_geneve_tlv_opt_exist_init = + &dr_ste_v1_build_flex_parser_tnl_geneve_tlv_opt_exist_init, + .build_register_0_init = &dr_ste_v1_build_register_0_init, + .build_register_1_init = &dr_ste_v1_build_register_1_init, + .build_src_gvmi_qpn_init = &dr_ste_v1_build_src_gvmi_qpn_init, + .build_flex_parser_0_init = &dr_ste_v1_build_flex_parser_0_init, + .build_flex_parser_1_init = &dr_ste_v1_build_flex_parser_1_init, + .build_tnl_gtpu_init = &dr_ste_v1_build_flex_parser_tnl_gtpu_init, + .build_tnl_header_0_1_init = &dr_ste_v1_build_tnl_header_0_1_init, + .build_tnl_gtpu_flex_parser_0_init = &dr_ste_v1_build_tnl_gtpu_flex_parser_0_init, + .build_tnl_gtpu_flex_parser_1_init = &dr_ste_v1_build_tnl_gtpu_flex_parser_1_init, + + /* Getters and Setters */ + .ste_init = &dr_ste_v1_init, + .set_next_lu_type = &dr_ste_v1_set_next_lu_type, + .get_next_lu_type = &dr_ste_v1_get_next_lu_type, + .is_miss_addr_set = &dr_ste_v1_is_miss_addr_set, + .set_miss_addr = &dr_ste_v1_set_miss_addr, + .get_miss_addr = &dr_ste_v1_get_miss_addr, + .set_hit_addr = &dr_ste_v1_set_hit_addr, + .set_byte_mask = &dr_ste_v1_set_byte_mask, + .get_byte_mask = &dr_ste_v1_get_byte_mask, + + /* Actions */ + .actions_caps = DR_STE_CTX_ACTION_CAP_TX_POP | + DR_STE_CTX_ACTION_CAP_RX_PUSH | + DR_STE_CTX_ACTION_CAP_RX_ENCAP, + .set_actions_rx = &dr_ste_v1_set_actions_rx, + .set_actions_tx = &dr_ste_v1_set_actions_tx, + .modify_field_arr_sz = ARRAY_SIZE(dr_ste_v2_action_modify_field_arr), + .modify_field_arr = dr_ste_v2_action_modify_field_arr, + .set_action_set = &dr_ste_v1_set_action_set, + .set_action_add = &dr_ste_v1_set_action_add, + .set_action_copy = &dr_ste_v1_set_action_copy, + .set_action_decap_l3_list = &dr_ste_v3_set_action_decap_l3_list, + .alloc_modify_hdr_chunk = &dr_ste_v1_alloc_modify_hdr_ptrn_arg, + .dealloc_modify_hdr_chunk = &dr_ste_v1_free_modify_hdr_ptrn_arg, + /* Actions bit set */ + .set_encap = &dr_ste_v3_set_encap, + .set_push_vlan = &dr_ste_v3_set_push_vlan, + .set_pop_vlan = &dr_ste_v3_set_pop_vlan, + .set_rx_decap = &dr_ste_v3_set_rx_decap, + .set_encap_l3 = &dr_ste_v3_set_encap_l3, + /* Send */ + .prepare_for_postsend = &dr_ste_v1_prepare_for_postsend, +}; + +struct mlx5dr_ste_ctx *mlx5dr_ste_get_ctx_v3(void) +{ + return &ste_ctx_v3; +} diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/mlx5_ifc_dr.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/mlx5_ifc_dr.h index fb078fa0f0cc..898c3618ff26 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/mlx5_ifc_dr.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/mlx5_ifc_dr.h @@ -600,4 +600,44 @@ struct mlx5_ifc_ste_double_action_aso_v1_bits { }; }; +struct mlx5_ifc_ste_single_action_remove_header_v3_bits { + u8 action_id[0x8]; + u8 start_anchor[0x7]; + u8 end_anchor[0x7]; + u8 reserved_at_16[0x1]; + u8 outer_l4_remove[0x1]; + u8 reserved_at_18[0x4]; + u8 decap[0x1]; + u8 vni_to_cqe[0x1]; + u8 qos_profile[0x2]; +}; + +struct mlx5_ifc_ste_single_action_remove_header_size_v3_bits { + u8 action_id[0x8]; + u8 start_anchor[0x7]; + u8 start_offset[0x8]; + u8 outer_l4_remove[0x1]; + u8 reserved_at_18[0x2]; + u8 remove_size[0x6]; +}; + +struct mlx5_ifc_ste_double_action_insert_with_inline_v3_bits { + u8 action_id[0x8]; + u8 start_anchor[0x7]; + u8 start_offset[0x8]; + u8 
reserved_at_17[0x9]; + + u8 inline_data[0x20]; +}; + +struct mlx5_ifc_ste_double_action_insert_with_ptr_v3_bits { + u8 action_id[0x8]; + u8 start_anchor[0x7]; + u8 start_offset[0x8]; + u8 size[0x6]; + u8 attributes[0x3]; + + u8 pointer[0x20]; +}; + #endif /* MLX5_IFC_DR_H */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/mlx5dr.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/mlx5dr.h index 3ac7dc67509f..0bb3724c10c2 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/mlx5dr.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/sws/mlx5dr.h @@ -160,7 +160,7 @@ mlx5dr_is_supported(struct mlx5_core_dev *dev) (MLX5_CAP_ESW_FLOWTABLE_FDB(dev, sw_owner) || (MLX5_CAP_ESW_FLOWTABLE_FDB(dev, sw_owner_v2) && (MLX5_CAP_GEN(dev, steering_format_version) <= - MLX5_STEERING_FORMAT_CONNECTX_7))); + MLX5_STEERING_FORMAT_CONNECTX_8))); } /* buddy functions & structure */ From patchwork Tue Dec 3 20:29:20 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 13892912 X-Patchwork-Delegate: kuba@kernel.org Received: from NAM11-BN8-obe.outbound.protection.outlook.com (mail-bn8nam11on2088.outbound.protection.outlook.com [40.107.236.88]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E02F4207A20; Tue, 3 Dec 2024 20:30:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=fail smtp.client-ip=40.107.236.88 ARC-Seal: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733257858; cv=fail; b=DW6iIu9GEXxTeYTs+ipAF+an2rIi614GvJMXWSP7mTO/3q+kZ9syciqmniEPhAPlp2CaLJEkQ/7akW8PO1508YO+kInGAti4oF5E5b9Di4oTl/I5NcJMS9IER/ZdXJtUR2mQW0TES8NUbi5YcgttwQa0s72cc5fEbrkNnMs60d8= ARC-Message-Signature: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733257858; c=relaxed/simple; bh=LUZC/QOtbMVrjzcS7lknKQcZx+E5bbMBxqlWCH94+m8=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=E+Ab+HU1ZMEUHDBKhzW9DO8GihFdJN7kXqRsIsv56nul7Wn1UnVnj109CECPXSvSOP3l/PMS5hW1n062Rc9jqdcwWKceiBIlxRD4tRYD5YgEBoRxTtykwWgnEd+zCzPRJfmo0D1qIU1vhIRmjNHtH/KEPxHaFArjWo4hIuVET9k= ARC-Authentication-Results: i=2; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=nvidia.com; spf=fail smtp.mailfrom=nvidia.com; dkim=pass (2048-bit key) header.d=Nvidia.com header.i=@Nvidia.com header.b=Z7T758gc; arc=fail smtp.client-ip=40.107.236.88 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=nvidia.com Authentication-Results: smtp.subspace.kernel.org; spf=fail smtp.mailfrom=nvidia.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=Nvidia.com header.i=@Nvidia.com header.b="Z7T758gc" ARC-Seal: i=1; a=rsa-sha256; s=arcselector10001; d=microsoft.com; cv=none; b=J6MDA3atUBl+0Kl6xcVvjM082cSqrzFViW5fytWjIm5JRnhaNbIIkrSh24zXbUZu5lhLl9rSZbSbDnETCmbTCTeZ+R9TpDnVdA6pLFvQ6Sr+Wmj4e0TYa6HYSc8A7oSXVazdlsHAIHdM9HBmI9mFma4QknGB1gu3dOOktoO89N0i/ujOjFtcb9Hzo8i9bPWXEYewxHi9KzVTi1RbMajjbBlVJ7BTmrai6+Eq19Ks1MSy6hXx0/hVnZ+lyheiD/9XtyOcmUGgr5OCCsNM7LrqIJjr9+n9fmJvHjLFL+WgMZhhoxstqVziQftaFapyqGTsj+VG2PLpoBIaVEmHnA/LgA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector10001; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; 
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, "Andrew Lunn"
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Carolina Jubran, "Cosmin Ratiu", Jiri Pirko, Tariq Toukan
Subject: [PATCH net-next V4 07/11] devlink: Extend devlink rate API with traffic classes bandwidth management
Date: Tue, 3 Dec 2024 22:29:20 +0200
Message-ID: <20241203202924.228440-8-tariqt@nvidia.com>
In-Reply-To: <20241203202924.228440-1-tariqt@nvidia.com>
References: <20241203202924.228440-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Carolina Jubran

Introduce support for specifying bandwidth proportions between
traffic classes (TC) in the devlink-rate API. This new option allows
users to allocate bandwidth across multiple traffic classes in a
single command.

This feature provides more granular control over traffic management,
especially for scenarios requiring Enhanced Transmission Selection.
Users can now define a specific bandwidth share for each traffic
class, such as allocating 20% for TC0 (TCP/UDP) and 80% for TC5 (RoCE).

Example:
DEV=pci/0000:08:00.0

$ devlink port function rate add $DEV/vfs_group tx_share 10Gbit \
      tx_max 50Gbit tc-bw 0:20 1:0 2:0 3:0 4:0 5:80 6:0 7:0

$ devlink port function rate set $DEV/vfs_group \
      tc-bw 0:20 1:0 2:0 3:0 4:0 5:20 6:60 7:0

Example usage with ynl:

./tools/net/ynl/cli.py --spec Documentation/netlink/specs/devlink.yaml \
--do rate-set --json '{
    "bus-name": "pci",
    "dev-name": "0000:08:00.0",
    "port-index": 1,
    "rate-tc-bws": [
        {"rate-tc-index": 0, "rate-tc-bw": 50},
        {"rate-tc-index": 1, "rate-tc-bw": 50},
        {"rate-tc-index": 2, "rate-tc-bw": 0},
        {"rate-tc-index": 3, "rate-tc-bw": 0},
        {"rate-tc-index": 4, "rate-tc-bw": 0},
        {"rate-tc-index": 5, "rate-tc-bw": 0},
        {"rate-tc-index": 6, "rate-tc-bw": 0},
        {"rate-tc-index": 7, "rate-tc-bw": 0}
    ]
}'

./tools/net/ynl/cli.py --spec Documentation/netlink/specs/devlink.yaml \
--do rate-get --json '{
    "bus-name": "pci",
    "dev-name": "0000:08:00.0",
    "port-index": 1
}'

output for rate-get:
{'bus-name': 'pci',
 'dev-name': '0000:08:00.0',
 'port-index': 1,
 'rate-tc-bws': [{'rate-tc-bw': 50, 'rate-tc-index': 0},
                 {'rate-tc-bw': 50, 'rate-tc-index': 1},
                 {'rate-tc-bw': 0, 'rate-tc-index': 2},
                 {'rate-tc-bw': 0, 'rate-tc-index': 3},
                 {'rate-tc-bw': 0, 'rate-tc-index': 4},
                 {'rate-tc-bw': 0, 'rate-tc-index': 5},
                 {'rate-tc-bw': 0, 'rate-tc-index': 6},
                 {'rate-tc-bw': 0, 'rate-tc-index': 7}],
 'rate-tx-max': 0,
 'rate-tx-priority': 0,
 'rate-tx-share': 0,
 'rate-tx-weight': 0,
 'rate-type': 'leaf'}

Signed-off-by: Carolina Jubran
Reviewed-by: Cosmin Ratiu
Reviewed-by: Jiri Pirko
Signed-off-by: Tariq Toukan
---
 Documentation/netlink/specs/devlink.yaml | 28 ++++-
 include/net/devlink.h | 7 ++
 include/uapi/linux/devlink.h | 4 +
 net/devlink/netlink_gen.c | 15 ++-
 net/devlink/netlink_gen.h | 1 +
 net/devlink/rate.c | 124 +++++++++++++++++++++++
 6 files changed, 174 insertions(+), 5 deletions(-)

diff --git a/Documentation/netlink/specs/devlink.yaml b/Documentation/netlink/specs/devlink.yaml index 09fbb4c03fc8..7fceb8fdc73b 100644 --- a/Documentation/netlink/specs/devlink.yaml +++ b/Documentation/netlink/specs/devlink.yaml @@ -820,7 +820,23 @@ attribute-sets: - name: region-direct type: flag - + - + name: rate-tc-bws + type: nest + multi-attr: true + nested-attributes: dl-rate-tc-bws + - + name: rate-tc-index + type: u8 + - + name: rate-tc-bw + type: u32 + doc: | + Specifies the bandwidth allocation for the Traffic Class as a + percentage.
+ checks: + min: 0 + max: 100 - name: dl-dev-stats subset-of: devlink @@ -1225,6 +1241,14 @@ attribute-sets: - name: flash type: flag + - + name: dl-rate-tc-bws + subset-of: devlink + attributes: + - + name: rate-tc-index + - + name: rate-tc-bw operations: enum-model: directional @@ -2149,6 +2173,7 @@ operations: - rate-tx-priority - rate-tx-weight - rate-parent-node-name + - rate-tc-bws - name: rate-new @@ -2169,6 +2194,7 @@ operations: - rate-tx-priority - rate-tx-weight - rate-parent-node-name + - rate-tc-bws - name: rate-del diff --git a/include/net/devlink.h b/include/net/devlink.h index fbb9a2668e24..277b826cdd60 100644 --- a/include/net/devlink.h +++ b/include/net/devlink.h @@ -20,6 +20,7 @@ #include #include #include +#include struct devlink; struct devlink_linecard; @@ -117,6 +118,8 @@ struct devlink_rate { u32 tx_priority; u32 tx_weight; + + u32 tc_bw[IEEE_8021QAZ_MAX_TCS]; }; struct devlink_port { @@ -1469,6 +1472,8 @@ struct devlink_ops { u32 tx_priority, struct netlink_ext_ack *extack); int (*rate_leaf_tx_weight_set)(struct devlink_rate *devlink_rate, void *priv, u32 tx_weight, struct netlink_ext_ack *extack); + int (*rate_leaf_tc_bw_set)(struct devlink_rate *devlink_rate, void *priv, + u32 *tc_bw, struct netlink_ext_ack *extack); int (*rate_node_tx_share_set)(struct devlink_rate *devlink_rate, void *priv, u64 tx_share, struct netlink_ext_ack *extack); int (*rate_node_tx_max_set)(struct devlink_rate *devlink_rate, void *priv, @@ -1477,6 +1482,8 @@ struct devlink_ops { u32 tx_priority, struct netlink_ext_ack *extack); int (*rate_node_tx_weight_set)(struct devlink_rate *devlink_rate, void *priv, u32 tx_weight, struct netlink_ext_ack *extack); + int (*rate_node_tc_bw_set)(struct devlink_rate *devlink_rate, void *priv, + u32 *tc_bw, struct netlink_ext_ack *extack); int (*rate_node_new)(struct devlink_rate *rate_node, void **priv, struct netlink_ext_ack *extack); int (*rate_node_del)(struct devlink_rate *rate_node, void *priv, diff --git a/include/uapi/linux/devlink.h b/include/uapi/linux/devlink.h index 9401aa343673..b3b538c67c34 100644 --- a/include/uapi/linux/devlink.h +++ b/include/uapi/linux/devlink.h @@ -614,6 +614,10 @@ enum devlink_attr { DEVLINK_ATTR_REGION_DIRECT, /* flag */ + DEVLINK_ATTR_RATE_TC_BWS, /* nested */ + DEVLINK_ATTR_RATE_TC_INDEX, /* u8 */ + DEVLINK_ATTR_RATE_TC_BW, /* u32 */ + /* Add new attributes above here, update the spec in * Documentation/netlink/specs/devlink.yaml and re-generate * net/devlink/netlink_gen.c. 
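For context, the nest defined above is a multi-attr DEVLINK_ATTR_RATE_TC_BWS, where each instance carries one DEVLINK_ATTR_RATE_TC_INDEX (u8) / DEVLINK_ATTR_RATE_TC_BW (u32) pair. Below is a minimal userspace encoding sketch using libmnl; it is an illustration only and not part of this patch, and it assumes the caller has already built the genetlink/devlink request with the bus/dev/port attributes:

#include <stdint.h>
#include <libmnl/libmnl.h>
#include <linux/devlink.h>

/* Append one DEVLINK_ATTR_RATE_TC_BWS nest per traffic class to an
 * already-prepared DEVLINK_CMD_RATE_SET request.
 */
static void put_rate_tc_bws(struct nlmsghdr *nlh, const uint32_t *tc_bw,
			    int num_tcs)
{
	for (int i = 0; i < num_tcs; i++) {
		struct nlattr *nest;

		nest = mnl_attr_nest_start(nlh, DEVLINK_ATTR_RATE_TC_BWS);
		mnl_attr_put_u8(nlh, DEVLINK_ATTR_RATE_TC_INDEX, i);
		mnl_attr_put_u32(nlh, DEVLINK_ATTR_RATE_TC_BW, tc_bw[i]);
		mnl_attr_nest_end(nlh, nest);
	}
}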
diff --git a/net/devlink/netlink_gen.c b/net/devlink/netlink_gen.c index f9786d51f68f..b7b7829175dc 100644 --- a/net/devlink/netlink_gen.c +++ b/net/devlink/netlink_gen.c @@ -18,6 +18,11 @@ const struct nla_policy devlink_dl_port_function_nl_policy[DEVLINK_PORT_FN_ATTR_ [DEVLINK_PORT_FN_ATTR_CAPS] = NLA_POLICY_BITFIELD32(15), }; +const struct nla_policy devlink_dl_rate_tc_bws_nl_policy[DEVLINK_ATTR_RATE_TC_BW + 1] = { + [DEVLINK_ATTR_RATE_TC_INDEX] = { .type = NLA_U8, }, + [DEVLINK_ATTR_RATE_TC_BW] = NLA_POLICY_RANGE(NLA_U32, 0, 100), +}; + const struct nla_policy devlink_dl_selftest_id_nl_policy[DEVLINK_ATTR_SELFTEST_ID_FLASH + 1] = { [DEVLINK_ATTR_SELFTEST_ID_FLASH] = { .type = NLA_FLAG, }, }; @@ -496,7 +501,7 @@ static const struct nla_policy devlink_rate_get_dump_nl_policy[DEVLINK_ATTR_DEV_ }; /* DEVLINK_CMD_RATE_SET - do */ -static const struct nla_policy devlink_rate_set_nl_policy[DEVLINK_ATTR_RATE_TX_WEIGHT + 1] = { +static const struct nla_policy devlink_rate_set_nl_policy[DEVLINK_ATTR_RATE_TC_BWS + 1] = { [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, }, [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, }, [DEVLINK_ATTR_RATE_NODE_NAME] = { .type = NLA_NUL_STRING, }, @@ -505,10 +510,11 @@ static const struct nla_policy devlink_rate_set_nl_policy[DEVLINK_ATTR_RATE_TX_W [DEVLINK_ATTR_RATE_TX_PRIORITY] = { .type = NLA_U32, }, [DEVLINK_ATTR_RATE_TX_WEIGHT] = { .type = NLA_U32, }, [DEVLINK_ATTR_RATE_PARENT_NODE_NAME] = { .type = NLA_NUL_STRING, }, + [DEVLINK_ATTR_RATE_TC_BWS] = NLA_POLICY_NESTED(devlink_dl_rate_tc_bws_nl_policy), }; /* DEVLINK_CMD_RATE_NEW - do */ -static const struct nla_policy devlink_rate_new_nl_policy[DEVLINK_ATTR_RATE_TX_WEIGHT + 1] = { +static const struct nla_policy devlink_rate_new_nl_policy[DEVLINK_ATTR_RATE_TC_BWS + 1] = { [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, }, [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, }, [DEVLINK_ATTR_RATE_NODE_NAME] = { .type = NLA_NUL_STRING, }, @@ -517,6 +523,7 @@ static const struct nla_policy devlink_rate_new_nl_policy[DEVLINK_ATTR_RATE_TX_W [DEVLINK_ATTR_RATE_TX_PRIORITY] = { .type = NLA_U32, }, [DEVLINK_ATTR_RATE_TX_WEIGHT] = { .type = NLA_U32, }, [DEVLINK_ATTR_RATE_PARENT_NODE_NAME] = { .type = NLA_NUL_STRING, }, + [DEVLINK_ATTR_RATE_TC_BWS] = NLA_POLICY_NESTED(devlink_dl_rate_tc_bws_nl_policy), }; /* DEVLINK_CMD_RATE_DEL - do */ @@ -1164,7 +1171,7 @@ const struct genl_split_ops devlink_nl_ops[74] = { .doit = devlink_nl_rate_set_doit, .post_doit = devlink_nl_post_doit, .policy = devlink_rate_set_nl_policy, - .maxattr = DEVLINK_ATTR_RATE_TX_WEIGHT, + .maxattr = DEVLINK_ATTR_RATE_TC_BWS, .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO, }, { @@ -1174,7 +1181,7 @@ const struct genl_split_ops devlink_nl_ops[74] = { .doit = devlink_nl_rate_new_doit, .post_doit = devlink_nl_post_doit, .policy = devlink_rate_new_nl_policy, - .maxattr = DEVLINK_ATTR_RATE_TX_WEIGHT, + .maxattr = DEVLINK_ATTR_RATE_TC_BWS, .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO, }, { diff --git a/net/devlink/netlink_gen.h b/net/devlink/netlink_gen.h index 8f2bd50ddf5e..fb733b5d4ff1 100644 --- a/net/devlink/netlink_gen.h +++ b/net/devlink/netlink_gen.h @@ -13,6 +13,7 @@ /* Common nested types */ extern const struct nla_policy devlink_dl_port_function_nl_policy[DEVLINK_PORT_FN_ATTR_CAPS + 1]; +extern const struct nla_policy devlink_dl_rate_tc_bws_nl_policy[DEVLINK_ATTR_RATE_TC_BW + 1]; extern const struct nla_policy devlink_dl_selftest_id_nl_policy[DEVLINK_ATTR_SELFTEST_ID_FLASH + 1]; /* Ops table for devlink */ diff --git a/net/devlink/rate.c 
b/net/devlink/rate.c index 8828ffaf6cbc..372eebd807d6 100644 --- a/net/devlink/rate.c +++ b/net/devlink/rate.c @@ -80,6 +80,29 @@ devlink_rate_get_from_info(struct devlink *devlink, struct genl_info *info) return ERR_PTR(-EINVAL); } +static int devlink_rate_put_tc_bws(struct sk_buff *msg, u32 *tc_bw) +{ + struct nlattr *nla_tc_bw; + int i; + + for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) { + nla_tc_bw = nla_nest_start(msg, DEVLINK_ATTR_RATE_TC_BWS); + if (!nla_tc_bw) + return -EMSGSIZE; + + if (nla_put_u8(msg, DEVLINK_ATTR_RATE_TC_INDEX, i) || + nla_put_u32(msg, DEVLINK_ATTR_RATE_TC_BW, tc_bw[i])) + goto nla_put_failure; + + nla_nest_end(msg, nla_tc_bw); + } + return 0; + +nla_put_failure: + nla_nest_cancel(msg, nla_tc_bw); + return -EMSGSIZE; +} + static int devlink_nl_rate_fill(struct sk_buff *msg, struct devlink_rate *devlink_rate, enum devlink_command cmd, u32 portid, u32 seq, @@ -129,6 +152,9 @@ static int devlink_nl_rate_fill(struct sk_buff *msg, devlink_rate->parent->name)) goto nla_put_failure; + if (devlink_rate_put_tc_bws(msg, devlink_rate->tc_bw)) + goto nla_put_failure; + genlmsg_end(msg, hdr); return 0; @@ -316,6 +342,86 @@ devlink_nl_rate_parent_node_set(struct devlink_rate *devlink_rate, return 0; } +static int devlink_nl_rate_tc_bw_parse(struct nlattr *parent_nest, u32 *tc_bw, + unsigned long *bitmap, struct netlink_ext_ack *extack) +{ + struct nlattr *tb[DEVLINK_ATTR_MAX + 1]; + u8 tc_index; + + nla_parse_nested(tb, DEVLINK_ATTR_MAX, parent_nest, devlink_dl_rate_tc_bws_nl_policy, + extack); + if (!tb[DEVLINK_ATTR_RATE_TC_INDEX]) { + NL_SET_ERR_MSG(extack, "Traffic class index is expected"); + return -EINVAL; + } + + tc_index = nla_get_u8(tb[DEVLINK_ATTR_RATE_TC_INDEX]); + + if (tc_index >= IEEE_8021QAZ_MAX_TCS) { + NL_SET_ERR_MSG_FMT(extack, + "Provided traffic class index (%u) exceeds the maximum allowed value (%u)", + tc_index, IEEE_8021QAZ_MAX_TCS - 1); + return -EINVAL; + } + + if (!tb[DEVLINK_ATTR_RATE_TC_BW]) { + NL_SET_ERR_MSG(extack, "Traffic class bandwidth is expected"); + return -EINVAL; + } + + if (test_and_set_bit(tc_index, bitmap)) { + NL_SET_ERR_MSG(extack, "Duplicate traffic class index specified"); + return -EINVAL; + } + + tc_bw[tc_index] = nla_get_u32(tb[DEVLINK_ATTR_RATE_TC_BW]); + + return 0; +} + +static int devlink_nl_rate_tc_bw_set(struct devlink_rate *devlink_rate, + struct genl_info *info) +{ + DECLARE_BITMAP(bitmap, IEEE_8021QAZ_MAX_TCS) = {}; + struct devlink *devlink = devlink_rate->devlink; + const struct devlink_ops *ops = devlink->ops; + u32 tc_bw[IEEE_8021QAZ_MAX_TCS] = {}; + int rem, err = -EOPNOTSUPP, i; + struct nlattr *attr; + + nla_for_each_attr(attr, genlmsg_data(info->genlhdr), + genlmsg_len(info->genlhdr), rem) { + if (nla_type(attr) == DEVLINK_ATTR_RATE_TC_BWS) { + err = devlink_nl_rate_tc_bw_parse(attr, tc_bw, bitmap, info->extack); + if (err) + return err; + } + } + + for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) { + if (!test_bit(i, bitmap)) { + NL_SET_ERR_MSG_FMT(info->extack, + "Incomplete traffic class bandwidth values, all %u traffic classes must be specified", + IEEE_8021QAZ_MAX_TCS); + return -EINVAL; + } + } + + if (devlink_rate_is_leaf(devlink_rate)) + err = ops->rate_leaf_tc_bw_set(devlink_rate, devlink_rate->priv, tc_bw, + info->extack); + else if (devlink_rate_is_node(devlink_rate)) + err = ops->rate_node_tc_bw_set(devlink_rate, devlink_rate->priv, tc_bw, + info->extack); + + if (err) + return err; + + memcpy(devlink_rate->tc_bw, tc_bw, sizeof(tc_bw)); + + return 0; +} + static int devlink_nl_rate_set(struct devlink_rate 
*devlink_rate, const struct devlink_ops *ops, struct genl_info *info) @@ -388,6 +494,12 @@ static int devlink_nl_rate_set(struct devlink_rate *devlink_rate, return err; } + if (attrs[DEVLINK_ATTR_RATE_TC_BWS]) { + err = devlink_nl_rate_tc_bw_set(devlink_rate, info); + if (err) + return err; + } + return 0; } @@ -423,6 +535,12 @@ static bool devlink_rate_set_ops_supported(const struct devlink_ops *ops, "TX weight set isn't supported for the leafs"); return false; } + if (attrs[DEVLINK_ATTR_RATE_TC_BWS] && !ops->rate_leaf_tc_bw_set) { + NL_SET_ERR_MSG_ATTR(info->extack, + attrs[DEVLINK_ATTR_RATE_TC_BWS], + "TC bandwidth set isn't supported for the leafs"); + return false; + } } else if (type == DEVLINK_RATE_TYPE_NODE) { if (attrs[DEVLINK_ATTR_RATE_TX_SHARE] && !ops->rate_node_tx_share_set) { NL_SET_ERR_MSG(info->extack, "TX share set isn't supported for the nodes"); @@ -449,6 +567,12 @@ static bool devlink_rate_set_ops_supported(const struct devlink_ops *ops, "TX weight set isn't supported for the nodes"); return false; } + if (attrs[DEVLINK_ATTR_RATE_TC_BWS] && !ops->rate_node_tc_bw_set) { + NL_SET_ERR_MSG_ATTR(info->extack, + attrs[DEVLINK_ATTR_RATE_TC_BWS], + "TC bandwidth set isn't supported for the nodes"); + return false; + } } else { WARN(1, "Unknown type of rate object"); return false;
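The new rate_{leaf,node}_tc_bw_set ops receive a tc_bw array of IEEE_8021QAZ_MAX_TCS (8) entries that has already passed the core checks above (every traffic class specified exactly once, each share in the 0-100 range), and devlink copies the array into the rate object once the driver accepts it. A minimal sketch of a driver-side node callback, using a hypothetical foo driver purely to illustrate the contract (mlx5's actual implementation follows in the next patches):

static int foo_rate_node_tc_bw_set(struct devlink_rate *rate_node, void *priv,
				   u32 *tc_bw, struct netlink_ext_ack *extack)
{
	struct foo_sched_node *node = priv;	/* driver-private cookie */
	int i;

	/* Refuse shares on traffic classes the hardware does not expose. */
	for (i = foo_num_hw_tcs(node); i < IEEE_8021QAZ_MAX_TCS; i++) {
		if (tc_bw[i]) {
			NL_SET_ERR_MSG_MOD(extack,
					   "Traffic class is not supported by the device");
			return -EOPNOTSUPP;
		}
	}

	/* Program the hardware scheduler; on success devlink core caches
	 * tc_bw in the rate object.
	 */
	return foo_hw_program_tc_bw(node, tc_bw);
}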
From patchwork Tue Dec 3 20:29:21 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13892913
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, "Andrew Lunn"
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Carolina Jubran, "Cosmin Ratiu", Tariq Toukan
Subject: [PATCH net-next V4 08/11] net/mlx5: Add no-op implementation for setting tc-bw on rate objects
Date: Tue, 3 Dec 2024 22:29:21 +0200
Message-ID: <20241203202924.228440-9-tariqt@nvidia.com>
In-Reply-To: <20241203202924.228440-1-tariqt@nvidia.com>
References: <20241203202924.228440-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
From: Carolina Jubran

Introduce `mlx5_esw_devlink_rate_node_tc_bw_set()` and
`mlx5_esw_devlink_rate_leaf_tc_bw_set()` with no-op logic.

Future patches will add support for setting traffic class bandwidth
on rate objects.

Signed-off-by: Carolina Jubran
Reviewed-by: Cosmin Ratiu
Signed-off-by: Tariq Toukan
---
 drivers/net/ethernet/mellanox/mlx5/core/devlink.c | 2 ++
 drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c | 14 ++++++++++++++
 drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h | 4 ++++
 3 files changed, 20 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c index 98d4306929f3..728d5c06d612 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c @@ -320,6 +320,8 @@ static const struct devlink_ops mlx5_devlink_ops = { .eswitch_encap_mode_get = mlx5_devlink_eswitch_encap_mode_get, .rate_leaf_tx_share_set = mlx5_esw_devlink_rate_leaf_tx_share_set, .rate_leaf_tx_max_set = mlx5_esw_devlink_rate_leaf_tx_max_set, + .rate_leaf_tc_bw_set = mlx5_esw_devlink_rate_leaf_tc_bw_set, + .rate_node_tc_bw_set = mlx5_esw_devlink_rate_node_tc_bw_set, .rate_node_tx_share_set = mlx5_esw_devlink_rate_node_tx_share_set, .rate_node_tx_max_set = mlx5_esw_devlink_rate_node_tx_max_set, .rate_node_new = mlx5_esw_devlink_rate_node_new,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c index 8b7c843446e1..db112a87b7ee 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c @@ -882,6 +882,20 @@ int mlx5_esw_devlink_rate_leaf_tx_max_set(struct devlink_rate *rate_leaf, void * return err; } +int mlx5_esw_devlink_rate_leaf_tc_bw_set(struct devlink_rate *rate_leaf, void *priv, + u32 *tc_bw, struct netlink_ext_ack *extack) +{ + NL_SET_ERR_MSG_MOD(extack, "TC bandwidth shares are not supported on leafs"); + return -EOPNOTSUPP; +} + +int mlx5_esw_devlink_rate_node_tc_bw_set(struct devlink_rate *rate_node, void *priv, + u32 *tc_bw, struct netlink_ext_ack *extack) +{ + NL_SET_ERR_MSG_MOD(extack, "TC bandwidth shares are not supported on nodes"); + return -EOPNOTSUPP; +} + int mlx5_esw_devlink_rate_node_tx_share_set(struct devlink_rate *rate_node, void *priv, u64 tx_share, struct netlink_ext_ack *extack) {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h index 6eb8f6a648c8..0239f10f95e7 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h @@ -21,6 +21,10 @@ int mlx5_esw_devlink_rate_leaf_tx_share_set(struct devlink_rate *rate_leaf, void u64 tx_share, struct netlink_ext_ack *extack); int mlx5_esw_devlink_rate_leaf_tx_max_set(struct devlink_rate *rate_leaf, void *priv, u64 tx_max, struct netlink_ext_ack *extack); +int mlx5_esw_devlink_rate_leaf_tc_bw_set(struct devlink_rate *rate_node, void *priv, + u32 *tc_bw, struct netlink_ext_ack *extack); +int mlx5_esw_devlink_rate_node_tc_bw_set(struct devlink_rate *rate_node, void *priv, + u32 *tc_bw, struct netlink_ext_ack *extack); int mlx5_esw_devlink_rate_node_tx_share_set(struct devlink_rate *rate_node, void *priv, u64 tx_share, struct netlink_ext_ack *extack); int mlx5_esw_devlink_rate_node_tx_max_set(struct devlink_rate *rate_node, void *priv,
From patchwork Tue Dec 3 20:29:22 2024
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13892914
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, "Andrew Lunn"
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Carolina Jubran, "Cosmin Ratiu", Tariq Toukan
Subject: [PATCH net-next V4 09/11] net/mlx5: Add support for setting tc-bw on nodes
Date: Tue, 3 Dec 2024 22:29:22 +0200
Message-ID: <20241203202924.228440-10-tariqt@nvidia.com>
In-Reply-To: <20241203202924.228440-1-tariqt@nvidia.com>
References: <20241203202924.228440-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
From: Carolina Jubran

Introduce support for enabling and disabling Traffic Class (TC)
arbitration for existing devlink rate nodes. This patch adds support
for a new scheduling node type, `SCHED_NODE_TYPE_TC_ARBITER_TSAR`.

Key changes include:

- New helper functions for transitioning existing rate nodes to TC
  arbiter nodes and vice versa. These functions handle the allocation
  of TC arbiter nodes, copying of child nodes, and restoring vport QoS
  settings when TC arbitration is disabled.

- Implementation of `mlx5_esw_devlink_rate_node_tc_bw_set()` to manage
  tc-bw configuration on nodes.

- Introduced stubs for `esw_qos_tc_arbiter_scheduling_setup()` and
  `esw_qos_tc_arbiter_scheduling_teardown()`, which will be extended in
  future patches to provide full support for tc-bw on devlink rate
  objects.

- Validation functions for tc-bw settings, allowing graceful handling
  of unsupported traffic class bandwidth configurations.

- Updated `__esw_qos_alloc_node()` to insert the new node into the
  parent’s children list only if the parent is not NULL. For the root
  TSAR, the new node is inserted directly after the allocation call.

This patch lays the groundwork for future support for configuring tc-bw
on devlink rate nodes. Although the infrastructure is in place, full
support for tc-bw is not yet implemented; attempts to set tc-bw on
nodes will return `-EOPNOTSUPP`. No functional changes are introduced
at this stage.

Signed-off-by: Carolina Jubran
Reviewed-by: Cosmin Ratiu
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 260 +++++++++++++++++-
 1 file changed, 246 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c index db112a87b7ee..b17c3a82d175 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c @@ -64,11 +64,13 @@ static void esw_qos_domain_release(struct mlx5_eswitch *esw) enum sched_node_type { SCHED_NODE_TYPE_VPORTS_TSAR, SCHED_NODE_TYPE_VPORT, + SCHED_NODE_TYPE_TC_ARBITER_TSAR, }; static const char * const sched_node_type_str[] = { [SCHED_NODE_TYPE_VPORTS_TSAR] = "vports TSAR", [SCHED_NODE_TYPE_VPORT] = "vport", + [SCHED_NODE_TYPE_TC_ARBITER_TSAR] = "TC Arbiter TSAR", }; struct mlx5_esw_sched_node { @@ -92,6 +94,13 @@ struct mlx5_esw_sched_node { struct mlx5_vport *vport; }; +static int esw_qos_num_tcs(struct mlx5_core_dev *dev) +{ + int num_tcs = mlx5_max_tc(dev) + 1; + + return num_tcs < IEEE_8021QAZ_MAX_TCS ?
num_tcs : IEEE_8021QAZ_MAX_TCS; +} + static void esw_qos_node_set_parent(struct mlx5_esw_sched_node *node, struct mlx5_esw_sched_node *parent) { @@ -101,6 +110,15 @@ esw_qos_node_set_parent(struct mlx5_esw_sched_node *node, struct mlx5_esw_sched_ node->esw = parent->esw; } +static void +esw_qos_nodes_set_parent(struct list_head *nodes, struct mlx5_esw_sched_node *parent) +{ + struct mlx5_esw_sched_node *node, *tmp; + + list_for_each_entry_safe(node, tmp, nodes, entry) + esw_qos_node_set_parent(node, parent); +} + void mlx5_esw_qos_vport_qos_free(struct mlx5_vport *vport) { kfree(vport->qos.sched_node); @@ -126,16 +144,23 @@ mlx5_esw_qos_vport_get_parent(const struct mlx5_vport *vport) static void esw_qos_sched_elem_warn(struct mlx5_esw_sched_node *node, int err, const char *op) { - if (node->vport) { + switch (node->type) { + case SCHED_NODE_TYPE_VPORT: esw_warn(node->esw->dev, "E-Switch %s %s scheduling element failed (vport=%d,err=%d)\n", op, sched_node_type_str[node->type], node->vport->vport, err); - return; + break; + case SCHED_NODE_TYPE_TC_ARBITER_TSAR: + case SCHED_NODE_TYPE_VPORTS_TSAR: + esw_warn(node->esw->dev, + "E-Switch %s %s scheduling element failed (err=%d)\n", + op, sched_node_type_str[node->type], err); + break; + default: + esw_warn(node->esw->dev, + "E-Switch %s scheduling element failed (err=%d)\n", op, err); + break; } - - esw_warn(node->esw->dev, - "E-Switch %s %s scheduling element failed (err=%d)\n", - op, sched_node_type_str[node->type], err); } static int esw_qos_node_create_sched_element(struct mlx5_esw_sched_node *node, void *ctx, @@ -358,7 +383,6 @@ static struct mlx5_esw_sched_node * __esw_qos_alloc_node(struct mlx5_eswitch *esw, u32 tsar_ix, enum sched_node_type type, struct mlx5_esw_sched_node *parent) { - struct list_head *parent_children; struct mlx5_esw_sched_node *node; node = kzalloc(sizeof(*node), GFP_KERNEL); @@ -370,8 +394,10 @@ __esw_qos_alloc_node(struct mlx5_eswitch *esw, u32 tsar_ix, enum sched_node_type node->type = type; node->parent = parent; INIT_LIST_HEAD(&node->children); - parent_children = parent ? &parent->children : &esw->qos.domain->nodes; - list_add_tail(&node->entry, parent_children); + if (parent) + list_add_tail(&node->entry, &parent->children); + else + INIT_LIST_HEAD(&node->entry); return node; } @@ -409,6 +435,7 @@ __esw_qos_create_vports_sched_node(struct mlx5_eswitch *esw, struct mlx5_esw_sch goto err_alloc_node; } + list_add_tail(&node->entry, &esw->qos.domain->nodes); esw_qos_normalize_min_rate(esw, NULL, extack); trace_mlx5_esw_node_qos_create(esw->dev, node, node->ix); @@ -475,11 +502,11 @@ static int esw_qos_create(struct mlx5_eswitch *esw, struct netlink_ext_ack *exta /* The eswitch doesn't support scheduling nodes. * Create a software-only node0 using the root TSAR to attach vport QoS to. 
*/ - if (!__esw_qos_alloc_node(esw, - esw->qos.root_tsar_ix, - SCHED_NODE_TYPE_VPORTS_TSAR, + if (!__esw_qos_alloc_node(esw, esw->qos.root_tsar_ix, SCHED_NODE_TYPE_VPORTS_TSAR, NULL)) esw->qos.node0 = ERR_PTR(-ENOMEM); + else + list_add_tail(&esw->qos.node0->entry, &esw->qos.domain->nodes); } if (IS_ERR(esw->qos.node0)) { err = PTR_ERR(esw->qos.node0); @@ -537,6 +564,17 @@ static void esw_qos_put(struct mlx5_eswitch *esw) esw_qos_destroy(esw); } +static void esw_qos_tc_arbiter_scheduling_teardown(struct mlx5_esw_sched_node *node, + struct netlink_ext_ack *extack) +{} + +static int esw_qos_tc_arbiter_scheduling_setup(struct mlx5_esw_sched_node *node, + struct netlink_ext_ack *extack) +{ + NL_SET_ERR_MSG_MOD(extack, "TC arbiter elements are not supported."); + return -EOPNOTSUPP; +} + static void esw_qos_vport_disable(struct mlx5_vport *vport, struct netlink_ext_ack *extack) { struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node; @@ -699,6 +737,157 @@ static int esw_qos_vport_update_parent(struct mlx5_vport *vport, struct mlx5_esw return err; } +static void esw_qos_switch_vport_tcs_to_vport(struct mlx5_esw_sched_node *tc_arbiter_node, + struct mlx5_esw_sched_node *node, + struct netlink_ext_ack *extack) +{ + struct mlx5_esw_sched_node *vports_tc_node, *vport_tc_node, *tmp; + + vports_tc_node = list_first_entry(&tc_arbiter_node->children, struct mlx5_esw_sched_node, + entry); + + list_for_each_entry_safe(vport_tc_node, tmp, &vports_tc_node->children, entry) + esw_qos_vport_update_parent(vport_tc_node->vport, node, extack); +} + +static int esw_qos_switch_tc_arbiter_node_to_vports(struct mlx5_esw_sched_node *tc_arbiter_node, + struct mlx5_esw_sched_node *node, + struct netlink_ext_ack *extack) +{ + u32 parent_tsar_ix = node->parent ? node->parent->ix : node->esw->qos.root_tsar_ix; + int err; + + err = esw_qos_create_node_sched_elem(node->esw->dev, parent_tsar_ix, &node->ix); + if (err) { + NL_SET_ERR_MSG_MOD(extack, + "Failed to create scheduling element for vports node when disabling vports TC QoS"); + return err; + } + + node->type = SCHED_NODE_TYPE_VPORTS_TSAR; + + /* Disable TC QoS for vports in the arbiter node. */ + esw_qos_switch_vport_tcs_to_vport(tc_arbiter_node, node, extack); + + return 0; +} + +static int esw_qos_switch_vports_node_to_tc_arbiter(struct mlx5_esw_sched_node *node, + struct mlx5_esw_sched_node *tc_arbiter_node, + struct netlink_ext_ack *extack) +{ + struct mlx5_esw_sched_node *vport_node, *tmp; + struct mlx5_vport *vport; + int err; + + /* Enable TC QoS for each vport in the node. */ + list_for_each_entry_safe(vport_node, tmp, &node->children, entry) { + vport = vport_node->vport; + err = esw_qos_vport_update_parent(vport, tc_arbiter_node, extack); + if (err) + goto err_out; + } + + /* Destroy the current vports node TSAR. */ + err = mlx5_destroy_scheduling_element_cmd(node->esw->dev, SCHEDULING_HIERARCHY_E_SWITCH, + node->ix); + if (err) + goto err_out; + + return 0; +err_out: + /* Restore vports back into the node if an error occurs.
*/ + esw_qos_switch_vport_tcs_to_vport(tc_arbiter_node, node, NULL); + + return err; +} + +static struct mlx5_esw_sched_node *esw_qos_move_node(struct mlx5_esw_sched_node *curr_node) +{ + struct mlx5_esw_sched_node *new_node; + + new_node = __esw_qos_alloc_node(curr_node->esw, curr_node->ix, curr_node->type, NULL); + if (!IS_ERR(new_node)) + esw_qos_nodes_set_parent(&curr_node->children, new_node); + + return new_node; +} + +static int esw_qos_node_disable_tc_arbitration(struct mlx5_esw_sched_node *node, + struct netlink_ext_ack *extack) +{ + struct mlx5_esw_sched_node *curr_node; + int err; + + if (node->type != SCHED_NODE_TYPE_TC_ARBITER_TSAR) + return 0; + + /* Allocate a new rate node to hold the current state, which will allow + * for restoring the vports back to this node after disabling TC arbitration. + */ + curr_node = esw_qos_move_node(node); + if (IS_ERR(curr_node)) { + NL_SET_ERR_MSG_MOD(extack, "Failed setting up vports node"); + return PTR_ERR(curr_node); + } + + /* Disable TC QoS for all vports, and assign them back to the node. */ + err = esw_qos_switch_tc_arbiter_node_to_vports(curr_node, node, extack); + if (err) + goto err_out; + + /* Clean up the TC arbiter node after disabling TC QoS for vports. */ + esw_qos_tc_arbiter_scheduling_teardown(curr_node, extack); + goto out; +err_out: + esw_qos_nodes_set_parent(&curr_node->children, node); +out: + __esw_qos_free_node(curr_node); + return err; +} + +static int esw_qos_node_enable_tc_arbitration(struct mlx5_esw_sched_node *node, + struct netlink_ext_ack *extack) +{ + struct mlx5_esw_sched_node *curr_node; + int err; + + if (node->type == SCHED_NODE_TYPE_TC_ARBITER_TSAR) + return 0; + + /* Allocate a new node that will store the information of the current node. + * This will be used later to restore the node if necessary. + */ + curr_node = esw_qos_move_node(node); + if (IS_ERR(curr_node)) { + NL_SET_ERR_MSG_MOD(extack, "Failed setting up node TC QoS"); + return PTR_ERR(curr_node); + } + + /* Initialize the TC arbiter node for QoS management. + * This step prepares the node for handling Traffic Class arbitration. + */ + err = esw_qos_tc_arbiter_scheduling_setup(node, extack); + if (err) + goto err_setup; + + /* Enable TC QoS for each vport within the current node. 
*/ + err = esw_qos_switch_vports_node_to_tc_arbiter(curr_node, node, extack); + if (err) + goto err_switch_vports; + goto out; + +err_switch_vports: + esw_qos_tc_arbiter_scheduling_teardown(node, NULL); + node->ix = curr_node->ix; + node->type = curr_node->type; +err_setup: + esw_qos_nodes_set_parent(&curr_node->children, node); +out: + __esw_qos_free_node(curr_node); + return err; +} + static u32 mlx5_esw_qos_lag_link_speed_get_locked(struct mlx5_core_dev *mdev) { struct ethtool_link_ksettings lksettings; @@ -824,6 +1013,30 @@ static int esw_qos_devlink_rate_to_mbps(struct mlx5_core_dev *mdev, const char * return 0; } +static bool esw_qos_validate_unsupported_tc_bw(struct mlx5_eswitch *esw, u32 *tc_bw) +{ + int i, num_tcs = esw_qos_num_tcs(esw->dev); + + for (i = num_tcs; i < IEEE_8021QAZ_MAX_TCS; i++) { + if (tc_bw[i]) + return false; + } + + return true; +} + +static bool esw_qos_tc_bw_disabled(u32 *tc_bw) +{ + int i; + + for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) { + if (tc_bw[i]) + return false; + } + + return true; +} + int mlx5_esw_qos_init(struct mlx5_eswitch *esw) { if (esw->qos.domain) @@ -892,8 +1105,27 @@ int mlx5_esw_devlink_rate_leaf_tc_bw_set(struct devlink_rate *rate_leaf, void *p int mlx5_esw_devlink_rate_node_tc_bw_set(struct devlink_rate *rate_node, void *priv, u32 *tc_bw, struct netlink_ext_ack *extack) { - NL_SET_ERR_MSG_MOD(extack, "TC bandwidth shares are not supported on nodes"); - return -EOPNOTSUPP; + struct mlx5_esw_sched_node *node = priv; + struct mlx5_eswitch *esw = node->esw; + bool disable; + int err; + + if (!esw_qos_validate_unsupported_tc_bw(esw, tc_bw)) { + NL_SET_ERR_MSG_MOD(extack, "E-Switch traffic classes number is not supported"); + return -EOPNOTSUPP; + } + + disable = esw_qos_tc_bw_disabled(tc_bw); + esw_qos_lock(esw); + if (disable) { + err = esw_qos_node_disable_tc_arbitration(node, extack); + goto unlock; + } + + err = esw_qos_node_enable_tc_arbitration(node, extack); +unlock: + esw_qos_unlock(esw); + return err; } int mlx5_esw_devlink_rate_node_tx_share_set(struct devlink_rate *rate_node, void *priv,
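To make the semantics of the two helpers above concrete: an all-zero tc-bw array is interpreted as a request to turn TC arbitration off, while nonzero shares are only accepted on traffic classes the device actually exposes (esw_qos_num_tcs()). A small standalone sketch of the same checks, assuming a device that exposes 4 traffic classes (illustration only, not driver code):

#include <stdbool.h>
#include <stdio.h>

#define MAX_TCS 8	/* IEEE_8021QAZ_MAX_TCS */

/* Mirrors esw_qos_validate_unsupported_tc_bw(): shares may only be set
 * on traffic classes the device exposes.
 */
static bool tc_bw_valid(const unsigned int *tc_bw, int num_tcs)
{
	for (int i = num_tcs; i < MAX_TCS; i++)
		if (tc_bw[i])
			return false;
	return true;
}

/* Mirrors esw_qos_tc_bw_disabled(): an all-zero array disables arbitration. */
static bool tc_bw_disabled(const unsigned int *tc_bw)
{
	for (int i = 0; i < MAX_TCS; i++)
		if (tc_bw[i])
			return false;
	return true;
}

int main(void)
{
	unsigned int ok[MAX_TCS]  = { 20, 0, 80, 0, 0, 0, 0, 0 };	/* accepted, enables arbitration */
	unsigned int bad[MAX_TCS] = { 20, 0, 0, 0, 0, 80, 0, 0 };	/* TC5 not exposed on a 4-TC device */
	unsigned int off[MAX_TCS] = { 0 };				/* all zero: disable arbitration */

	printf("%d %d %d\n", tc_bw_valid(ok, 4), tc_bw_valid(bad, 4), tc_bw_disabled(off));
	return 0;
}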
cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.8230.7 via Frontend Transport; Tue, 3 Dec 2024 20:30:56 +0000 Received: from drhqmail201.nvidia.com (10.126.190.180) by mail.nvidia.com (10.127.129.5) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Tue, 3 Dec 2024 12:30:36 -0800 Received: from drhqmail201.nvidia.com (10.126.190.180) by drhqmail201.nvidia.com (10.126.190.180) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Tue, 3 Dec 2024 12:30:36 -0800 Received: from vdi.nvidia.com (10.127.8.10) by mail.nvidia.com (10.126.190.180) with Microsoft SMTP Server id 15.2.1544.4 via Frontend Transport; Tue, 3 Dec 2024 12:30:33 -0800 From: Tariq Toukan To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn" CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , , Carolina Jubran , "Cosmin Ratiu" , Tariq Toukan Subject: [PATCH net-next V4 10/11] net/mlx5: Add traffic class scheduling support for vport QoS Date: Tue, 3 Dec 2024 22:29:23 +0200 Message-ID: <20241203202924.228440-11-tariqt@nvidia.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20241203202924.228440-1-tariqt@nvidia.com> References: <20241203202924.228440-1-tariqt@nvidia.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-NV-OnPremToCloud: ExternallySecured X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: BN1PEPF00006000:EE_|DM3PR12MB9351:EE_ X-MS-Office365-Filtering-Correlation-Id: 2199f4ad-3766-4a96-d462-08dd13d96457 X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0;ARA:13230040|1800799024|36860700013|376014|82310400026; X-Microsoft-Antispam-Message-Info: O3SncXZFKugBsQ+8PcFdjNJ1fBxFE//Ewwae6LdpWH+nwX2eahbz5cUd5kFrncQvjspD23Au2k3wkyTtjI/5QaF0sKSovCmJpu07pqvcqLME4FyUwFNDZqqNhfDofEn6+YbELTDN6JRFfqVWCEb2MRKLUTS/mwi+znGPKaXhPQoTynjPJBdG9/LJqW3akmhkv8hIpFEmQpYA9GdJCVzr7PuuF3Eh0ZUO5d85dbfnqpI8rde8z97pvbmNgf1Q6ozrUEjjFEq7qXF/DC532s++zc6fK8Sv6jREzi+jlgaTEpgzIQ/XV8/LFETZbjAiyapruWqcXHwUv7xE11aFu4sEFVtntlgiUrFoMyJCHzXCP4DDHFlrUeuwqV2flpgFn0bZ1Fg8GRDAUOjgjW10m6SSwAVznr0622D6n4dhO3BZvGiyqiC9TY/EY1MWI1+jdUXDq9kGQ+hwYJ5iSQzKmdxBtnF7WrOEAF5efPSJyGMJhtAWaZ57wtOxKusID+XYmkWfbcLO3JCRAzzWfQf9pfWc4rRaO5Gz2Jm+71eYoXMTiVO1Hh2mKTfWNV1Sk7slBlFK0nrxrn7lG5PCbDs4vHZrn3jeI+X/tGXFsd4UZSWFd1J4gEQPRzDxfpT3ztDSytLkn3cYj0sd1E7Ud1UM5/c/XSAyHRf7LwUKnIBSU8YtdnD2Jt53vq1OlyPgXQn/1LUGp789PYUBxT4fKb5/t7Z9aF7T7WaIV8uL5goE+Tf2lCLGFpBvqFelaZIQboXIuaJYdTYpUqFfVP9htXa1hP7KqYs1sbkM4WSHyOulCUkNtWWdnD/AB1hf0jKYkLCZ8GWTmVgDnOJljpH76uhT9e7IAB+sNtm71oqFwaOHFi6Zg6hEWdVBULdFSDRLv9/jzJy/cIkzFVCjHKLyfNeedTh/DiCk6MjSp31kWuA041aw6QlaS863NUkEJEfF3G5boe+oXOJE+E77mP/EotTvcfn9WItmiHwcU4axc38ds2z4Nu5DhNHiKGkeHJN3Mh6LGSgXoDftE8JsqIkPVK9oYf34L0um/7n97qdCxKtQXKW/nfqUoE+wrldJXr8m//tWypQYPP4khiEt4ynzrvRCFqfeP6AkJKIOBd5myJTV2xx440rtASrL0Z2U+73pscKzASghSXwzv7uJPS/T3+3HeLWtUWqe4z3NvAY293GHxBJS5HMMq6U8SCEMY1lJasTSmyG6e7/K9UWclu42fTcQ4I7sFv1sl8tBrIXXwCsqdcIheaEfRmdz735wvJ4L4nC8W4ECNmaE0fwJjBjJ9NHUB25KGpeaQh9DMhkkLTcnUqEOx7yQVeijofDyttfyr1YoaN9BcXajeTjDpah2GOJ/VThfLY+hCZiQE6rVywmvNyxK+f8XZC8Uf2hFLLXcRQNiknKMBboXltdpEfeV28wyqY8LtlK457CN9jmV+xRNfKOpNUGcIuQJwvTUJC2OjtnHIikD X-Forefront-Antispam-Report: CIP:216.228.118.232;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:mail.nvidia.com;PTR:dc7edge1.nvidia.com;CAT:NONE;SFS:(13230040)(1800799024)(36860700013)(376014)(82310400026);DIR:OUT;SFP:1101; X-OriginatorOrg: 
Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Dec 2024 20:30:56.6755 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 2199f4ad-3766-4a96-d462-08dd13d96457 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a;Ip=[216.228.118.232];Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: BN1PEPF00006000.namprd05.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM3PR12MB9351 X-Patchwork-Delegate: kuba@kernel.org From: Carolina Jubran Introduce support for traffic class (TC) scheduling on vports by allowing the vport to own multiple TC scheduling nodes. This patch enables more granular control of QoS by defining three distinct QoS states for vports, each providing unique scheduling behavior: 1. Regular QoS: The `sched_node` represents the vport directly, handling QoS as a single scheduling entity. 2. TC QoS on the vport: The `sched_node` acts as a TC arbiter, enabling TC scheduling directly on the vport. 3. TC QoS on the parent node: The `sched_node` functions as a rate limiter, with TC arbitration enabled at the parent level, associating multiple scheduling nodes with each vport. Key changes include: - Added support for new scheduling elements, vport traffic class and rate limiter. - New helper functions for creating, destroying, and restoring vport TC scheduling nodes, handling transitions between regular QoS and TC arbitration states. - Updated `esw_qos_vport_enable()` and `esw_qos_vport_disable()` to support both regular QoS and TC arbitration states, ensuring consistent transitions between scheduling modes. - Introduced a `sched_nodes` array under `vport->qos` to store multiple TC scheduling nodes per vport, enabling finer control over per-TC QoS. - Enhanced `esw_qos_vport_update_parent()` to handle transitions between the three QoS states based on the current and new parent node types. This patch lays the groundwork for future support for configuring tc-bw on vports. Although the infrastructure is in place, full support for tc-bw is not yet implemented; attempts to set tc-bw on vports will return `-EOPNOTSUPP`. No functional changes are introduced at this stage. Signed-off-by: Carolina Jubran Reviewed-by: Cosmin Ratiu Signed-off-by: Tariq Toukan --- .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 360 +++++++++++++++++- .../net/ethernet/mellanox/mlx5/core/eswitch.h | 13 +- 2 files changed, 352 insertions(+), 21 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c index b17c3a82d175..afb00deaae16 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c @@ -65,12 +65,16 @@ enum sched_node_type { SCHED_NODE_TYPE_VPORTS_TSAR, SCHED_NODE_TYPE_VPORT, SCHED_NODE_TYPE_TC_ARBITER_TSAR, + SCHED_NODE_TYPE_RATE_LIMITER, + SCHED_NODE_TYPE_VPORT_TC, }; static const char * const sched_node_type_str[] = { [SCHED_NODE_TYPE_VPORTS_TSAR] = "vports TSAR", [SCHED_NODE_TYPE_VPORT] = "vport", [SCHED_NODE_TYPE_TC_ARBITER_TSAR] = "TC Arbiter TSAR", + [SCHED_NODE_TYPE_RATE_LIMITER] = "Rate Limiter", + [SCHED_NODE_TYPE_VPORT_TC] = "vport TC", }; struct mlx5_esw_sched_node { @@ -92,6 +96,8 @@ struct mlx5_esw_sched_node { struct list_head children; /* Valid only if this node is associated with a vport. 
*/ struct mlx5_vport *vport; + /* Valid only when this node represents a traffic class. */ + u8 tc; }; static int esw_qos_num_tcs(struct mlx5_core_dev *dev) @@ -121,6 +127,14 @@ esw_qos_nodes_set_parent(struct list_head *nodes, struct mlx5_esw_sched_node *pa void mlx5_esw_qos_vport_qos_free(struct mlx5_vport *vport) { + if (vport->qos.sched_nodes) { + int i, num_tcs = esw_qos_num_tcs(vport->qos.sched_node->esw->dev); + + for (i = 0; i < num_tcs; i++) + kfree(vport->qos.sched_nodes[i]); + kfree(vport->qos.sched_nodes); + } + kfree(vport->qos.sched_node); memset(&vport->qos, 0, sizeof(vport->qos)); } @@ -145,11 +159,17 @@ mlx5_esw_qos_vport_get_parent(const struct mlx5_vport *vport) static void esw_qos_sched_elem_warn(struct mlx5_esw_sched_node *node, int err, const char *op) { switch (node->type) { + case SCHED_NODE_TYPE_VPORT_TC: + esw_warn(node->esw->dev, + "E-Switch %s %s scheduling element failed (vport=%d,tc=%d,err=%d)\n", + op, sched_node_type_str[node->type], node->vport->vport, node->tc, err); + break; case SCHED_NODE_TYPE_VPORT: esw_warn(node->esw->dev, "E-Switch %s %s scheduling element failed (vport=%d,err=%d)\n", op, sched_node_type_str[node->type], node->vport->vport, err); break; + case SCHED_NODE_TYPE_RATE_LIMITER: case SCHED_NODE_TYPE_TC_ARBITER_TSAR: case SCHED_NODE_TYPE_VPORTS_TSAR: esw_warn(node->esw->dev, @@ -243,6 +263,23 @@ static int esw_qos_sched_elem_config(struct mlx5_esw_sched_node *node, u32 max_r return 0; } +static int esw_qos_create_rate_limit_element(struct mlx5_esw_sched_node *node, + struct netlink_ext_ack *extack) +{ + u32 sched_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {}; + + if (!mlx5_qos_element_type_supported(node->esw->dev, + SCHEDULING_CONTEXT_ELEMENT_TYPE_RATE_LIMIT, + SCHEDULING_HIERARCHY_E_SWITCH)) + return -EOPNOTSUPP; + + MLX5_SET(scheduling_context, sched_ctx, max_average_bw, node->max_rate); + MLX5_SET(scheduling_context, sched_ctx, element_type, + SCHEDULING_CONTEXT_ELEMENT_TYPE_RATE_LIMIT); + + return esw_qos_node_create_sched_element(node, sched_ctx, extack); +} + static u32 esw_qos_calculate_min_rate_divider(struct mlx5_eswitch *esw, struct mlx5_esw_sched_node *parent) { @@ -379,6 +416,31 @@ static int esw_qos_vport_create_sched_element(struct mlx5_esw_sched_node *vport_ return esw_qos_node_create_sched_element(vport_node, sched_ctx, extack); } +static int esw_qos_vport_tc_create_sched_element(struct mlx5_esw_sched_node *vport_tc_node, + u32 rate_limit_elem_ix, + struct netlink_ext_ack *extack) +{ + u32 sched_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {}; + struct mlx5_core_dev *dev = vport_tc_node->esw->dev; + void *attr; + + if (!mlx5_qos_element_type_supported(dev, + SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT_TC, + SCHEDULING_HIERARCHY_E_SWITCH)) + return -EOPNOTSUPP; + + MLX5_SET(scheduling_context, sched_ctx, element_type, + SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT_TC); + attr = MLX5_ADDR_OF(scheduling_context, sched_ctx, element_attributes); + MLX5_SET(vport_tc_element, attr, vport_number, vport_tc_node->vport->vport); + MLX5_SET(vport_tc_element, attr, traffic_class, vport_tc_node->tc); + MLX5_SET(scheduling_context, sched_ctx, max_bw_obj_id, rate_limit_elem_ix); + MLX5_SET(scheduling_context, sched_ctx, parent_element_id, vport_tc_node->parent->ix); + MLX5_SET(scheduling_context, sched_ctx, bw_share, vport_tc_node->bw_share); + + return esw_qos_node_create_sched_element(vport_tc_node, sched_ctx, extack); +} + static struct mlx5_esw_sched_node * __esw_qos_alloc_node(struct mlx5_eswitch *esw, u32 tsar_ix, enum sched_node_type type, struct 
mlx5_esw_sched_node *parent) @@ -575,12 +637,169 @@ static int esw_qos_tc_arbiter_scheduling_setup(struct mlx5_esw_sched_node *node, return -EOPNOTSUPP; } +static int esw_qos_create_vport_tc_sched_node(struct mlx5_vport *vport, + u32 rate_limit_elem_ix, + struct mlx5_esw_sched_node *vports_tc_node, + struct netlink_ext_ack *extack) +{ + struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node; + struct mlx5_esw_sched_node *vport_tc_node; + u8 tc = vports_tc_node->tc; + int err; + + vport_tc_node = __esw_qos_alloc_node(vport_node->esw, 0, SCHED_NODE_TYPE_VPORT_TC, + vports_tc_node); + if (!vport_tc_node) + return -ENOMEM; + + vport_tc_node->min_rate = vport_node->min_rate; + vport_tc_node->tc = tc; + vport_tc_node->vport = vport; + err = esw_qos_vport_tc_create_sched_element(vport_tc_node, rate_limit_elem_ix, extack); + if (err) + goto err_out; + + vport->qos.sched_nodes[tc] = vport_tc_node; + + return 0; +err_out: + __esw_qos_free_node(vport_tc_node); + return err; +} + +static void esw_qos_destroy_vport_tc_sched_elements(struct mlx5_vport *vport, + struct netlink_ext_ack *extack) +{ + int i, num_tcs = esw_qos_num_tcs(vport->qos.sched_node->esw->dev); + + for (i = 0; i < num_tcs; i++) { + if (vport->qos.sched_nodes[i]) + __esw_qos_destroy_node(vport->qos.sched_nodes[i], extack); + } + + kfree(vport->qos.sched_nodes); + vport->qos.sched_nodes = NULL; +} + +static int esw_qos_create_vport_tc_sched_elements(struct mlx5_vport *vport, + enum sched_node_type type, + struct netlink_ext_ack *extack) +{ + struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node; + struct mlx5_esw_sched_node *tc_arbiter_node, *vports_tc_node; + int err, num_tcs = esw_qos_num_tcs(vport_node->esw->dev); + u32 rate_limit_elem_ix; + + vport->qos.sched_nodes = kcalloc(num_tcs, sizeof(struct mlx5_esw_sched_node *), GFP_KERNEL); + if (!vport->qos.sched_nodes) { + NL_SET_ERR_MSG_MOD(extack, "Allocating the vport TC scheduling elements failed."); + return -ENOMEM; + } + + rate_limit_elem_ix = type == SCHED_NODE_TYPE_RATE_LIMITER ? vport_node->ix : 0; + tc_arbiter_node = type == SCHED_NODE_TYPE_RATE_LIMITER ? vport_node->parent : vport_node; + list_for_each_entry(vports_tc_node, &tc_arbiter_node->children, entry) { + err = esw_qos_create_vport_tc_sched_node(vport, rate_limit_elem_ix, vports_tc_node, + extack); + if (err) + goto err_create_vport_tc; + } + + return 0; + +err_create_vport_tc: + esw_qos_destroy_vport_tc_sched_elements(vport, NULL); + + return err; +} + +static int esw_qos_vport_tc_enable(struct mlx5_vport *vport, enum sched_node_type type, + struct netlink_ext_ack *extack) +{ + struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node; + int err; + + if (type == SCHED_NODE_TYPE_TC_ARBITER_TSAR && + MLX5_CAP_QOS(vport_node->esw->dev, log_esw_max_sched_depth) < 2) { + NL_SET_ERR_MSG_MOD(extack, "Setting up TC Arbiter for a vport is not supported."); + return -EOPNOTSUPP; + } + + esw_assert_qos_lock_held(vport->dev->priv.eswitch); + + if (type == SCHED_NODE_TYPE_RATE_LIMITER) + err = esw_qos_create_rate_limit_element(vport_node, extack); + else + err = esw_qos_tc_arbiter_scheduling_setup(vport_node, extack); + if (err) + return err; + + /* Rate limiters impact multiple nodes not directly connected to them + * and are not direct members of the QoS hierarchy. + * Unlink it from the parent to reflect that. 
+ */ + if (type == SCHED_NODE_TYPE_RATE_LIMITER) + list_del_init(&vport_node->entry); + + err = esw_qos_create_vport_tc_sched_elements(vport, type, extack); + if (err) + goto err_sched_nodes; + + return 0; + +err_sched_nodes: + if (type == SCHED_NODE_TYPE_RATE_LIMITER) { + esw_qos_node_destroy_sched_element(vport_node, NULL); + list_add_tail(&vport_node->entry, &vport_node->parent->children); + } else { + esw_qos_tc_arbiter_scheduling_teardown(vport_node, NULL); + } + return err; +} + +static void esw_qos_vport_tc_disable(struct mlx5_vport *vport, struct netlink_ext_ack *extack) +{ + struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node; + enum sched_node_type curr_type = vport_node->type; + + esw_qos_destroy_vport_tc_sched_elements(vport, extack); + + if (curr_type == SCHED_NODE_TYPE_RATE_LIMITER) + esw_qos_node_destroy_sched_element(vport_node, extack); + else + esw_qos_tc_arbiter_scheduling_teardown(vport_node, extack); +} + +static int esw_qos_set_vport_tcs_min_rate(struct mlx5_vport *vport, u32 min_rate, + struct netlink_ext_ack *extack) +{ + struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node; + int err, i, num_tcs = esw_qos_num_tcs(vport_node->esw->dev); + + for (i = 0; i < num_tcs; i++) { + err = esw_qos_set_node_min_rate(vport->qos.sched_nodes[i], min_rate, extack); + if (err) + goto err_out; + } + vport_node->min_rate = min_rate; + + return 0; +err_out: + for (--i; i >= 0; i--) + esw_qos_set_node_min_rate(vport->qos.sched_nodes[i], vport_node->min_rate, extack); + return err; +} + static void esw_qos_vport_disable(struct mlx5_vport *vport, struct netlink_ext_ack *extack) { struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node; struct mlx5_esw_sched_node *parent = vport_node->parent; + enum sched_node_type curr_type = vport_node->type; - esw_qos_node_destroy_sched_element(vport_node, extack); + if (curr_type == SCHED_NODE_TYPE_VPORT) + esw_qos_node_destroy_sched_element(vport_node, extack); + else + esw_qos_vport_tc_disable(vport, extack); vport_node->bw_share = 0; list_del_init(&vport_node->entry); @@ -589,7 +808,8 @@ static void esw_qos_vport_disable(struct mlx5_vport *vport, struct netlink_ext_a trace_mlx5_esw_vport_qos_destroy(vport_node->esw->dev, vport); } -static int esw_qos_vport_enable(struct mlx5_vport *vport, struct mlx5_esw_sched_node *parent, +static int esw_qos_vport_enable(struct mlx5_vport *vport, enum sched_node_type type, + struct mlx5_esw_sched_node *parent, struct netlink_ext_ack *extack) { int err; @@ -597,10 +817,14 @@ static int esw_qos_vport_enable(struct mlx5_vport *vport, struct mlx5_esw_sched_ esw_assert_qos_lock_held(vport->dev->priv.eswitch); esw_qos_node_set_parent(vport->qos.sched_node, parent); - err = esw_qos_vport_create_sched_element(vport->qos.sched_node, extack); + if (type == SCHED_NODE_TYPE_VPORT) + err = esw_qos_vport_create_sched_element(vport->qos.sched_node, extack); + else + err = esw_qos_vport_tc_enable(vport, type, extack); if (err) return err; + vport->qos.sched_node->type = type; esw_qos_normalize_min_rate(parent->esw, parent, extack); return 0; @@ -628,7 +852,7 @@ static int mlx5_esw_qos_vport_enable(struct mlx5_vport *vport, enum sched_node_t sched_node->min_rate = min_rate; sched_node->vport = vport; vport->qos.sched_node = sched_node; - err = esw_qos_vport_enable(vport, parent, extack); + err = esw_qos_vport_enable(vport, type, parent, extack); if (err) esw_qos_put(esw); @@ -680,6 +904,8 @@ static int mlx5_esw_qos_set_vport_min_rate(struct mlx5_vport *vport, u32 min_rat if (!vport_node) return 
mlx5_esw_qos_vport_enable(vport, SCHED_NODE_TYPE_VPORT, NULL, 0, min_rate, extack); + else if (vport_node->type == SCHED_NODE_TYPE_RATE_LIMITER) + return esw_qos_set_vport_tcs_min_rate(vport, min_rate, extack); else return esw_qos_set_node_min_rate(vport_node, min_rate, extack); } @@ -712,12 +938,59 @@ bool mlx5_esw_qos_get_vport_rate(struct mlx5_vport *vport, u32 *max_rate, u32 *m return enabled; } +static int esw_qos_vport_tc_check_type(enum sched_node_type curr_type, + enum sched_node_type new_type, + struct netlink_ext_ack *extack) +{ + if (curr_type == SCHED_NODE_TYPE_TC_ARBITER_TSAR && + new_type == SCHED_NODE_TYPE_RATE_LIMITER) { + NL_SET_ERR_MSG_MOD(extack, + "Cannot switch from vport-level TC arbitration to node-level TC arbitration"); + return -EOPNOTSUPP; + } + + if (curr_type == SCHED_NODE_TYPE_RATE_LIMITER && + new_type == SCHED_NODE_TYPE_TC_ARBITER_TSAR) { + NL_SET_ERR_MSG_MOD(extack, + "Cannot switch from node-level TC arbitration to vport-level TC arbitration"); + return -EOPNOTSUPP; + } + + return 0; +} + +static int esw_qos_vport_update(struct mlx5_vport *vport, enum sched_node_type type, + struct mlx5_esw_sched_node *parent, + struct netlink_ext_ack *extack) +{ + struct mlx5_esw_sched_node *curr_parent = vport->qos.sched_node->parent; + enum sched_node_type curr_type = vport->qos.sched_node->type; + int err; + + esw_assert_qos_lock_held(vport->dev->priv.eswitch); + parent = parent ?: curr_parent; + if (curr_type == type && curr_parent == parent) + return 0; + + err = esw_qos_vport_tc_check_type(curr_type, type, extack); + if (err) + return err; + + esw_qos_vport_disable(vport, extack); + + err = esw_qos_vport_enable(vport, type, parent, extack); + if (err) + esw_qos_vport_enable(vport, curr_type, curr_parent, NULL); + + return err; +} + static int esw_qos_vport_update_parent(struct mlx5_vport *vport, struct mlx5_esw_sched_node *parent, struct netlink_ext_ack *extack) { struct mlx5_eswitch *esw = vport->dev->priv.eswitch; struct mlx5_esw_sched_node *curr_parent; - int err; + enum sched_node_type type; esw_assert_qos_lock_held(esw); curr_parent = vport->qos.sched_node->parent; @@ -725,16 +998,17 @@ static int esw_qos_vport_update_parent(struct mlx5_vport *vport, struct mlx5_esw if (curr_parent == parent) return 0; - esw_qos_vport_disable(vport, extack); - - err = esw_qos_vport_enable(vport, parent, extack); - if (err) { - if (esw_qos_vport_enable(vport, curr_parent, NULL)) - esw_warn(parent->esw->dev, "vport restore QoS failed (vport=%d)\n", - vport->vport); - } + /* Set vport QoS type based on parent node type if different from default QoS; + * otherwise, use the vport's current QoS type. + */ + if (parent->type == SCHED_NODE_TYPE_TC_ARBITER_TSAR) + type = SCHED_NODE_TYPE_RATE_LIMITER; + else if (curr_parent->type == SCHED_NODE_TYPE_TC_ARBITER_TSAR) + type = SCHED_NODE_TYPE_VPORT; + else + type = vport->qos.sched_node->type; - return err; + return esw_qos_vport_update(vport, type, parent, extack); } static void esw_qos_switch_vport_tcs_to_vport(struct mlx5_esw_sched_node *tc_arbiter_node, @@ -1025,6 +1299,14 @@ static bool esw_qos_validate_unsupported_tc_bw(struct mlx5_eswitch *esw, u32 *tc return true; } +static bool esw_qos_vport_validate_unsupported_tc_bw(struct mlx5_vport *vport, u32 *tc_bw) +{ + struct mlx5_eswitch *esw = vport->qos.sched_node ? 
+ vport->qos.sched_node->parent->esw : vport->dev->priv.eswitch; + + return esw_qos_validate_unsupported_tc_bw(esw, tc_bw); +} + static bool esw_qos_tc_bw_disabled(u32 *tc_bw) { int i; @@ -1098,8 +1380,44 @@ int mlx5_esw_devlink_rate_leaf_tx_max_set(struct devlink_rate *rate_leaf, void * int mlx5_esw_devlink_rate_leaf_tc_bw_set(struct devlink_rate *rate_leaf, void *priv, u32 *tc_bw, struct netlink_ext_ack *extack) { - NL_SET_ERR_MSG_MOD(extack, "TC bandwidth shares are not supported on leafs"); - return -EOPNOTSUPP; + struct mlx5_esw_sched_node *vport_node; + struct mlx5_vport *vport = priv; + struct mlx5_eswitch *esw; + bool disable; + int err = 0; + + esw = vport->dev->priv.eswitch; + if (!mlx5_esw_allowed(esw)) + return -EPERM; + + disable = esw_qos_tc_bw_disabled(tc_bw); + esw_qos_lock(esw); + + if (!esw_qos_vport_validate_unsupported_tc_bw(vport, tc_bw)) { + NL_SET_ERR_MSG_MOD(extack, "E-Switch traffic classes number is not supported"); + err = -EOPNOTSUPP; + goto unlock; + } + + vport_node = vport->qos.sched_node; + if (disable && !vport_node) + goto unlock; + + if (disable && vport_node->type == SCHED_NODE_TYPE_TC_ARBITER_TSAR) { + err = esw_qos_vport_update(vport, SCHED_NODE_TYPE_VPORT, NULL, extack); + goto unlock; + } + + if (!vport_node) { + err = mlx5_esw_qos_vport_enable(vport, SCHED_NODE_TYPE_TC_ARBITER_TSAR, NULL, 0, 0, + extack); + vport_node = vport->qos.sched_node; + } else { + err = esw_qos_vport_update(vport, SCHED_NODE_TYPE_TC_ARBITER_TSAR, NULL, extack); + } +unlock: + esw_qos_unlock(esw); + return err; } int mlx5_esw_devlink_rate_node_tc_bw_set(struct devlink_rate *rate_node, void *priv, @@ -1218,10 +1536,14 @@ int mlx5_esw_qos_vport_update_parent(struct mlx5_vport *vport, struct mlx5_esw_s } esw_qos_lock(esw); - if (!vport->qos.sched_node && parent) - err = mlx5_esw_qos_vport_enable(vport, SCHED_NODE_TYPE_VPORT, parent, 0, 0, extack); - else if (vport->qos.sched_node) + if (!vport->qos.sched_node && parent) { + enum sched_node_type type = parent->type == SCHED_NODE_TYPE_TC_ARBITER_TSAR ? + SCHED_NODE_TYPE_RATE_LIMITER : SCHED_NODE_TYPE_VPORT; + + err = mlx5_esw_qos_vport_enable(vport, type, parent, 0, 0, extack); + } else if (vport->qos.sched_node) { err = esw_qos_vport_update_parent(vport, parent, extack); + } esw_qos_unlock(esw); return err; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h index a83d41121db6..9b0be25631df 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h @@ -212,10 +212,19 @@ struct mlx5_vport { struct mlx5_vport_info info; - /* Protected with the E-Switch qos domain lock. */ + /* Protected with the E-Switch qos domain lock. The Vport QoS can + * either be disabled (sched_node is NULL) or in one of three states: + * 1. Regular QoS (sched_node is a vport node). + * 2. TC QoS enabled on the vport (sched_node is a TC arbiter). + * 3. TC QoS enabled on the vport's parent node + * (sched_node is a rate limit node). + * When TC is enabled in either mode, the vport owns vport TC scheduling nodes. + */ struct { - /* Vport scheduling element node. */ + /* Vport scheduling node. */ struct mlx5_esw_sched_node *sched_node; + /* Array of vport traffic class scheduling nodes. 
*/ + struct mlx5_esw_sched_node **sched_nodes; } qos; u16 vport; From patchwork Tue Dec 3 20:29:24 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 13892916 X-Patchwork-Delegate: kuba@kernel.org Received: from NAM12-MW2-obe.outbound.protection.outlook.com (mail-mw2nam12on2068.outbound.protection.outlook.com [40.107.244.68]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 45E6220ADE0; Tue, 3 Dec 2024 20:31:05 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=fail smtp.client-ip=40.107.244.68 ARC-Seal: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733257869; cv=fail; b=G3V4uBpXox4yWhx1jFoxB8Q66nCV4ygqDV71QG6HMnv01eWlES/JB/gcymiSNb9C5SWeBMTCHibmN26gBDU1FtxKA1PBsuu9+Fou74R/izKUJXHwx2d5unso1VbGrMH5St8E4eUvXNamPwhEIV3oHpFEeId3znLElxkit2P8k8U= ARC-Message-Signature: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733257869; c=relaxed/simple; bh=4cNIZDzLQfKsJJ0kQrHNCv75ROx+XfXm7fbjY1Jm25k=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=DLwL9gg/xbcdFOpbRT8jPfDgsVms/jmqRm4rtwqIqIBJzyZQ8fBbGqq3zVf+S4LZDQOye/ufGZGtr2GmgXWylNfs9dv5mR3R8WGUebMMSOJs0Sp7okApRWPB/0RX83lOalBLhJ+askk3su5TO7J8ml/BWlY5J0mg/9jcxt2wIIw= ARC-Authentication-Results: i=2; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=nvidia.com; spf=fail smtp.mailfrom=nvidia.com; dkim=pass (2048-bit key) header.d=Nvidia.com header.i=@Nvidia.com header.b=GUmoHukY; arc=fail smtp.client-ip=40.107.244.68 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=nvidia.com Authentication-Results: smtp.subspace.kernel.org; spf=fail smtp.mailfrom=nvidia.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=Nvidia.com header.i=@Nvidia.com header.b="GUmoHukY" ARC-Seal: i=1; a=rsa-sha256; s=arcselector10001; d=microsoft.com; cv=none; b=Dz6v0flUgNaVcDx7qDXXkCDN5Itw7s88rbxr3l96+Q93wrkvgZLuc3GHT1A+L+Sc4Lhw+4M1os6BHAIVY/qDFr0BhMn1MYEhBtd/sXoSqDywHdRHkyKMkfDpAE87+uAz27HWgo6RsbbotcybC3cqRQTX1q0TCXQdj8j1OfHOfRwD5tXMUS+rgVJ4dvAJXKHzyqD68LAGtGJ8TAeAZHxFvMFAaa0nZMiErXeJJJvWAJ29d5E1RcxLspfLSrEO1jR4T3wJtsGqcTACRhnWE0tGxNITOdOgzTb9PL7ZDfJEFwNfYAH9RuZi5+6KVoOED5379knXIo0/C38T9HZED+WGiQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector10001; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=BPvv3IafnPs+viKn9AmZlYH2Xt66noDxqyj57pvtwVk=; b=Y6sYTxCfl5s1pJDvHT0p3cjWgGHXMIdwJygIxlRyM3uZmd5Qwq5pmxzr8tCoIk7sCE/cNPpG8r87h601gh7pCCAi3HOPmW/+TnXdxbY6Pq5VxYQvFYi8xZdEqOxfK0V1xAMeFq0jyX/GgDddwnGnuVGP+uJ/fHKo/7IQ4rbhrCMmBmDBx3GiVo9zywVxfuk59TogarsiVuUlnHbIxfOz0pE5sK08EuoeCdjN79ZHEKnhVECzEwhXqpTSWaEYbmJmFbraldsJm+BTqO70s2k/A5oCR2RqlhmkpYG9bdCGxhfPkEYiPpQ9lmzc1cKdob1zOnMbUJxgTUAgtzKcQcxiyQ== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.118.232) smtp.rcpttodomain=davemloft.net smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none (0) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; 
From: Tariq Toukan
To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn"
CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , , Carolina Jubran , "Cosmin Ratiu" , Tariq Toukan
Subject: [PATCH net-next V4 11/11] net/mlx5: Manage TC arbiter nodes and implement full support for tc-bw
Date: Tue, 3 Dec 2024 22:29:24 +0200
Message-ID: <20241203202924.228440-12-tariqt@nvidia.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20241203202924.228440-1-tariqt@nvidia.com>
References: <20241203202924.228440-1-tariqt@nvidia.com>
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org
MIME-Version: 1.0
X-Patchwork-Delegate: kuba@kernel.org
From: Carolina Jubran Introduce support for managing Traffic Class (TC) arbiter nodes and associated
vports TC nodes within the E-Switch QoS hierarchy. This patch adds support for the new scheduling node type, `SCHED_NODE_TYPE_VPORTS_TC_TSAR`, and implements full support for setting tc-bw on both vports and nodes. Key changes include: - Introduced the new scheduling node type, `SCHED_NODE_TYPE_VPORTS_TC_TSAR`, for managing vports within the TC arbiter node. - New helper functions for creating and destroying vports TC nodes under the TC arbiter. - Updated the minimum rate normalization function to skip nodes of type `SCHED_NODE_TYPE_VPORTS_TC_TSAR`. Vports TC TSARs have bandwidth shares configured on them but not minimum rates, so their `min_rate` cannot be normalized. - Implementation of `esw_qos_tc_arbiter_scheduling_setup()` and `esw_qos_tc_arbiter_scheduling_teardown()` for initializing and cleaning up TC arbiter scheduling elements. These functions now fully support tc-bw configuration on TC arbiter nodes. - Added `esw_qos_tc_arbiter_get_bw_shares()` and `esw_qos_set_tc_arbiter_bw_shares()` to handle the settings of bandwidth shares for vports traffic class TSARs. - Refactored `mlx5_esw_devlink_rate_node_tc_bw_set()` and `mlx5_esw_devlink_rate_leaf_tc_bw_set()` to fully support configuring tc-bw on devlink rate nodes and vports, respectively. Signed-off-by: Carolina Jubran Reviewed-by: Cosmin Ratiu Signed-off-by: Tariq Toukan --- .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 185 +++++++++++++++++- 1 file changed, 180 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c index afb00deaae16..87c9789c2836 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c @@ -67,6 +67,7 @@ enum sched_node_type { SCHED_NODE_TYPE_TC_ARBITER_TSAR, SCHED_NODE_TYPE_RATE_LIMITER, SCHED_NODE_TYPE_VPORT_TC, + SCHED_NODE_TYPE_VPORTS_TC_TSAR, }; static const char * const sched_node_type_str[] = { @@ -75,6 +76,7 @@ static const char * const sched_node_type_str[] = { [SCHED_NODE_TYPE_TC_ARBITER_TSAR] = "TC Arbiter TSAR", [SCHED_NODE_TYPE_RATE_LIMITER] = "Rate Limiter", [SCHED_NODE_TYPE_VPORT_TC] = "vport TC", + [SCHED_NODE_TYPE_VPORTS_TC_TSAR] = "vports TC TSAR", }; struct mlx5_esw_sched_node { @@ -159,6 +161,11 @@ mlx5_esw_qos_vport_get_parent(const struct mlx5_vport *vport) static void esw_qos_sched_elem_warn(struct mlx5_esw_sched_node *node, int err, const char *op) { switch (node->type) { + case SCHED_NODE_TYPE_VPORTS_TC_TSAR: + esw_warn(node->esw->dev, + "E-Switch %s %s scheduling element failed (tc=%d,err=%d)\n", + op, sched_node_type_str[node->type], node->tc, err); + break; case SCHED_NODE_TYPE_VPORT_TC: esw_warn(node->esw->dev, "E-Switch %s %s scheduling element failed (vport=%d,tc=%d,err=%d)\n", @@ -344,7 +351,11 @@ static void esw_qos_normalize_min_rate(struct mlx5_eswitch *esw, if (node->esw != esw || node->ix == esw->qos.root_tsar_ix) continue; - esw_qos_update_sched_node_bw_share(node, divider, extack); + /* Vports TC TSARs don't have a minimum rate configured, + * so there's no need to update the bw_share on them. 
+ */ + if (node->type != SCHED_NODE_TYPE_VPORTS_TC_TSAR) + esw_qos_update_sched_node_bw_share(node, divider, extack); if (list_empty(&node->children)) continue; @@ -476,6 +487,129 @@ static void esw_qos_destroy_node(struct mlx5_esw_sched_node *node, struct netlin __esw_qos_free_node(node); } +static int esw_qos_create_vports_tc_node(struct mlx5_esw_sched_node *parent, u8 tc, + struct netlink_ext_ack *extack) +{ + u32 tsar_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {}; + struct mlx5_core_dev *dev = parent->esw->dev; + struct mlx5_esw_sched_node *vports_tc_node; + void *attr; + int err; + + if (!mlx5_qos_element_type_supported(dev, + SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR, + SCHEDULING_HIERARCHY_E_SWITCH) || + !mlx5_qos_tsar_type_supported(dev, + TSAR_ELEMENT_TSAR_TYPE_DWRR, + SCHEDULING_HIERARCHY_E_SWITCH)) + return -EOPNOTSUPP; + + vports_tc_node = __esw_qos_alloc_node(parent->esw, 0, SCHED_NODE_TYPE_VPORTS_TC_TSAR, + parent); + if (!vports_tc_node) { + NL_SET_ERR_MSG_MOD(extack, "E-Switch alloc node failed"); + esw_warn(dev, "Failed to alloc vports TC node (tc=%d)\n", tc); + return -ENOMEM; + } + + attr = MLX5_ADDR_OF(scheduling_context, tsar_ctx, element_attributes); + MLX5_SET(tsar_element, attr, tsar_type, TSAR_ELEMENT_TSAR_TYPE_DWRR); + MLX5_SET(tsar_element, attr, traffic_class, tc); + MLX5_SET(scheduling_context, tsar_ctx, parent_element_id, parent->ix); + MLX5_SET(scheduling_context, tsar_ctx, element_type, SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR); + + err = esw_qos_node_create_sched_element(vports_tc_node, tsar_ctx, extack); + if (err) + goto err_create_sched_element; + + vports_tc_node->tc = tc; + + return 0; + +err_create_sched_element: + __esw_qos_free_node(vports_tc_node); + return err; +} + +static void +esw_qos_tc_arbiter_get_bw_shares(struct mlx5_esw_sched_node *tc_arbiter_node, u32 *tc_bw) +{ + struct mlx5_esw_sched_node *vports_tc_node; + + list_for_each_entry(vports_tc_node, &tc_arbiter_node->children, entry) + tc_bw[vports_tc_node->tc] = vports_tc_node->bw_share; +} + +static void esw_qos_set_tc_arbiter_bw_shares(struct mlx5_esw_sched_node *tc_arbiter_node, + u32 *tc_bw, struct netlink_ext_ack *extack) +{ + struct mlx5_esw_sched_node *vports_tc_node; + + list_for_each_entry(vports_tc_node, &tc_arbiter_node->children, entry) { + u32 bw_share; + u8 tc; + + tc = vports_tc_node->tc; + bw_share = tc_bw[tc] ?: MLX5_MIN_BW_SHARE; + esw_qos_sched_elem_config(vports_tc_node, 0, bw_share, extack); + } +} + +static void esw_qos_destroy_vports_tc_nodes(struct mlx5_esw_sched_node *tc_arbiter_node, + struct netlink_ext_ack *extack) +{ + struct mlx5_esw_sched_node *vports_tc_node, *tmp; + + list_for_each_entry_safe(vports_tc_node, tmp, &tc_arbiter_node->children, entry) + esw_qos_destroy_node(vports_tc_node, extack); +} + +static int esw_qos_create_vports_tc_nodes(struct mlx5_esw_sched_node *tc_arbiter_node, + struct netlink_ext_ack *extack) +{ + struct mlx5_eswitch *esw = tc_arbiter_node->esw; + int err, i, num_tcs = esw_qos_num_tcs(esw->dev); + + for (i = 0; i < num_tcs; i++) { + err = esw_qos_create_vports_tc_node(tc_arbiter_node, i, extack); + if (err) + goto err_tc_node_create; + } + + return 0; + +err_tc_node_create: + esw_qos_destroy_vports_tc_nodes(tc_arbiter_node, NULL); + return err; +} + +static int esw_qos_create_tc_arbiter_sched_elem(struct mlx5_esw_sched_node *tc_arbiter_node, + struct netlink_ext_ack *extack) +{ + u32 tsar_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {}; + u32 tsar_parent_ix; + void *attr; + + if (!mlx5_qos_tsar_type_supported(tc_arbiter_node->esw->dev, + 
TSAR_ELEMENT_TSAR_TYPE_TC_ARB, + SCHEDULING_HIERARCHY_E_SWITCH)) { + NL_SET_ERR_MSG_MOD(extack, + "E-Switch TC Arbiter scheduling element is not supported"); + return -EOPNOTSUPP; + } + + attr = MLX5_ADDR_OF(scheduling_context, tsar_ctx, element_attributes); + MLX5_SET(tsar_element, attr, tsar_type, TSAR_ELEMENT_TSAR_TYPE_TC_ARB); + tsar_parent_ix = tc_arbiter_node->parent ? tc_arbiter_node->parent->ix : + tc_arbiter_node->esw->qos.root_tsar_ix; + MLX5_SET(scheduling_context, tsar_ctx, parent_element_id, tsar_parent_ix); + MLX5_SET(scheduling_context, tsar_ctx, element_type, SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR); + MLX5_SET(scheduling_context, tsar_ctx, max_average_bw, tc_arbiter_node->max_rate); + MLX5_SET(scheduling_context, tsar_ctx, bw_share, tc_arbiter_node->bw_share); + + return esw_qos_node_create_sched_element(tc_arbiter_node, tsar_ctx, extack); +} + static struct mlx5_esw_sched_node * __esw_qos_create_vports_sched_node(struct mlx5_eswitch *esw, struct mlx5_esw_sched_node *parent, struct netlink_ext_ack *extack) @@ -539,6 +673,9 @@ static void __esw_qos_destroy_node(struct mlx5_esw_sched_node *node, struct netl { struct mlx5_eswitch *esw = node->esw; + if (node->type == SCHED_NODE_TYPE_TC_ARBITER_TSAR) + esw_qos_destroy_vports_tc_nodes(node, extack); + trace_mlx5_esw_node_qos_destroy(esw->dev, node, node->ix); esw_qos_destroy_node(node, extack); esw_qos_normalize_min_rate(esw, NULL, extack); @@ -628,13 +765,38 @@ static void esw_qos_put(struct mlx5_eswitch *esw) static void esw_qos_tc_arbiter_scheduling_teardown(struct mlx5_esw_sched_node *node, struct netlink_ext_ack *extack) -{} +{ + /* Clean up all Vports TC nodes within the TC arbiter node. */ + esw_qos_destroy_vports_tc_nodes(node, extack); + /* Destroy the scheduling element for the TC arbiter node itself. */ + esw_qos_node_destroy_sched_element(node, extack); +} static int esw_qos_tc_arbiter_scheduling_setup(struct mlx5_esw_sched_node *node, struct netlink_ext_ack *extack) { - NL_SET_ERR_MSG_MOD(extack, "TC arbiter elements are not supported."); - return -EOPNOTSUPP; + u32 curr_ix = node->ix; + int err; + + err = esw_qos_create_tc_arbiter_sched_elem(node, extack); + if (err) + return err; + /* Initialize the vports TC nodes within created TC arbiter TSAR. */ + err = esw_qos_create_vports_tc_nodes(node, extack); + if (err) + goto err_vports_tc_nodes; + + node->type = SCHED_NODE_TYPE_TC_ARBITER_TSAR; + + return 0; + +err_vports_tc_nodes: + /* If initialization fails, clean up the scheduling element + * for the TC arbiter node. 
+ */ + esw_qos_node_destroy_sched_element(node, NULL); + node->ix = curr_ix; + return err; } static int esw_qos_create_vport_tc_sched_node(struct mlx5_vport *vport, @@ -965,6 +1127,7 @@ static int esw_qos_vport_update(struct mlx5_vport *vport, enum sched_node_type t { struct mlx5_esw_sched_node *curr_parent = vport->qos.sched_node->parent; enum sched_node_type curr_type = vport->qos.sched_node->type; + u32 curr_tc_bw[IEEE_8021QAZ_MAX_TCS] = {0}; int err; esw_assert_qos_lock_held(vport->dev->priv.eswitch); @@ -976,11 +1139,19 @@ static int esw_qos_vport_update(struct mlx5_vport *vport, enum sched_node_type t if (err) return err; + if (curr_type == SCHED_NODE_TYPE_TC_ARBITER_TSAR && curr_type == type) + esw_qos_tc_arbiter_get_bw_shares(vport->qos.sched_node, curr_tc_bw); + esw_qos_vport_disable(vport, extack); err = esw_qos_vport_enable(vport, type, parent, extack); - if (err) + if (err) { esw_qos_vport_enable(vport, curr_type, curr_parent, NULL); + extack = NULL; + } + + if (curr_type == SCHED_NODE_TYPE_TC_ARBITER_TSAR && curr_type == type) + esw_qos_set_tc_arbiter_bw_shares(vport->qos.sched_node, curr_tc_bw, extack); return err; } @@ -1415,6 +1586,8 @@ int mlx5_esw_devlink_rate_leaf_tc_bw_set(struct devlink_rate *rate_leaf, void *p } else { err = esw_qos_vport_update(vport, SCHED_NODE_TYPE_TC_ARBITER_TSAR, NULL, extack); } + if (!err) + esw_qos_set_tc_arbiter_bw_shares(vport_node, tc_bw, extack); unlock: esw_qos_unlock(esw); return err; @@ -1441,6 +1614,8 @@ int mlx5_esw_devlink_rate_node_tc_bw_set(struct devlink_rate *rate_node, void *p } err = esw_qos_node_enable_tc_arbitration(node, extack); + if (!err) + esw_qos_set_tc_arbiter_bw_shares(node, tc_bw, extack); unlock: esw_qos_unlock(esw); return err;
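
The tc-bw interface completed by this last patch boils down to three rules visible in the code above: entries in the tc-bw array beyond the number of traffic classes the device supports must be zero (esw_qos_validate_unsupported_tc_bw), an all-zero array disables TC arbitration (esw_qos_tc_bw_disabled), and each remaining entry becomes the DWRR bw_share of the matching vports TC TSAR, with zero clamped up to the minimum share (esw_qos_set_tc_arbiter_bw_shares). The stand-alone C sketch below mirrors only that decision logic; it is not driver code, and its constants and helpers are local assumptions (IEEE_8021QAZ_MAX_TCS taken as 8, MLX5_MIN_BW_SHARE taken as 1).

/*
 * Illustrative sketch of how a tc-bw request is validated and mapped to
 * per-TC DWRR shares. Mirrors the decision logic of the series; all names
 * here are local to the sketch.
 */
#include <stdbool.h>
#include <stdio.h>

#define SKETCH_MAX_TCS		8	/* stands in for IEEE_8021QAZ_MAX_TCS */
#define SKETCH_MIN_BW_SHARE	1	/* stands in for MLX5_MIN_BW_SHARE */

/* TCs the device does not support must have no bandwidth assigned. */
static bool tc_bw_valid(const unsigned int *tc_bw, int num_tcs)
{
	for (int i = num_tcs; i < SKETCH_MAX_TCS; i++)
		if (tc_bw[i])
			return false;
	return true;
}

/* An all-zero request means "turn TC arbitration off". */
static bool tc_bw_disabled(const unsigned int *tc_bw)
{
	for (int i = 0; i < SKETCH_MAX_TCS; i++)
		if (tc_bw[i])
			return false;
	return true;
}

int main(void)
{
	/* Example request: 4 supported TCs, 20/30/50 split on TC0-TC2. */
	unsigned int tc_bw[SKETCH_MAX_TCS] = { 20, 30, 50, 0 };
	int num_tcs = 4;

	if (!tc_bw_valid(tc_bw, num_tcs)) {
		fprintf(stderr, "unsupported TC has non-zero bandwidth\n");
		return 1;
	}
	if (tc_bw_disabled(tc_bw)) {
		printf("TC arbitration disabled\n");
		return 0;
	}

	/* Zero entries on supported TCs are clamped to the minimum share. */
	for (int tc = 0; tc < num_tcs; tc++)
		printf("tc %d -> bw_share %u\n", tc,
		       tc_bw[tc] ? tc_bw[tc] : SKETCH_MIN_BW_SHARE);
	return 0;
}

Running the sketch prints tc 0 -> bw_share 20, tc 1 -> bw_share 30, tc 2 -> bw_share 50 and tc 3 -> bw_share 1, which matches the shares esw_qos_set_tc_arbiter_bw_shares() derives for such a request on a four-TC device.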