From patchwork Tue Jan 7 06:06:56 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13928258
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, Andrew Lunn
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Mark Bloch, Moshe Shemesh, Yevgeny Kliteynik, Tariq Toukan
Subject: [PATCH net-next 01/13] net/mlx5: fs, add HWS root namespace functions
Date: Tue, 7 Jan 2025 08:06:56 +0200
Message-ID: <20250107060708.1610882-2-tariqt@nvidia.com>
In-Reply-To: <20250107060708.1610882-1-tariqt@nvidia.com>
References: <20250107060708.1610882-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
From: Moshe Shemesh

Add a flow steering commands structure for HW steering. Implement the
create, destroy and set-peer HW steering root namespace functions.

Signed-off-by: Moshe Shemesh
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/Makefile  |  4 +-
 .../net/ethernet/mellanox/mlx5/core/fs_core.h |  9 ++-
 .../mellanox/mlx5/core/steering/hws/fs_hws.c  | 56 +++++++++++++++++++
 .../mellanox/mlx5/core/steering/hws/fs_hws.h  | 25 +++++++++
 4 files changed, 90 insertions(+), 4 deletions(-)
 create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
 create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Makefile b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
index 10a763e668ed..0008b22417c8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/Makefile
+++ b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
@@ -151,8 +151,8 @@ mlx5_core-$(CONFIG_MLX5_HW_STEERING) += steering/hws/cmd.o \
 					steering/hws/bwc.o \
 					steering/hws/debug.o \
 					steering/hws/vport.o \
-					steering/hws/bwc_complex.o
-
+					steering/hws/bwc_complex.o \
+					steering/hws/fs_hws.o
 
 #
 # SF device
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
index bad2df0715ec..545fdfce7b52 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
@@ -38,6 +38,7 @@
 #include
 #include
 #include
+#include
 
 #define FDB_TC_MAX_CHAIN 3
 #define FDB_FT_CHAIN (FDB_TC_MAX_CHAIN + 1)
@@ -126,7 +127,8 @@ enum fs_fte_status {
 
 enum mlx5_flow_steering_mode {
 	MLX5_FLOW_STEERING_MODE_DMFS,
-	MLX5_FLOW_STEERING_MODE_SMFS
+	MLX5_FLOW_STEERING_MODE_SMFS,
+	MLX5_FLOW_STEERING_MODE_HMFS
 };
 
 enum mlx5_flow_steering_capabilty {
@@ -293,7 +295,10 @@ struct mlx5_flow_group {
 struct mlx5_flow_root_namespace {
 	struct mlx5_flow_namespace ns;
 	enum mlx5_flow_steering_mode mode;
-	struct mlx5_fs_dr_domain fs_dr_domain;
+	union {
+		struct mlx5_fs_dr_domain fs_dr_domain;
+		struct mlx5_fs_hws_context fs_hws_context;
+	};
 	enum fs_flow_table_type table_type;
 	struct mlx5_core_dev *dev;
 	struct mlx5_flow_table *root_ft;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
new file mode 100644
index 000000000000..7a3c84b18d1e
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
@@ -0,0 +1,56 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/* Copyright (c) 2024 NVIDIA Corporation & Affiliates */
+
+#include
+#include
+#include
+#include "mlx5hws.h"
+
+#define MLX5HWS_CTX_MAX_NUM_OF_QUEUES 16
+#define MLX5HWS_CTX_QUEUE_SIZE 256
+
+static int mlx5_cmd_hws_create_ns(struct mlx5_flow_root_namespace *ns)
+{
+	struct mlx5hws_context_attr hws_ctx_attr = {};
+
+	hws_ctx_attr.queues = min_t(int, num_online_cpus(),
+				    MLX5HWS_CTX_MAX_NUM_OF_QUEUES);
+	hws_ctx_attr.queue_size = MLX5HWS_CTX_QUEUE_SIZE;
+
+	ns->fs_hws_context.hws_ctx =
+		mlx5hws_context_open(ns->dev, &hws_ctx_attr);
+	if (!ns->fs_hws_context.hws_ctx) {
+		mlx5_core_err(ns->dev, "Failed to create hws flow namespace\n");
+		return -EOPNOTSUPP;
+	}
+	return 0;
+}
+
+static int mlx5_cmd_hws_destroy_ns(struct mlx5_flow_root_namespace *ns)
+{
+	return mlx5hws_context_close(ns->fs_hws_context.hws_ctx);
+}
+
+static int mlx5_cmd_hws_set_peer(struct mlx5_flow_root_namespace *ns,
+				 struct mlx5_flow_root_namespace *peer_ns,
+				 u16 peer_vhca_id)
+{
+	struct mlx5hws_context *peer_ctx = NULL;
+
+	if (peer_ns)
+		peer_ctx = peer_ns->fs_hws_context.hws_ctx;
+	mlx5hws_context_set_peer(ns->fs_hws_context.hws_ctx, peer_ctx,
+				 peer_vhca_id);
+	return 0;
+}
+
+static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = {
+	.create_ns = mlx5_cmd_hws_create_ns,
+	.destroy_ns = mlx5_cmd_hws_destroy_ns,
+	.set_peer = mlx5_cmd_hws_set_peer,
+};
+
+const struct mlx5_flow_cmds *mlx5_fs_cmd_get_hws_cmds(void)
+{
+	return &mlx5_flow_cmds_hws;
+}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
new file mode 100644
index 000000000000..a2e2935d7367
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2024 NVIDIA Corporation & Affiliates */
+
+#ifndef _MLX5_FS_HWS_
+#define _MLX5_FS_HWS_
+
+#include "mlx5hws.h"
+
+struct mlx5_fs_hws_context {
+	struct mlx5hws_context *hws_ctx;
+};
+
+#ifdef CONFIG_MLX5_HW_STEERING
+
+const struct mlx5_flow_cmds *mlx5_fs_cmd_get_hws_cmds(void);
+
+#else
+
+static inline const struct mlx5_flow_cmds *mlx5_fs_cmd_get_hws_cmds(void)
+{
+	return NULL;
+}
+
+#endif /* CONFIG_MLX5_HW_STEERING */
+#endif
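The patch above hooks HW steering into the flow-steering core through a const table of function pointers (`mlx5_flow_cmds`) that the core retrieves via `mlx5_fs_cmd_get_hws_cmds()`. A minimal user-space sketch of that dispatch pattern follows; all names here (`flow_cmds`, `get_hws_cmds`, the `ctx_open` flag) are illustrative stand-ins, not the kernel's own symbols:

```c
#include <stddef.h>

struct ns;

/* Simplified analogue of mlx5_flow_cmds: a const table of ops that
 * the core calls without knowing which steering backend is behind it. */
struct flow_cmds {
	int (*create_ns)(struct ns *ns);
	int (*destroy_ns)(struct ns *ns);
};

struct ns {
	const struct flow_cmds *cmds;
	int ctx_open; /* stands in for fs_hws_context.hws_ctx */
};

static int hws_create_ns(struct ns *ns)
{
	ns->ctx_open = 1; /* mlx5hws_context_open() in the real driver */
	return 0;
}

static int hws_destroy_ns(struct ns *ns)
{
	ns->ctx_open = 0; /* mlx5hws_context_close() */
	return 0;
}

static const struct flow_cmds flow_cmds_hws = {
	.create_ns = hws_create_ns,
	.destroy_ns = hws_destroy_ns,
};

/* Analogue of mlx5_fs_cmd_get_hws_cmds(): callers receive the ops
 * table, never the backend functions directly. */
const struct flow_cmds *get_hws_cmds(void)
{
	return &flow_cmds_hws;
}
```

This is the same mechanism the driver already uses to select between DMFS, SMFS and now HMFS at the root namespace: the mode picks an ops table, and the rest of fs_core stays backend-agnostic.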
From patchwork Tue Jan 7 06:06:57 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13928257
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, Andrew Lunn
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Mark Bloch, Moshe Shemesh, Yevgeny Kliteynik, Tariq Toukan
Subject: [PATCH net-next 02/13] net/mlx5: fs, add HWS flow table API functions
Date: Tue, 7 Jan 2025 08:06:57 +0200
Message-ID: <20250107060708.1610882-3-tariqt@nvidia.com>
In-Reply-To: <20250107060708.1610882-1-tariqt@nvidia.com>
References: <20250107060708.1610882-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
From: Moshe Shemesh

Add API functions to create, modify and destroy HW Steering flow tables.
Modify table enables changing, connecting or disconnecting the default
miss table. Add an update root flow table API function.

Signed-off-by: Moshe Shemesh
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/fs_core.h |   5 +-
 .../mellanox/mlx5/core/steering/hws/fs_hws.c  | 113 ++++++++++++++++++
 .../mellanox/mlx5/core/steering/hws/fs_hws.h  |   5 +
 3 files changed, 122 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
index 545fdfce7b52..e98266fb50ba 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
@@ -192,7 +192,10 @@ struct mlx5_flow_handle {
 /* Type of children is mlx5_flow_group */
 struct mlx5_flow_table {
 	struct fs_node node;
-	struct mlx5_fs_dr_table fs_dr_table;
+	union {
+		struct mlx5_fs_dr_table fs_dr_table;
+		struct mlx5_fs_hws_table fs_hws_table;
+	};
 	u32 id;
 	u16 vport;
 	unsigned int max_fte;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
index 7a3c84b18d1e..e24e86f1a895 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
@@ -44,7 +44,120 @@ static int mlx5_cmd_hws_set_peer(struct mlx5_flow_root_namespace *ns,
 	return 0;
 }
 
+static int set_ft_default_miss(struct mlx5_flow_root_namespace *ns,
+			       struct mlx5_flow_table *ft,
+			       struct mlx5_flow_table *next_ft)
+{
+	struct mlx5hws_table *next_tbl;
+	int err;
+
+	if (!ns->fs_hws_context.hws_ctx)
+		return -EINVAL;
+
+	/* if no change required, return */
+	if (!next_ft && !ft->fs_hws_table.miss_ft_set)
+		return 0;
+
+	next_tbl = next_ft ? next_ft->fs_hws_table.hws_table : NULL;
+	err = mlx5hws_table_set_default_miss(ft->fs_hws_table.hws_table, next_tbl);
+	if (err) {
+		mlx5_core_err(ns->dev, "Failed setting FT default miss (%d)\n", err);
+		return err;
+	}
+	ft->fs_hws_table.miss_ft_set = !!next_tbl;
+	return 0;
+}
+
+static int mlx5_cmd_hws_create_flow_table(struct mlx5_flow_root_namespace *ns,
+					  struct mlx5_flow_table *ft,
+					  struct mlx5_flow_table_attr *ft_attr,
+					  struct mlx5_flow_table *next_ft)
+{
+	struct mlx5hws_context *ctx = ns->fs_hws_context.hws_ctx;
+	struct mlx5hws_table_attr tbl_attr = {};
+	struct mlx5hws_table *tbl;
+	int err;
+
+	if (mlx5_fs_cmd_is_fw_term_table(ft))
+		return mlx5_fs_cmd_get_fw_cmds()->create_flow_table(ns, ft, ft_attr,
+								    next_ft);
+
+	if (ns->table_type != FS_FT_FDB) {
+		mlx5_core_err(ns->dev, "Table type %d not supported for HWS\n",
+			      ns->table_type);
+		return -EOPNOTSUPP;
+	}
+
+	tbl_attr.type = MLX5HWS_TABLE_TYPE_FDB;
+	tbl_attr.level = ft_attr->level;
+	tbl = mlx5hws_table_create(ctx, &tbl_attr);
+	if (!tbl) {
+		mlx5_core_err(ns->dev, "Failed creating hws flow_table\n");
+		return -EINVAL;
+	}
+
+	ft->fs_hws_table.hws_table = tbl;
+	ft->id = mlx5hws_table_get_id(tbl);
+
+	if (next_ft) {
+		err = set_ft_default_miss(ns, ft, next_ft);
+		if (err)
+			goto destroy_table;
+	}
+
+	ft->max_fte = INT_MAX;
+
+	return 0;
+
+destroy_table:
+	mlx5hws_table_destroy(tbl);
+	ft->fs_hws_table.hws_table = NULL;
+	return err;
+}
+
+static int mlx5_cmd_hws_destroy_flow_table(struct mlx5_flow_root_namespace *ns,
+					   struct mlx5_flow_table *ft)
+{
+	int err;
+
+	if (mlx5_fs_cmd_is_fw_term_table(ft))
+		return mlx5_fs_cmd_get_fw_cmds()->destroy_flow_table(ns, ft);
+
+	err = set_ft_default_miss(ns, ft, NULL);
+	if (err)
+		mlx5_core_err(ns->dev, "Failed to disconnect next table (%d)\n", err);
+
+	err = mlx5hws_table_destroy(ft->fs_hws_table.hws_table);
+	if (err)
+		mlx5_core_err(ns->dev, "Failed to destroy flow_table (%d)\n", err);
+
+	return err;
+}
+
+static int mlx5_cmd_hws_modify_flow_table(struct mlx5_flow_root_namespace *ns,
+					  struct mlx5_flow_table *ft,
+					  struct mlx5_flow_table *next_ft)
+{
+	if (mlx5_fs_cmd_is_fw_term_table(ft))
+		return mlx5_fs_cmd_get_fw_cmds()->modify_flow_table(ns, ft, next_ft);
+
+	return set_ft_default_miss(ns, ft, next_ft);
+}
+
+static int mlx5_cmd_hws_update_root_ft(struct mlx5_flow_root_namespace *ns,
+				       struct mlx5_flow_table *ft,
+				       u32 underlay_qpn,
+				       bool disconnect)
+{
+	return mlx5_fs_cmd_get_fw_cmds()->update_root_ft(ns, ft, underlay_qpn,
+							 disconnect);
+}
+
 static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = {
+	.create_flow_table = mlx5_cmd_hws_create_flow_table,
+	.destroy_flow_table = mlx5_cmd_hws_destroy_flow_table,
+	.modify_flow_table = mlx5_cmd_hws_modify_flow_table,
+	.update_root_ft = mlx5_cmd_hws_update_root_ft,
 	.create_ns = mlx5_cmd_hws_create_ns,
 	.destroy_ns = mlx5_cmd_hws_destroy_ns,
 	.set_peer = mlx5_cmd_hws_set_peer,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
index a2e2935d7367..092a03f90084 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
@@ -10,6 +10,11 @@ struct mlx5_fs_hws_context {
 	struct mlx5hws_context *hws_ctx;
 };
 
+struct mlx5_fs_hws_table {
+	struct mlx5hws_table *hws_table;
+	bool miss_ft_set;
+};
+
 #ifdef CONFIG_MLX5_HW_STEERING
 
 const struct mlx5_flow_cmds *mlx5_fs_cmd_get_hws_cmds(void);
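The `set_ft_default_miss()` helper in the patch above is careful about state: it skips the hardware call when no miss target is requested and none is installed, and it tracks whether one is currently set in `miss_ft_set` so destroy can disconnect cleanly. A user-space sketch of that idempotent connect/disconnect pattern (the names `set_default_miss`, `hw_set_default_miss` and the `struct table` fields are illustrative, not the driver's):

```c
#include <stddef.h>

/* Simplified model of a flow table with a default-miss target. */
struct table {
	struct table *miss;  /* stands in for the HWS miss target */
	int miss_ft_set;     /* mirrors fs_hws_table.miss_ft_set */
};

/* Stand-in for mlx5hws_table_set_default_miss(): always succeeds here. */
static int hw_set_default_miss(struct table *t, struct table *next)
{
	t->miss = next;
	return 0;
}

int set_default_miss(struct table *t, struct table *next)
{
	int err;

	/* No target requested and none installed: nothing to do,
	 * so avoid a redundant hardware call. */
	if (!next && !t->miss_ft_set)
		return 0;

	err = hw_set_default_miss(t, next);
	if (err)
		return err;

	/* Remember whether a miss target is now installed. */
	t->miss_ft_set = next != NULL;
	return 0;
}
```

Tracking `miss_ft_set` is what makes the destroy path safe: `mlx5_cmd_hws_destroy_flow_table()` can unconditionally call the helper with `next_ft = NULL` and it degrades to a no-op when the table was never connected.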
X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 13928259 X-Patchwork-Delegate: kuba@kernel.org Received: from NAM02-DM3-obe.outbound.protection.outlook.com (mail-dm3nam02on2070.outbound.protection.outlook.com [40.107.95.70]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2CFF21DE4D6 for ; Tue, 7 Jan 2025 06:08:48 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=fail smtp.client-ip=40.107.95.70 ARC-Seal: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736230132; cv=fail; b=V4zUygvU5GUk1FsZ7fDB4I8BOxwktApn7mspPeU9O6/OezK/ydWurY4Z4Ew8rixpUMl280z0XkswvB7GrFPi0JvMra0VLkhS0H1CN38WQrfwJnCh3eCmEhuAWLdoEZ2T4EVSFdsTWc1x3/Nh7jKPoq5qHfmaZ9xuCdznmcqFF/Q= ARC-Message-Signature: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736230132; c=relaxed/simple; bh=piw0xziaMd1gzppogfHr5ShqSPU4TRbfwt7ilORshvk=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=uBroFIr66zS4wzkP5d5EZLUg/a204+XyBosQ8/OXfwMv64xk81k+Z4ib+Qr1sGGD2yac0MF5+OFeFN8GRUvaYAWS4+DBVf7CVkv6r7dQf7+1ed2bXz8K+OHz2VjAbhpnmMGuUBjYBBCuzwJvez95baDR4D2Qu3rl7iIKf+bdmUE= ARC-Authentication-Results: i=2; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=nvidia.com; spf=fail smtp.mailfrom=nvidia.com; dkim=pass (2048-bit key) header.d=Nvidia.com header.i=@Nvidia.com header.b=asdOT7Or; arc=fail smtp.client-ip=40.107.95.70 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=nvidia.com Authentication-Results: smtp.subspace.kernel.org; spf=fail smtp.mailfrom=nvidia.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=Nvidia.com header.i=@Nvidia.com header.b="asdOT7Or" ARC-Seal: i=1; a=rsa-sha256; s=arcselector10001; d=microsoft.com; cv=none; 
b=gTPnD3a/692Z50yoO0/r+2PeR755WHlk+MV+tIC0m2Dljj5rGNN0+xyMrrwpJ7qoBvRKxVypBt7GBhvD4MigSUOrzoRppSA/hMgrW/+WQ/5RzmPSdFEyaiijsUXKfaf3Z30o2spk3DPS9Iw5XwlqnNciQcH19fJfsyGr6OvBOzb5kASJvL/G1KSfQdIsqT1/Ih+xD2juGLwH6DivoLyj1JkB1vyLeIJjUdRw8PLNaabaquSyKZBWwHKYqCCGxaMy2BbZ3CSVuxHyJNFAT8/XVqfDPGP+2lzUgNU1N9NLHkGx+QfkWkCDYaqkzOk3Hd512wQPgHQ8mq/nGNBBjZvHlw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector10001; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=XLpaJXcYjK1G7xPtNEsJTRTSzb7c8IIpG8c4WUNMpJ4=; b=urNk6MrB+lkaD8thKVSGcLeXVI5jOiw0c2IujBFLcQgQQjzHpilI92VZe9R3fWISoK1F9gThXgF7O+27pziGUTVmW8e7D1evgWKVFz4tQTpgbtM/okEZ9kKyov/PJJHteiCCav+PDiml3IKSwqv4tj3CNMIATsiHrLX8nMXYevP0Wq5MJAjlH7ag5zPf+KrxBeNFBB071BYMBpyX/4ipqAGJKKOZ8pouZM0MfXDXWXD1c3Z2n7CjWniRz31RqIymj4TusLNYPTCYq23OZZ2AxWwPNNvhqBkZGNkgUrn711IDi9Jt30MeYi0cAnjl0cvfv0herf88uQ00C3yuCwyo7A== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.117.160) smtp.rcpttodomain=davemloft.net smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none (0) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=XLpaJXcYjK1G7xPtNEsJTRTSzb7c8IIpG8c4WUNMpJ4=; b=asdOT7OrsNQ+o9M2iy418KfpT/QW7HvCcAHkLN1wlCgFronr1su7RdDocaL2OTifiFo5IHMVM4dTYSGrzCEBUiNSLgkizZ7XTvhwjW9C+DS4Nu/0H0fJzfgQcAInTd/L3CMbveNwBUZrgl7iUjKoPMdUNVyie6+8M1Ok6/fppLus40g3C8is8uoS1FQdFZfofgEm7RC5F1DfIOxiMFf2touza2244CT9vgwjXKv4xuaxeYDj4POpxJHrW2ei4cYF5eaA01f7uUCIy3Xtk8iy8RuJTZGFWsD9IaCaJEA/8lc21UGPXoOq/hKm0dRrMT0aHNa4vmrRtNJxGdQurD9XhQ== Received: from CH3P221CA0023.NAMP221.PROD.OUTLOOK.COM (2603:10b6:610:1e7::23) by CH3PR12MB8912.namprd12.prod.outlook.com (2603:10b6:610:169::15) 
From: Tariq Toukan
To: "David S.
Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, Andrew Lunn
CC: , Saeed Mahameed, Gal Pressman, Leon Romanovsky, Mark Bloch, Moshe Shemesh, Yevgeny Kliteynik, Tariq Toukan
Subject: [PATCH net-next 03/13] net/mlx5: fs, add HWS flow group API functions
Date: Tue, 7 Jan 2025 08:06:58 +0200
Message-ID: <20250107060708.1610882-4-tariqt@nvidia.com>
In-Reply-To: <20250107060708.1610882-1-tariqt@nvidia.com>
References: <20250107060708.1610882-1-tariqt@nvidia.com>
X-Mailer: git-send-email 2.45.0
X-Mailing-List: netdev@vger.kernel.org
From: Moshe Shemesh

Add API functions to create and destroy HW Steering flow groups. Each
flow group consists of a Backward Compatible (BWC) HW Steering matcher
which holds the flow group match criteria.

Signed-off-by: Moshe Shemesh
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/fs_core.h |  5 ++-
 .../mellanox/mlx5/core/steering/hws/fs_hws.c  | 42 +++++++++++++++++++
 .../mellanox/mlx5/core/steering/hws/fs_hws.h  |  4 ++
 3 files changed, 50 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
index e98266fb50ba..bbe3741b7868 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
@@ -285,7 +285,10 @@ struct mlx5_flow_group_mask {
 /* Type of children is fs_fte */
 struct mlx5_flow_group {
     struct fs_node node;
-    struct mlx5_fs_dr_matcher fs_dr_matcher;
+    union {
+        struct mlx5_fs_dr_matcher fs_dr_matcher;
+        struct mlx5_fs_hws_matcher fs_hws_matcher;
+    };
     struct mlx5_flow_group_mask mask;
     u32 start_index;
     u32 max_ftes;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
index e24e86f1a895..c8064bc8a86c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
@@ -153,11 +153,53 @@ static int mlx5_cmd_hws_update_root_ft(struct mlx5_flow_root_namespace *ns,
                                        disconnect);
 }
 
+static int mlx5_cmd_hws_create_flow_group(struct mlx5_flow_root_namespace *ns,
+                                          struct mlx5_flow_table *ft, u32 *in,
+                                          struct mlx5_flow_group *fg)
+{
+    struct mlx5hws_match_parameters mask;
+    struct mlx5hws_bwc_matcher *matcher;
+    u8 match_criteria_enable;
+    u32 priority;
+
+    if (mlx5_fs_cmd_is_fw_term_table(ft))
+        return mlx5_fs_cmd_get_fw_cmds()->create_flow_group(ns, ft, in, fg);
+
+    mask.match_buf = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria);
+    mask.match_sz = sizeof(fg->mask.match_criteria);
+
+    match_criteria_enable = MLX5_GET(create_flow_group_in, in,
+                                     match_criteria_enable);
+    priority = MLX5_GET(create_flow_group_in, in, start_flow_index);
+    matcher = mlx5hws_bwc_matcher_create(ft->fs_hws_table.hws_table,
+                                         priority, match_criteria_enable,
+                                         &mask);
+    if (!matcher) {
+        mlx5_core_err(ns->dev, "Failed creating matcher\n");
+        return -EINVAL;
+    }
+
+    fg->fs_hws_matcher.matcher = matcher;
+    return 0;
+}
+
+static int mlx5_cmd_hws_destroy_flow_group(struct mlx5_flow_root_namespace *ns,
+                                           struct mlx5_flow_table *ft,
+                                           struct mlx5_flow_group *fg)
+{
+    if (mlx5_fs_cmd_is_fw_term_table(ft))
+        return mlx5_fs_cmd_get_fw_cmds()->destroy_flow_group(ns, ft, fg);
+
+    return mlx5hws_bwc_matcher_destroy(fg->fs_hws_matcher.matcher);
+}
+
 static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = {
     .create_flow_table = mlx5_cmd_hws_create_flow_table,
     .destroy_flow_table = mlx5_cmd_hws_destroy_flow_table,
     .modify_flow_table = mlx5_cmd_hws_modify_flow_table,
     .update_root_ft = mlx5_cmd_hws_update_root_ft,
+    .create_flow_group = mlx5_cmd_hws_create_flow_group,
+    .destroy_flow_group = mlx5_cmd_hws_destroy_flow_group,
     .create_ns = mlx5_cmd_hws_create_ns,
     .destroy_ns = mlx5_cmd_hws_destroy_ns,
     .set_peer = mlx5_cmd_hws_set_peer,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
index 092a03f90084..da8094c66cd5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
@@ -15,6 +15,10 @@ struct mlx5_fs_hws_table {
     bool miss_ft_set;
 };
 
+struct mlx5_fs_hws_matcher {
+    struct mlx5hws_bwc_matcher *matcher;
+};
+
 #ifdef CONFIG_MLX5_HW_STEERING
 const struct mlx5_flow_cmds *mlx5_fs_cmd_get_hws_cmds(void);

From patchwork Tue Jan 7 06:06:59 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13928260
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, Andrew Lunn
CC: , Saeed Mahameed, Gal Pressman, Leon Romanovsky, Mark Bloch, Moshe Shemesh, Yevgeny Kliteynik, Tariq Toukan
Subject: [PATCH net-next 04/13] net/mlx5: fs, add HWS actions pool
Date: Tue, 7 Jan 2025 08:06:59 +0200
Message-ID: <20250107060708.1610882-5-tariqt@nvidia.com>
In-Reply-To: <20250107060708.1610882-1-tariqt@nvidia.com>
References: <20250107060708.1610882-1-tariqt@nvidia.com>
X-Mailer: git-send-email 2.45.0
X-Mailing-List: netdev@vger.kernel.org
From: Moshe Shemesh

The HW Steering actions pool helps utilize HW Steering's ability to
share steering actions among different rules. Create the pool on root
namespace creation, and add a few HW Steering actions that do not
depend on the steering rule itself and thus can be shared between rules
created on the same namespace: tag, pop_vlan, push_vlan, drop,
decap l2.

Signed-off-by: Moshe Shemesh
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
---
 .../mellanox/mlx5/core/steering/hws/fs_hws.c  | 58 +++++++++++++++++++
 .../mellanox/mlx5/core/steering/hws/fs_hws.h  |  9 +++
 2 files changed, 67 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
index c8064bc8a86c..eeaf4a84aafc 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
@@ -9,9 +9,60 @@
 #define MLX5HWS_CTX_MAX_NUM_OF_QUEUES 16
 #define MLX5HWS_CTX_QUEUE_SIZE 256
 
+static int init_hws_actions_pool(struct mlx5_fs_hws_context *fs_ctx)
+{
+    u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB | MLX5HWS_ACTION_FLAG_SHARED;
+    struct mlx5_fs_hws_actions_pool *hws_pool = &fs_ctx->hws_pool;
+    struct mlx5hws_action_reformat_header reformat_hdr = {};
+    struct mlx5hws_context *ctx = fs_ctx->hws_ctx;
+    enum mlx5hws_action_type action_type;
+
+    hws_pool->tag_action = mlx5hws_action_create_tag(ctx, flags);
+    if (!hws_pool->tag_action)
+        return -ENOMEM;
+    hws_pool->pop_vlan_action = mlx5hws_action_create_pop_vlan(ctx, flags);
+    if (!hws_pool->pop_vlan_action)
+        goto destroy_tag;
+    hws_pool->push_vlan_action = mlx5hws_action_create_push_vlan(ctx, flags);
+    if (!hws_pool->push_vlan_action)
+        goto destroy_pop_vlan;
+    hws_pool->drop_action = mlx5hws_action_create_dest_drop(ctx, flags);
+    if (!hws_pool->drop_action)
+        goto destroy_push_vlan;
+    action_type = MLX5HWS_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
+    hws_pool->decapl2_action =
+        mlx5hws_action_create_reformat(ctx, action_type, 1,
+                                       &reformat_hdr, 0, flags);
+    if (!hws_pool->decapl2_action)
+        goto destroy_drop;
+    return 0;
+
+destroy_drop:
+    mlx5hws_action_destroy(hws_pool->drop_action);
+destroy_push_vlan:
+    mlx5hws_action_destroy(hws_pool->push_vlan_action);
+destroy_pop_vlan:
+    mlx5hws_action_destroy(hws_pool->pop_vlan_action);
+destroy_tag:
+    mlx5hws_action_destroy(hws_pool->tag_action);
+    return -ENOMEM;
+}
+
+static void cleanup_hws_actions_pool(struct mlx5_fs_hws_context *fs_ctx)
+{
+    struct mlx5_fs_hws_actions_pool *hws_pool = &fs_ctx->hws_pool;
+
+    mlx5hws_action_destroy(hws_pool->decapl2_action);
+    mlx5hws_action_destroy(hws_pool->drop_action);
+    mlx5hws_action_destroy(hws_pool->push_vlan_action);
+    mlx5hws_action_destroy(hws_pool->pop_vlan_action);
+    mlx5hws_action_destroy(hws_pool->tag_action);
+}
+
 static int mlx5_cmd_hws_create_ns(struct mlx5_flow_root_namespace *ns)
 {
     struct mlx5hws_context_attr hws_ctx_attr = {};
+    int err;
 
     hws_ctx_attr.queues = min_t(int, num_online_cpus(),
                                 MLX5HWS_CTX_MAX_NUM_OF_QUEUES);
@@ -23,11 +74,18 @@ static int mlx5_cmd_hws_create_ns(struct mlx5_flow_root_namespace *ns)
         mlx5_core_err(ns->dev, "Failed to create hws flow namespace\n");
         return -EOPNOTSUPP;
     }
+    err = init_hws_actions_pool(&ns->fs_hws_context);
+    if (err) {
+        mlx5_core_err(ns->dev, "Failed to init hws actions pool\n");
+        mlx5hws_context_close(ns->fs_hws_context.hws_ctx);
+        return err;
+    }
     return 0;
 }
 
 static int mlx5_cmd_hws_destroy_ns(struct mlx5_flow_root_namespace *ns)
 {
+    cleanup_hws_actions_pool(&ns->fs_hws_context);
     return mlx5hws_context_close(ns->fs_hws_context.hws_ctx);
 }
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
index da8094c66cd5..256be4234d92 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
@@ -6,8 +6,17 @@
 #include "mlx5hws.h"
 
+struct mlx5_fs_hws_actions_pool {
+    struct mlx5hws_action *tag_action;
+    struct mlx5hws_action *pop_vlan_action;
+    struct mlx5hws_action *push_vlan_action;
+    struct mlx5hws_action *drop_action;
+    struct mlx5hws_action *decapl2_action;
+};
+
 struct mlx5_fs_hws_context {
     struct mlx5hws_context *hws_ctx;
+    struct mlx5_fs_hws_actions_pool hws_pool;
 };
 
 struct mlx5_fs_hws_table {

From patchwork Tue Jan 7 06:07:00 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13928266
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, Andrew Lunn
CC: , Saeed Mahameed, Gal Pressman, Leon Romanovsky, Mark Bloch, Moshe Shemesh, Yevgeny Kliteynik, Tariq Toukan
Subject: [PATCH net-next 05/13] net/mlx5: fs, add HWS packet reformat API function
Date: Tue, 7 Jan 2025 08:07:00 +0200
Message-ID: <20250107060708.1610882-6-tariqt@nvidia.com>
In-Reply-To: <20250107060708.1610882-1-tariqt@nvidia.com>
References: <20250107060708.1610882-1-tariqt@nvidia.com>
X-Mailer: git-send-email 2.45.0
X-Mailing-List: netdev@vger.kernel.org
From: Moshe Shemesh

Add packet reformat alloc and dealloc API functions to provide packet
reformat actions for steering rules. Add HWS action pools for each of
the following packet reformat types:
- decapl3: decapsulate l3 tunnel to l2
- encapl2: encapsulate l2 to tunnel l2
- encapl3: encapsulate l2 to tunnel l3
- insert_hdr: insert header

In addition, cache the remove header action for removing a vlan header,
as this is currently the only use case of a remove header action in the
driver.

Signed-off-by: Moshe Shemesh
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/Makefile  |   1 +
 .../net/ethernet/mellanox/mlx5/core/fs_core.h |   1 +
 .../ethernet/mellanox/mlx5/core/fs_counters.c |   5 +-
 .../net/ethernet/mellanox/mlx5/core/fs_pool.c |   5 +-
 .../net/ethernet/mellanox/mlx5/core/fs_pool.h |   5 +-
 .../mellanox/mlx5/core/steering/hws/fs_hws.c  | 289 +++++++++++++++++-
 .../mellanox/mlx5/core/steering/hws/fs_hws.h  |  12 +
 .../mlx5/core/steering/hws/fs_hws_pools.c     | 238 +++++++++++++++
 .../mlx5/core/steering/hws/fs_hws_pools.h     |  48 +++
 include/linux/mlx5/mlx5_ifc.h                 |   1 +
 10 files changed, 594 insertions(+), 11 deletions(-)
 create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c
 create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Makefile b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
index 0008b22417c8..d9a8817bb33c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/Makefile
+++ b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
@@ -152,6 +152,7 @@ mlx5_core-$(CONFIG_MLX5_HW_STEERING) += steering/hws/cmd.o \
 					steering/hws/debug.o \
 					steering/hws/vport.o \
 					steering/hws/bwc_complex.o \
+					steering/hws/fs_hws_pools.o \
 					steering/hws/fs_hws.o
 
 #
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
index bbe3741b7868..9b0575a61362 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
@@ -75,6 +75,7 @@ struct mlx5_pkt_reformat {
 	enum mlx5_flow_resource_owner owner;
 	union {
 		struct mlx5_fs_dr_action fs_dr_action;
+		struct mlx5_fs_hws_action fs_hws_action;
 		u32 id;
 	};
 };
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
index d8e1c4ebd364..94d9caacd50f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
@@ -449,7 +449,8 @@ static void mlx5_fc_init(struct mlx5_fc *counter, struct mlx5_fc_bulk *bulk,
 	counter->id = id;
 }
 
-static struct mlx5_fs_bulk *mlx5_fc_bulk_create(struct mlx5_core_dev *dev)
+static struct mlx5_fs_bulk *mlx5_fc_bulk_create(struct mlx5_core_dev *dev,
+						void *pool_ctx)
 {
 	enum mlx5_fc_bulk_alloc_bitmask alloc_bitmask;
 	struct mlx5_fc_bulk *fc_bulk;
@@ -518,7 +519,7 @@ static const struct mlx5_fs_pool_ops mlx5_fc_pool_ops = {
 static void mlx5_fc_pool_init(struct mlx5_fs_pool *fc_pool,
 			      struct mlx5_core_dev *dev)
 {
-	mlx5_fs_pool_init(fc_pool, dev, &mlx5_fc_pool_ops);
+	mlx5_fs_pool_init(fc_pool, dev, &mlx5_fc_pool_ops, NULL);
 }
 
 static void mlx5_fc_pool_cleanup(struct mlx5_fs_pool *fc_pool)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_pool.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_pool.c
index b891d7b9e3e0..f6c226664602 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_pool.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_pool.c
@@ -56,11 +56,12 @@ static int mlx5_fs_bulk_release_index(struct mlx5_fs_bulk *fs_bulk, int index)
 }
 
 void mlx5_fs_pool_init(struct mlx5_fs_pool *pool, struct mlx5_core_dev *dev,
-		       const struct mlx5_fs_pool_ops *ops)
+		       const struct mlx5_fs_pool_ops *ops, void *pool_ctx)
 {
 	WARN_ON_ONCE(!ops || !ops->bulk_destroy || !ops->bulk_create ||
 		     !ops->update_threshold);
 	pool->dev = dev;
+	pool->pool_ctx = pool_ctx;
 	mutex_init(&pool->pool_lock);
 	INIT_LIST_HEAD(&pool->fully_used);
 	INIT_LIST_HEAD(&pool->partially_used);
@@ -91,7 +92,7 @@ mlx5_fs_pool_alloc_new_bulk(struct mlx5_fs_pool *fs_pool)
 	struct mlx5_core_dev *dev = fs_pool->dev;
 	struct mlx5_fs_bulk *new_bulk;
 
-	new_bulk = fs_pool->ops->bulk_create(dev);
+	new_bulk = fs_pool->ops->bulk_create(dev, fs_pool->pool_ctx);
 	if (new_bulk)
 		fs_pool->available_units += new_bulk->bulk_len;
 	fs_pool->ops->update_threshold(fs_pool);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_pool.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_pool.h
index 3b149863260c..f04ec3107498 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_pool.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_pool.h
@@ -21,7 +21,8 @@ struct mlx5_fs_pool;
 
 struct mlx5_fs_pool_ops {
 	int (*bulk_destroy)(struct mlx5_core_dev *dev, struct mlx5_fs_bulk *bulk);
-	struct mlx5_fs_bulk * (*bulk_create)(struct mlx5_core_dev *dev);
+	struct mlx5_fs_bulk * (*bulk_create)(struct mlx5_core_dev *dev,
+					     void *pool_ctx);
 	void (*update_threshold)(struct mlx5_fs_pool *pool);
 };
 
@@ -44,7 +45,7 @@ void mlx5_fs_bulk_cleanup(struct mlx5_fs_bulk *fs_bulk);
 int mlx5_fs_bulk_get_free_amount(struct mlx5_fs_bulk *bulk);
 
 void mlx5_fs_pool_init(struct mlx5_fs_pool *pool, struct mlx5_core_dev *dev,
-		       const struct mlx5_fs_pool_ops *ops);
+		       const struct mlx5_fs_pool_ops *ops, void *pool_ctx);
 void mlx5_fs_pool_cleanup(struct mlx5_fs_pool *pool);
 int mlx5_fs_pool_acquire_index(struct mlx5_fs_pool *fs_pool,
 			       struct mlx5_fs_pool_index *pool_index);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
index eeaf4a84aafc..723865140b2e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
@@ -4,22 +4,30 @@
 #include
 #include
 #include
+#include "fs_hws_pools.h"
 #include "mlx5hws.h"
 
 #define MLX5HWS_CTX_MAX_NUM_OF_QUEUES 16
 #define MLX5HWS_CTX_QUEUE_SIZE 256
 
-static int init_hws_actions_pool(struct mlx5_fs_hws_context *fs_ctx)
+static struct mlx5hws_action *
+create_action_remove_header_vlan(struct mlx5hws_context *ctx);
+static void destroy_pr_pool(struct mlx5_fs_pool *pool, struct xarray *pr_pools,
+			    unsigned long index);
+
+static int init_hws_actions_pool(struct mlx5_core_dev *dev,
+				 struct mlx5_fs_hws_context *fs_ctx)
 {
 	u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB | MLX5HWS_ACTION_FLAG_SHARED;
 	struct mlx5_fs_hws_actions_pool *hws_pool = &fs_ctx->hws_pool;
 	struct mlx5hws_action_reformat_header reformat_hdr = {};
 	struct mlx5hws_context *ctx = fs_ctx->hws_ctx;
 	enum mlx5hws_action_type action_type;
+	int err = -ENOMEM;
 
 	hws_pool->tag_action = mlx5hws_action_create_tag(ctx, flags);
 	if (!hws_pool->tag_action)
-		return -ENOMEM;
+		return err;
 	hws_pool->pop_vlan_action = mlx5hws_action_create_pop_vlan(ctx, flags);
 	if (!hws_pool->pop_vlan_action)
 		goto destroy_tag;
@@ -35,8 +43,27 @@ static int init_hws_actions_pool(struct mlx5_fs_hws_context *fs_ctx)
 					     &reformat_hdr, 0, flags);
 	if (!hws_pool->decapl2_action)
 		goto destroy_drop;
+	hws_pool->remove_hdr_vlan_action = create_action_remove_header_vlan(ctx);
+	if (!hws_pool->remove_hdr_vlan_action)
+		goto destroy_decapl2;
+	err = mlx5_fs_hws_pr_pool_init(&hws_pool->insert_hdr_pool, dev, 0,
+				       MLX5HWS_ACTION_TYP_INSERT_HEADER);
+	if (err)
+		goto destroy_remove_hdr;
+	err = mlx5_fs_hws_pr_pool_init(&hws_pool->dl3tnltol2_pool, dev, 0,
+				       MLX5HWS_ACTION_TYP_REFORMAT_TNL_L3_TO_L2);
+	if (err)
+		goto cleanup_insert_hdr;
+	xa_init(&hws_pool->el2tol3tnl_pools);
+	xa_init(&hws_pool->el2tol2tnl_pools);
 	return 0;
 
+cleanup_insert_hdr:
+	mlx5_fs_hws_pr_pool_cleanup(&hws_pool->insert_hdr_pool);
+destroy_remove_hdr:
+	mlx5hws_action_destroy(hws_pool->remove_hdr_vlan_action);
+destroy_decapl2:
+	mlx5hws_action_destroy(hws_pool->decapl2_action);
 destroy_drop:
 	mlx5hws_action_destroy(hws_pool->drop_action);
 destroy_push_vlan:
@@ -45,13 +72,24 @@ static int init_hws_actions_pool(struct mlx5_fs_hws_context *fs_ctx)
 	mlx5hws_action_destroy(hws_pool->pop_vlan_action);
 destroy_tag:
 	mlx5hws_action_destroy(hws_pool->tag_action);
-	return -ENOMEM;
+	return err;
 }
 
 static void cleanup_hws_actions_pool(struct mlx5_fs_hws_context *fs_ctx)
 {
 	struct mlx5_fs_hws_actions_pool *hws_pool = &fs_ctx->hws_pool;
-
+	struct mlx5_fs_pool *pool;
+	unsigned long i;
+
+	xa_for_each(&hws_pool->el2tol2tnl_pools, i, pool)
+		destroy_pr_pool(pool, &hws_pool->el2tol2tnl_pools, i);
+	xa_destroy(&hws_pool->el2tol2tnl_pools);
+	xa_for_each(&hws_pool->el2tol3tnl_pools, i, pool)
+		destroy_pr_pool(pool, &hws_pool->el2tol3tnl_pools, i);
+	xa_destroy(&hws_pool->el2tol3tnl_pools);
+	mlx5_fs_hws_pr_pool_cleanup(&hws_pool->dl3tnltol2_pool);
+	mlx5_fs_hws_pr_pool_cleanup(&hws_pool->insert_hdr_pool);
+	mlx5hws_action_destroy(hws_pool->remove_hdr_vlan_action);
 	mlx5hws_action_destroy(hws_pool->decapl2_action);
 	mlx5hws_action_destroy(hws_pool->drop_action);
 	mlx5hws_action_destroy(hws_pool->push_vlan_action);
@@ -74,7 +112,7 @@ static int mlx5_cmd_hws_create_ns(struct mlx5_flow_root_namespace *ns)
 		mlx5_core_err(ns->dev, "Failed to create hws flow namespace\n");
 		return -EOPNOTSUPP;
 	}
-	err = init_hws_actions_pool(&ns->fs_hws_context);
+	err = init_hws_actions_pool(ns->dev, &ns->fs_hws_context);
 	if (err) {
 		mlx5_core_err(ns->dev, "Failed to init hws actions pool\n");
 		mlx5hws_context_close(ns->fs_hws_context.hws_ctx);
@@ -251,6 +289,245 @@ static int mlx5_cmd_hws_destroy_flow_group(struct mlx5_flow_root_namespace *ns,
 	return mlx5hws_bwc_matcher_destroy(fg->fs_hws_matcher.matcher);
 }
 
+static struct mlx5hws_action *
+create_action_remove_header_vlan(struct mlx5hws_context *ctx)
+{
+	u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB | MLX5HWS_ACTION_FLAG_SHARED;
+	struct mlx5hws_action_remove_header_attr remove_hdr_vlan = {};
+
+	/* MAC anchor not supported in HWS reformat, use VLAN anchor */
+	remove_hdr_vlan.anchor = MLX5_REFORMAT_CONTEXT_ANCHOR_VLAN_START;
+	remove_hdr_vlan.offset = 0;
+	remove_hdr_vlan.size = sizeof(struct vlan_hdr);
+	return mlx5hws_action_create_remove_header(ctx, &remove_hdr_vlan, flags);
+}
+
+static struct mlx5hws_action *
+get_action_remove_header_vlan(struct mlx5_fs_hws_context *fs_ctx,
+			      struct mlx5_pkt_reformat_params *params)
+{
+	if (!params ||
+	    params->param_0 != MLX5_REFORMAT_CONTEXT_ANCHOR_MAC_START ||
+	    params->param_1 != offsetof(struct vlan_ethhdr, h_vlan_proto) ||
+	    params->size != sizeof(struct vlan_hdr))
+		return NULL;
+
+	return fs_ctx->hws_pool.remove_hdr_vlan_action;
+}
+
+static int
+verify_insert_header_params(struct mlx5_core_dev *mdev,
+			    struct mlx5_pkt_reformat_params *params)
+{
+	if ((!params->data && params->size) || (params->data && !params->size) ||
+	    MLX5_CAP_GEN_2(mdev, max_reformat_insert_size) < params->size ||
+	    MLX5_CAP_GEN_2(mdev, max_reformat_insert_offset) < params->param_1) {
+		mlx5_core_err(mdev, "Invalid reformat params for INSERT_HDR\n");
+		return -EINVAL;
+	}
+	if (params->param_0 != MLX5_FS_INSERT_HDR_VLAN_ANCHOR ||
+	    params->param_1 != MLX5_FS_INSERT_HDR_VLAN_OFFSET ||
+	    params->size != MLX5_FS_INSERT_HDR_VLAN_SIZE) {
+		mlx5_core_err(mdev, "Only vlan insert header supported\n");
+		return -EOPNOTSUPP;
+	}
+	return 0;
+}
+
+static int verify_encap_decap_params(struct mlx5_core_dev *dev,
+				     struct mlx5_pkt_reformat_params *params)
+{
+	if (params->param_0 || params->param_1) {
+		mlx5_core_err(dev, "Invalid reformat params\n");
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static struct mlx5_fs_pool *
+get_pr_encap_pool(struct mlx5_core_dev *dev, struct xarray *pr_pools,
+		  enum mlx5hws_action_type reformat_type, size_t size)
+{
+	struct mlx5_fs_pool *pr_pool;
+	unsigned long index = size;
+	int err;
+
+	pr_pool = xa_load(pr_pools, index);
+	if (pr_pool)
+		return pr_pool;
+
+	pr_pool = kzalloc(sizeof(*pr_pool), GFP_KERNEL);
+	if (!pr_pool)
+		return ERR_PTR(-ENOMEM);
+	err = mlx5_fs_hws_pr_pool_init(pr_pool, dev, size, reformat_type);
+	if (err)
+		goto free_pr_pool;
+	err = xa_insert(pr_pools, index, pr_pool, GFP_KERNEL);
+	if (err)
+		goto cleanup_pr_pool;
+	return pr_pool;
+
+cleanup_pr_pool:
+	mlx5_fs_hws_pr_pool_cleanup(pr_pool);
+free_pr_pool:
+	kfree(pr_pool);
+	return ERR_PTR(err);
+}
+
+static void destroy_pr_pool(struct mlx5_fs_pool *pool, struct xarray *pr_pools,
+			    unsigned long index)
+{
+	xa_erase(pr_pools, index);
+	mlx5_fs_hws_pr_pool_cleanup(pool);
+	kfree(pool);
+}
+
+static int
+mlx5_cmd_hws_packet_reformat_alloc(struct mlx5_flow_root_namespace *ns,
+				   struct mlx5_pkt_reformat_params *params,
+				   enum mlx5_flow_namespace_type namespace,
+				   struct mlx5_pkt_reformat *pkt_reformat)
+{
+	struct mlx5_fs_hws_context *fs_ctx = &ns->fs_hws_context;
+	struct mlx5_fs_hws_actions_pool *hws_pool;
+	struct mlx5hws_action *hws_action = NULL;
+	struct mlx5_fs_hws_pr *pr_data = NULL;
+	struct mlx5_fs_pool *pr_pool = NULL;
+	struct mlx5_core_dev *dev = ns->dev;
+	u8 hdr_idx = 0;
+	int err;
+
+	if (!params)
+		return -EINVAL;
+
+	hws_pool = &fs_ctx->hws_pool;
+
+	switch (params->type) {
+	case MLX5_REFORMAT_TYPE_L2_TO_VXLAN:
+	case MLX5_REFORMAT_TYPE_L2_TO_NVGRE:
+	case MLX5_REFORMAT_TYPE_L2_TO_L2_TUNNEL:
+		if (verify_encap_decap_params(dev, params))
+			return -EINVAL;
+		pr_pool = get_pr_encap_pool(dev, &hws_pool->el2tol2tnl_pools,
+					    MLX5HWS_ACTION_TYP_REFORMAT_L2_TO_TNL_L2,
+					    params->size);
+		if (IS_ERR(pr_pool))
+			return PTR_ERR(pr_pool);
+		break;
+	case MLX5_REFORMAT_TYPE_L2_TO_L3_TUNNEL:
+		if (verify_encap_decap_params(dev, params))
+			return -EINVAL;
+		pr_pool = get_pr_encap_pool(dev, &hws_pool->el2tol3tnl_pools,
+					    MLX5HWS_ACTION_TYP_REFORMAT_L2_TO_TNL_L3,
+					    params->size);
+		if (IS_ERR(pr_pool))
+			return PTR_ERR(pr_pool);
+		break;
+	case MLX5_REFORMAT_TYPE_L3_TUNNEL_TO_L2:
+		if (verify_encap_decap_params(dev, params))
+			return -EINVAL;
+		pr_pool = &hws_pool->dl3tnltol2_pool;
+		hdr_idx = params->size == ETH_HLEN ?
+			  MLX5_FS_DL3TNLTOL2_MAC_HDR_IDX :
+			  MLX5_FS_DL3TNLTOL2_MAC_VLAN_HDR_IDX;
+		break;
+	case MLX5_REFORMAT_TYPE_INSERT_HDR:
+		err = verify_insert_header_params(dev, params);
+		if (err)
+			return err;
+		pr_pool = &hws_pool->insert_hdr_pool;
+		break;
+	case MLX5_REFORMAT_TYPE_REMOVE_HDR:
+		hws_action = get_action_remove_header_vlan(fs_ctx, params);
+		if (!hws_action)
+			mlx5_core_err(dev, "Only vlan remove header supported\n");
+		break;
+	default:
+		mlx5_core_err(ns->dev, "Packet-reformat not supported(%d)\n",
+			      params->type);
+		return -EOPNOTSUPP;
+	}
+
+	if (pr_pool) {
+		pr_data = mlx5_fs_hws_pr_pool_acquire_pr(pr_pool);
+		if (IS_ERR_OR_NULL(pr_data))
+			return !pr_data ? -EINVAL : PTR_ERR(pr_data);
+		hws_action = pr_data->bulk->hws_action;
+		if (!hws_action) {
+			mlx5_core_err(dev,
+				      "Failed allocating packet-reformat action\n");
+			err = -EINVAL;
+			goto release_pr;
+		}
+		pr_data->data = kmemdup(params->data, params->size, GFP_KERNEL);
+		if (!pr_data->data) {
+			err = -ENOMEM;
+			goto release_pr;
+		}
+		pr_data->hdr_idx = hdr_idx;
+		pr_data->data_size = params->size;
+		pkt_reformat->fs_hws_action.pr_data = pr_data;
+	}
+
+	pkt_reformat->owner = MLX5_FLOW_RESOURCE_OWNER_SW;
+	pkt_reformat->fs_hws_action.hws_action = hws_action;
+	return 0;
+
+release_pr:
+	if (pr_pool && pr_data)
+		mlx5_fs_hws_pr_pool_release_pr(pr_pool, pr_data);
+	return err;
+}
+
+static void mlx5_cmd_hws_packet_reformat_dealloc(struct mlx5_flow_root_namespace *ns,
+						 struct mlx5_pkt_reformat *pkt_reformat)
+{
+	struct mlx5_fs_hws_actions_pool *hws_pool = &ns->fs_hws_context.hws_pool;
+	struct mlx5_core_dev *dev = ns->dev;
+	struct mlx5_fs_hws_pr *pr_data;
+	struct mlx5_fs_pool *pr_pool;
+
+	if (pkt_reformat->reformat_type == MLX5_REFORMAT_TYPE_REMOVE_HDR)
+		return;
+
+	if (!pkt_reformat->fs_hws_action.pr_data) {
+		mlx5_core_err(ns->dev, "Failed release packet-reformat\n");
+		return;
+	}
+	pr_data = pkt_reformat->fs_hws_action.pr_data;
+
+	switch (pkt_reformat->reformat_type) {
+	case MLX5_REFORMAT_TYPE_L2_TO_VXLAN:
+	case MLX5_REFORMAT_TYPE_L2_TO_NVGRE:
+	case MLX5_REFORMAT_TYPE_L2_TO_L2_TUNNEL:
+		pr_pool = get_pr_encap_pool(dev, &hws_pool->el2tol2tnl_pools,
+					    MLX5HWS_ACTION_TYP_REFORMAT_L2_TO_TNL_L2,
+					    pr_data->data_size);
+		break;
+	case MLX5_REFORMAT_TYPE_L2_TO_L3_TUNNEL:
+		pr_pool = get_pr_encap_pool(dev, &hws_pool->el2tol3tnl_pools,
+					    MLX5HWS_ACTION_TYP_REFORMAT_L2_TO_TNL_L3,
+					    pr_data->data_size);
+		break;
+	case MLX5_REFORMAT_TYPE_L3_TUNNEL_TO_L2:
+		pr_pool = &hws_pool->dl3tnltol2_pool;
+		break;
+	case MLX5_REFORMAT_TYPE_INSERT_HDR:
+		pr_pool = &hws_pool->insert_hdr_pool;
+		break;
+	default:
+		mlx5_core_err(ns->dev, "Unknown packet-reformat type\n");
+		return;
+	}
+	if (!pkt_reformat->fs_hws_action.pr_data || IS_ERR(pr_pool)) {
+		mlx5_core_err(ns->dev, "Failed release packet-reformat\n");
+		return;
+	}
+	kfree(pr_data->data);
+	mlx5_fs_hws_pr_pool_release_pr(pr_pool, pr_data);
+	pkt_reformat->fs_hws_action.pr_data = NULL;
+}
+
 static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = {
 	.create_flow_table = mlx5_cmd_hws_create_flow_table,
 	.destroy_flow_table = mlx5_cmd_hws_destroy_flow_table,
@@ -258,6 +535,8 @@ static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = {
 	.update_root_ft = mlx5_cmd_hws_update_root_ft,
 	.create_flow_group = mlx5_cmd_hws_create_flow_group,
 	.destroy_flow_group = mlx5_cmd_hws_destroy_flow_group,
+	.packet_reformat_alloc = mlx5_cmd_hws_packet_reformat_alloc,
+	.packet_reformat_dealloc = mlx5_cmd_hws_packet_reformat_dealloc,
 	.create_ns = mlx5_cmd_hws_create_ns,
 	.destroy_ns = mlx5_cmd_hws_destroy_ns,
 	.set_peer = mlx5_cmd_hws_set_peer,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
index 256be4234d92..2292eb08ef24 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
@@ -5,6 +5,7 @@
 #define _MLX5_FS_HWS_
 
 #include "mlx5hws.h"
+#include "fs_hws_pools.h"
 
 struct mlx5_fs_hws_actions_pool {
 	struct mlx5hws_action *tag_action;
@@ -12,6 +13,11 @@ struct mlx5_fs_hws_actions_pool {
 	struct mlx5hws_action *push_vlan_action;
 	struct mlx5hws_action *drop_action;
 	struct mlx5hws_action *decapl2_action;
+	struct mlx5hws_action *remove_hdr_vlan_action;
+	struct mlx5_fs_pool insert_hdr_pool;
+	struct mlx5_fs_pool dl3tnltol2_pool;
+	struct xarray el2tol3tnl_pools;
+	struct xarray el2tol2tnl_pools;
 };
 
 struct mlx5_fs_hws_context {
@@ -24,6 +30,12 @@ struct mlx5_fs_hws_table {
 	bool miss_ft_set;
 };
 
+struct mlx5_fs_hws_action {
+	struct mlx5hws_action *hws_action;
+	struct mlx5_fs_pool *fs_pool;
+	struct mlx5_fs_hws_pr *pr_data;
+};
+
 struct mlx5_fs_hws_matcher {
 	struct mlx5hws_bwc_matcher *matcher;
 };
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c
new file mode 100644
index 000000000000..14f732f3f09c
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c
@@ -0,0 +1,238 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/* Copyright (c) 2024 NVIDIA Corporation & Affiliates */
+
+#include
+#include "fs_hws_pools.h"
+
+#define MLX5_FS_HWS_DEFAULT_BULK_LEN 65536
+#define MLX5_FS_HWS_POOL_MAX_THRESHOLD BIT(18)
+#define MLX5_FS_HWS_POOL_USED_BUFF_RATIO 10
+
+static struct mlx5hws_action *
+dl3tnltol2_bulk_action_create(struct mlx5hws_context *ctx)
+{
+	struct mlx5hws_action_reformat_header reformat_hdr[2] = {};
+	u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB;
+	enum mlx5hws_action_type reformat_type;
+	u32 log_bulk_size;
+
+	reformat_type = MLX5HWS_ACTION_TYP_REFORMAT_TNL_L3_TO_L2;
+	reformat_hdr[MLX5_FS_DL3TNLTOL2_MAC_HDR_IDX].sz = ETH_HLEN;
+	reformat_hdr[MLX5_FS_DL3TNLTOL2_MAC_VLAN_HDR_IDX].sz = ETH_HLEN + VLAN_HLEN;
+
+	log_bulk_size = ilog2(MLX5_FS_HWS_DEFAULT_BULK_LEN);
+	return mlx5hws_action_create_reformat(ctx, reformat_type, 2,
+					      reformat_hdr, log_bulk_size, flags);
+}
+
+static struct mlx5hws_action *
+el2tol3tnl_bulk_action_create(struct mlx5hws_context *ctx, size_t data_size)
+{
+	struct mlx5hws_action_reformat_header reformat_hdr = {};
+	u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB;
+	enum mlx5hws_action_type reformat_type;
+	u32 log_bulk_size;
+
+	reformat_type = MLX5HWS_ACTION_TYP_REFORMAT_L2_TO_TNL_L3;
+	reformat_hdr.sz = data_size;
+
+	log_bulk_size = ilog2(MLX5_FS_HWS_DEFAULT_BULK_LEN);
+	return mlx5hws_action_create_reformat(ctx, reformat_type, 1,
+					      &reformat_hdr, log_bulk_size, flags);
+}
+
+static struct mlx5hws_action *
+el2tol2tnl_bulk_action_create(struct mlx5hws_context *ctx, size_t data_size)
+{
+	struct mlx5hws_action_reformat_header reformat_hdr = {};
+	u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB;
+	enum mlx5hws_action_type reformat_type;
+	u32 log_bulk_size;
+
+	reformat_type = MLX5HWS_ACTION_TYP_REFORMAT_L2_TO_TNL_L2;
+	reformat_hdr.sz = data_size;
+
+	log_bulk_size = ilog2(MLX5_FS_HWS_DEFAULT_BULK_LEN);
+	return mlx5hws_action_create_reformat(ctx, reformat_type, 1,
+					      &reformat_hdr, log_bulk_size, flags);
+}
+
+static struct mlx5hws_action *
+insert_hdr_bulk_action_create(struct mlx5hws_context *ctx)
+{
+	struct mlx5hws_action_insert_header insert_hdr = {};
+	u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB;
+	u32 log_bulk_size;
+
+	log_bulk_size = ilog2(MLX5_FS_HWS_DEFAULT_BULK_LEN);
+	insert_hdr.hdr.sz = MLX5_FS_INSERT_HDR_VLAN_SIZE;
+	insert_hdr.anchor = MLX5_FS_INSERT_HDR_VLAN_ANCHOR;
+	insert_hdr.offset = MLX5_FS_INSERT_HDR_VLAN_OFFSET;
+
+	return mlx5hws_action_create_insert_header(ctx, 1, &insert_hdr,
+						   log_bulk_size, flags);
+}
+
+static struct mlx5hws_action *
+pr_bulk_action_create(struct mlx5_core_dev *dev,
+		      struct mlx5_fs_hws_pr_pool_ctx *pr_pool_ctx)
+{
+	struct mlx5_flow_root_namespace *root_ns;
+	struct mlx5hws_context *ctx;
+	size_t encap_data_size;
+
+	root_ns = mlx5_get_root_namespace(dev, MLX5_FLOW_NAMESPACE_FDB);
+	if (!root_ns || root_ns->mode != MLX5_FLOW_STEERING_MODE_HMFS)
+		return NULL;
+
+	ctx = root_ns->fs_hws_context.hws_ctx;
+	if (!ctx)
+		return NULL;
+
+	encap_data_size = pr_pool_ctx->encap_data_size;
+	switch (pr_pool_ctx->reformat_type) {
+	case MLX5HWS_ACTION_TYP_REFORMAT_TNL_L3_TO_L2:
+		return dl3tnltol2_bulk_action_create(ctx);
+	case MLX5HWS_ACTION_TYP_REFORMAT_L2_TO_TNL_L3:
+		return el2tol3tnl_bulk_action_create(ctx, encap_data_size);
+	case MLX5HWS_ACTION_TYP_REFORMAT_L2_TO_TNL_L2:
+		return el2tol2tnl_bulk_action_create(ctx, encap_data_size);
+	case MLX5HWS_ACTION_TYP_INSERT_HEADER:
+		return insert_hdr_bulk_action_create(ctx);
+	default:
+		return NULL;
+	}
+	return NULL;
+}
+
+static struct mlx5_fs_bulk *
+mlx5_fs_hws_pr_bulk_create(struct mlx5_core_dev *dev, void *pool_ctx)
+{
+	struct mlx5_fs_hws_pr_pool_ctx *pr_pool_ctx;
+	struct mlx5_fs_hws_pr_bulk *pr_bulk;
+	int bulk_len;
+	int i;
+
+	if (!pool_ctx)
+		return NULL;
+	pr_pool_ctx = pool_ctx;
+	bulk_len = MLX5_FS_HWS_DEFAULT_BULK_LEN;
+	pr_bulk = kvzalloc(struct_size(pr_bulk, prs_data, bulk_len), GFP_KERNEL);
+	if (!pr_bulk)
+		return NULL;
+
+	if (mlx5_fs_bulk_init(dev, &pr_bulk->fs_bulk, bulk_len))
+		goto free_pr_bulk;
+
+	for (i = 0; i < bulk_len; i++) {
+		pr_bulk->prs_data[i].bulk = pr_bulk;
+		pr_bulk->prs_data[i].offset = i;
+	}
+
+	pr_bulk->hws_action = pr_bulk_action_create(dev, pr_pool_ctx);
+	if (!pr_bulk->hws_action)
+		goto cleanup_fs_bulk;
+
+	return &pr_bulk->fs_bulk;
+
+cleanup_fs_bulk:
+	mlx5_fs_bulk_cleanup(&pr_bulk->fs_bulk);
+free_pr_bulk:
+	kvfree(pr_bulk);
+	return NULL;
+}
+
+static int
+mlx5_fs_hws_pr_bulk_destroy(struct mlx5_core_dev *dev, struct mlx5_fs_bulk *fs_bulk)
+{
+	struct mlx5_fs_hws_pr_bulk *pr_bulk;
+
+	pr_bulk = container_of(fs_bulk, struct mlx5_fs_hws_pr_bulk, fs_bulk);
+	if (mlx5_fs_bulk_get_free_amount(fs_bulk) < fs_bulk->bulk_len) {
+		mlx5_core_err(dev, "Freeing bulk before all reformats were released\n");
+		return -EBUSY;
+	}
+
+	mlx5hws_action_destroy(pr_bulk->hws_action);
+	mlx5_fs_bulk_cleanup(fs_bulk);
+	kvfree(pr_bulk);
+
+	return 0;
+}
+
+static void mlx5_hws_pool_update_threshold(struct mlx5_fs_pool *hws_pool)
+{
+	hws_pool->threshold = min_t(int, MLX5_FS_HWS_POOL_MAX_THRESHOLD,
+				    hws_pool->used_units / MLX5_FS_HWS_POOL_USED_BUFF_RATIO);
+}
+
+static const struct mlx5_fs_pool_ops mlx5_fs_hws_pr_pool_ops = {
+	.bulk_create = mlx5_fs_hws_pr_bulk_create,
+	.bulk_destroy = mlx5_fs_hws_pr_bulk_destroy,
+	.update_threshold = mlx5_hws_pool_update_threshold,
+};
+
+int mlx5_fs_hws_pr_pool_init(struct mlx5_fs_pool *pr_pool,
+			     struct mlx5_core_dev *dev, size_t encap_data_size,
+			     enum mlx5hws_action_type reformat_type)
+{
+	struct mlx5_fs_hws_pr_pool_ctx *pr_pool_ctx;
+
+	if (reformat_type != MLX5HWS_ACTION_TYP_INSERT_HEADER &&
+	    reformat_type != MLX5HWS_ACTION_TYP_REFORMAT_TNL_L3_TO_L2 &&
+	    reformat_type != MLX5HWS_ACTION_TYP_REFORMAT_L2_TO_TNL_L3 &&
+	    reformat_type != MLX5HWS_ACTION_TYP_REFORMAT_L2_TO_TNL_L2)
+		return -EOPNOTSUPP;
+
+	pr_pool_ctx = kzalloc(sizeof(*pr_pool_ctx), GFP_KERNEL);
+	if (!pr_pool_ctx)
+		return -ENOMEM;
+	pr_pool_ctx->reformat_type = reformat_type;
+	pr_pool_ctx->encap_data_size = encap_data_size;
+	mlx5_fs_pool_init(pr_pool, dev, &mlx5_fs_hws_pr_pool_ops, pr_pool_ctx);
+	return 0;
+}
+
+void mlx5_fs_hws_pr_pool_cleanup(struct mlx5_fs_pool *pr_pool)
+{
+	struct mlx5_fs_hws_pr_pool_ctx *pr_pool_ctx;
+
+	mlx5_fs_pool_cleanup(pr_pool);
+	pr_pool_ctx = pr_pool->pool_ctx;
+	if (!pr_pool_ctx)
+		return;
+	kfree(pr_pool_ctx);
+}
+
+struct mlx5_fs_hws_pr *
+mlx5_fs_hws_pr_pool_acquire_pr(struct mlx5_fs_pool *pr_pool)
+{
+	struct mlx5_fs_pool_index pool_index = {};
+	struct mlx5_fs_hws_pr_bulk *pr_bulk;
+	int err;
+
+	err = mlx5_fs_pool_acquire_index(pr_pool, &pool_index);
+	if (err)
+		return ERR_PTR(err);
+	pr_bulk = container_of(pool_index.fs_bulk, struct mlx5_fs_hws_pr_bulk,
+			       fs_bulk);
+	return &pr_bulk->prs_data[pool_index.index];
+}
+
+void mlx5_fs_hws_pr_pool_release_pr(struct mlx5_fs_pool *pr_pool,
+				    struct mlx5_fs_hws_pr *pr_data)
+{
+	struct mlx5_fs_bulk *fs_bulk = &pr_data->bulk->fs_bulk;
+	struct mlx5_fs_pool_index pool_index = {};
+	struct mlx5_core_dev *dev = pr_pool->dev;
+
+	pool_index.fs_bulk = fs_bulk;
+	pool_index.index = pr_data->offset;
+	if (mlx5_fs_pool_release_index(pr_pool, &pool_index))
+		mlx5_core_warn(dev, "Attempted to release packet reformat which is not acquired\n");
+}
+
+struct mlx5hws_action *mlx5_fs_hws_pr_get_action(struct mlx5_fs_hws_pr *pr_data)
+{
+	return pr_data->bulk->hws_action;
+}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h
new file mode 100644
index 000000000000..93ec5b3b76fe
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2024 NVIDIA Corporation & Affiliates */
+
+#ifndef __MLX5_FS_HWS_POOLS_H__
+#define __MLX5_FS_HWS_POOLS_H__
+
+#include
+#include "fs_pool.h"
+#include "fs_core.h"
+
+#define MLX5_FS_INSERT_HDR_VLAN_ANCHOR MLX5_REFORMAT_CONTEXT_ANCHOR_MAC_START
+#define MLX5_FS_INSERT_HDR_VLAN_OFFSET offsetof(struct vlan_ethhdr, h_vlan_proto)
+#define MLX5_FS_INSERT_HDR_VLAN_SIZE sizeof(struct vlan_hdr)
+
+enum {
+	MLX5_FS_DL3TNLTOL2_MAC_HDR_IDX = 0,
+	MLX5_FS_DL3TNLTOL2_MAC_VLAN_HDR_IDX,
+};
+
+struct mlx5_fs_hws_pr {
+	struct mlx5_fs_hws_pr_bulk *bulk;
+	u32 offset;
+	u8 hdr_idx;
+	u8 *data;
+	size_t data_size;
+};
+
+struct mlx5_fs_hws_pr_bulk {
+	struct mlx5_fs_bulk fs_bulk;
+	struct mlx5hws_action *hws_action;
+	struct mlx5_fs_hws_pr prs_data[];
+};
+
+struct mlx5_fs_hws_pr_pool_ctx {
+	enum mlx5hws_action_type reformat_type;
+	size_t encap_data_size;
+};
+
+int mlx5_fs_hws_pr_pool_init(struct mlx5_fs_pool *pr_pool,
+			     struct mlx5_core_dev *dev, size_t encap_data_size,
+			     enum mlx5hws_action_type reformat_type);
+void mlx5_fs_hws_pr_pool_cleanup(struct mlx5_fs_pool *pr_pool);
+
+struct mlx5_fs_hws_pr *mlx5_fs_hws_pr_pool_acquire_pr(struct mlx5_fs_pool *pr_pool);
+void mlx5_fs_hws_pr_pool_release_pr(struct mlx5_fs_pool *pr_pool,
+				    struct mlx5_fs_hws_pr *pr_data);
+struct mlx5hws_action *mlx5_fs_hws_pr_get_action(struct mlx5_fs_hws_pr *pr_data);
+#endif /* __MLX5_FS_HWS_POOLS_H__ */
diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 370f533da107..bb99a35fc6a2 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -7025,6 +7025,7 @@ struct mlx5_ifc_alloc_packet_reformat_context_out_bits {
 
 enum {
 	MLX5_REFORMAT_CONTEXT_ANCHOR_MAC_START = 0x1,
+	MLX5_REFORMAT_CONTEXT_ANCHOR_VLAN_START = 0x2,
 	MLX5_REFORMAT_CONTEXT_ANCHOR_IP_START = 0x7,
	MLX5_REFORMAT_CONTEXT_ANCHOR_TCP_UDP_START = 0x9,
};

From patchwork Tue Jan 7 06:07:01 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13928261
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S.
Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn" CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , Mark Bloch , Moshe Shemesh , Yevgeny Kliteynik , Tariq Toukan Subject: [PATCH net-next 06/13] net/mlx5: fs, add HWS modify header API function Date: Tue, 7 Jan 2025 08:07:01 +0200 Message-ID: <20250107060708.1610882-7-tariqt@nvidia.com> X-Mailer: git-send-email 2.45.0 In-Reply-To: <20250107060708.1610882-1-tariqt@nvidia.com> References: <20250107060708.1610882-1-tariqt@nvidia.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-NV-OnPremToCloud: AnonymousSubmission X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: CH2PEPF0000009E:EE_|SN7PR12MB6714:EE_ X-MS-Office365-Filtering-Correlation-Id: 4196af44-a667-487b-da46-08dd2ee1c407 X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0;ARA:13230040|82310400026|36860700013|1800799024|376014; X-Microsoft-Antispam-Message-Info: 
719SSzXQkpyJlCic/MzmaACFLlDJm+DoThlXDeji+aMzQwu3ewF51Z5/T/9l2rLPuV5mtcEhgUdT+1WFFh4pW0GkmdS+KcMd94bHYMVlXFti7YtOK7eK1STacQZRIYnMK+NCYR/+6ClZCGcHFqwu59XGzF3uq8sVLScEZVnDCPzlPBDm+/Nu0u1JxoEyxOy4e6QbPNdOCgcsZ7b9fjeXIhSSSlGfrwp74y9GJhwGx1mMvOZiRxH1A4Acl4rxFByTZbawY0dQoIRMYkMv52yDvo1uFr61G3XiNpWcK61qUJDN1oS6+GmC0g+IRIjeizRVKRXskzBXNet/XjRzLqVpkECMqVYkwWAU7hZlT8raaso7LhZ79dZxnOrNaw+hPbCDkwwXQ0DX6AAiJX2Lqwy2RlfRc0uOxpw0LjnW9TpBbwGALbYnIklEiWFP4i4oXkrCnjGZqRpGAl/FWmbx5SBvVDd2ZSbwI8SbLXRN6p+1PGrHWkCebOvXbdJ27vmVRdRH5doBUgqbMahySUMb0PYfMqdnLve4y1h/uFVQG7a9bU5xmlS7Uo7aVWCp80FUXz1bpwIdy2h/fZCjQ/d1mX7rU/R1ApkkgXkIbYCaagPpQibw3LeK8a4ffXaXpSniXGY2IeuuRgWTOAQ7u9SJLghnsDJiat5z72EGu94yllyjy0IMJiNGnFY7Qlex907raVoh96GHkdVwnqLwcR1btCIZ4WiXSG5xOVH25N/qRfVtL0DhHtXfzLsG4W+o1qlhX2nZ7o2Y4FNC37av8ryLDeO8YYT2WvKthFYxUxpD9RJAD/pzDjGQCpE7P+pPjNQ3S0OmUeGa9q7+m+mN0tfFavb3fmw7iwYqyl99/FO5bKeV9xOyegTQq1b3BNdXmsk7gxVO+jiNxxl8q+9jVtqF7t+9AqPpnYE3NICkzRWp6di2Qhqy0RNfLNat7Knd2cI5AbRiA1wCNaFRp+lknL8bJUTYTOqpVvccVejzSt6hRMZwZN9sM9yh0HomFtSYmQ337k04n2iNnF1znjmx1SUGSo2U4maiqkUAc+L2D+8O3MtH8OwcP/TVsFH++OizpNEh1asNyN9yDksF3vLl7FghZkse4pmXNH9LeiKRyo0gTtaAbv8wi3d1L0+fRbygtRJ1JQbB3PW9Esm+UbLOu4ksQX+OsyyOz/9QxdhkPhlooquOvy/8cPX23n5ePylalwAJLRGDm54d2tYxiTImrfHhJ3vQEbQWpGwfqZ0eld4Y6rAFA7TbrclRA9uk1wYIlElsP8FgRjLH20ZqKGhkfNRTT61tnQXYWkxLgjmf9ZO6D2dBMfjrnLnktMYgt6x2yuWsXPuMvWDST5drsTVxT7y3n0rNLToDtEwD6OsqFqH5Iu1disY1qetJkHvNeKbzG6F+90dArxSDVCRr2oYN3nJ/Nx5i3Wiq3KVTVa4S2VQNI/yL29w3g58hXwljt20m7a1/hlahY8omGBfnbrRhvQ4cxLbMb3eD9cEl0lL1iqXdtNuQ2IU= X-Forefront-Antispam-Report: CIP:216.228.117.161;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:mail.nvidia.com;PTR:dc6edge2.nvidia.com;CAT:NONE;SFS:(13230040)(82310400026)(36860700013)(1800799024)(376014);DIR:OUT;SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jan 2025 06:08:54.6284 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 4196af44-a667-487b-da46-08dd2ee1c407 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a 
From: Moshe Shemesh

Add modify header alloc and dealloc API functions to provide modify
header actions for steering rules. Use fs hws pools to get actions from
shared bulks of modify header actions.

Signed-off-by: Moshe Shemesh
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/fs_core.h |   1 +
 .../mellanox/mlx5/core/steering/hws/fs_hws.c  | 117 +++++++++++++
 .../mellanox/mlx5/core/steering/hws/fs_hws.h  |   2 +
 .../mlx5/core/steering/hws/fs_hws_pools.c     | 164 ++++++++++++++++++
 .../mlx5/core/steering/hws/fs_hws_pools.h     |  22 +++
 5 files changed, 306 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
index 9b0575a61362..06ec48f51b6d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
@@ -65,6 +65,7 @@ struct mlx5_modify_hdr {
 	enum mlx5_flow_resource_owner owner;
 	union {
 		struct mlx5_fs_dr_action fs_dr_action;
+		struct mlx5_fs_hws_action fs_hws_action;
 		u32 id;
 	};
 };
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
index 723865140b2e..a75e5ce168c7 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
@@ -14,6 +14,8 @@ static struct mlx5hws_action *
 create_action_remove_header_vlan(struct mlx5hws_context *ctx);
 static void destroy_pr_pool(struct mlx5_fs_pool *pool, struct xarray *pr_pools,
 			    unsigned long index);
+static void destroy_mh_pool(struct mlx5_fs_pool *pool, struct xarray *mh_pools,
+			    unsigned long index);
 
 static int init_hws_actions_pool(struct mlx5_core_dev *dev,
 				 struct mlx5_fs_hws_context *fs_ctx)
@@ -56,6 +58,7 @@ static int init_hws_actions_pool(struct mlx5_core_dev *dev,
 		goto cleanup_insert_hdr;
 	xa_init(&hws_pool->el2tol3tnl_pools);
 	xa_init(&hws_pool->el2tol2tnl_pools);
+	xa_init(&hws_pool->mh_pools);
 	return 0;
 
 cleanup_insert_hdr:
@@ -81,6 +84,9 @@ static void cleanup_hws_actions_pool(struct mlx5_fs_hws_context *fs_ctx)
 	struct mlx5_fs_pool *pool;
 	unsigned long i;
 
+	xa_for_each(&hws_pool->mh_pools, i, pool)
+		destroy_mh_pool(pool, &hws_pool->mh_pools, i);
+	xa_destroy(&hws_pool->mh_pools);
 	xa_for_each(&hws_pool->el2tol2tnl_pools, i, pool)
 		destroy_pr_pool(pool, &hws_pool->el2tol2tnl_pools, i);
 	xa_destroy(&hws_pool->el2tol2tnl_pools);
@@ -528,6 +534,115 @@ static void mlx5_cmd_hws_packet_reformat_dealloc(struct mlx5_flow_root_namespace
 	pkt_reformat->fs_hws_action.pr_data = NULL;
 }
 
+static struct mlx5_fs_pool *
+create_mh_pool(struct mlx5_core_dev *dev,
+	       struct mlx5hws_action_mh_pattern *pattern,
+	       struct xarray *mh_pools, unsigned long index)
+{
+	struct mlx5_fs_pool *pool;
+	int err;
+
+	pool = kzalloc(sizeof(*pool), GFP_KERNEL);
+	if (!pool)
+		return ERR_PTR(-ENOMEM);
+	err = mlx5_fs_hws_mh_pool_init(pool, dev, pattern);
+	if (err)
+		goto free_pool;
+	err = xa_insert(mh_pools, index, pool, GFP_KERNEL);
+	if (err)
+		goto cleanup_pool;
+	return pool;
+
+cleanup_pool:
+	mlx5_fs_hws_mh_pool_cleanup(pool);
+free_pool:
+	kfree(pool);
+	return ERR_PTR(err);
+}
+
+static void destroy_mh_pool(struct mlx5_fs_pool *pool, struct xarray *mh_pools,
+			    unsigned long index)
+{
+	xa_erase(mh_pools, index);
+	mlx5_fs_hws_mh_pool_cleanup(pool);
+	kfree(pool);
+}
+
+static int mlx5_cmd_hws_modify_header_alloc(struct mlx5_flow_root_namespace *ns,
+					    u8 namespace, u8 num_actions,
+					    void *modify_actions,
+					    struct mlx5_modify_hdr *modify_hdr)
+{
+	struct mlx5_fs_hws_actions_pool *hws_pool = &ns->fs_hws_context.hws_pool;
+	struct mlx5hws_action_mh_pattern pattern = {};
+	struct mlx5_fs_hws_mh *mh_data = NULL;
+	struct mlx5hws_action *hws_action;
+	struct mlx5_fs_pool *pool;
+	unsigned long i, cnt = 0;
+	bool known_pattern;
+	int err;
+
+	pattern.sz = MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto) * num_actions;
+	pattern.data = modify_actions;
+
+	known_pattern = false;
+	xa_for_each(&hws_pool->mh_pools, i, pool) {
+		if (mlx5_fs_hws_mh_pool_match(pool, &pattern)) {
+			known_pattern = true;
+			break;
+		}
+		cnt++;
+	}
+
+	if (!known_pattern) {
+		pool = create_mh_pool(ns->dev, &pattern, &hws_pool->mh_pools, cnt);
+		if (IS_ERR(pool))
+			return PTR_ERR(pool);
+	}
+	mh_data = mlx5_fs_hws_mh_pool_acquire_mh(pool);
+	if (IS_ERR(mh_data)) {
+		err = PTR_ERR(mh_data);
+		goto destroy_pool;
+	}
+	hws_action = mh_data->bulk->hws_action;
+	mh_data->data = kmemdup(pattern.data, pattern.sz, GFP_KERNEL);
+	if (!mh_data->data) {
+		err = -ENOMEM;
+		goto release_mh;
+	}
+	modify_hdr->fs_hws_action.mh_data = mh_data;
+	modify_hdr->fs_hws_action.fs_pool = pool;
+	modify_hdr->owner = MLX5_FLOW_RESOURCE_OWNER_SW;
+	modify_hdr->fs_hws_action.hws_action = hws_action;
+
+	return 0;
+
+release_mh:
+	mlx5_fs_hws_mh_pool_release_mh(pool, mh_data);
+destroy_pool:
+	if (!known_pattern)
+		destroy_mh_pool(pool, &hws_pool->mh_pools, cnt);
+	return err;
+}
+
+static void mlx5_cmd_hws_modify_header_dealloc(struct mlx5_flow_root_namespace *ns,
+					       struct mlx5_modify_hdr *modify_hdr)
+{
+	struct mlx5_fs_hws_mh *mh_data;
+	struct mlx5_fs_pool *pool;
+
+	if (!modify_hdr->fs_hws_action.fs_pool || !modify_hdr->fs_hws_action.mh_data) {
+		mlx5_core_err(ns->dev, "Failed release modify-header\n");
+		return;
+	}
+
+	mh_data = modify_hdr->fs_hws_action.mh_data;
+	kfree(mh_data->data);
+	pool = modify_hdr->fs_hws_action.fs_pool;
+	mlx5_fs_hws_mh_pool_release_mh(pool, mh_data);
+	modify_hdr->fs_hws_action.mh_data = NULL;
+}
+
 static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = {
 	.create_flow_table = mlx5_cmd_hws_create_flow_table,
 	.destroy_flow_table = mlx5_cmd_hws_destroy_flow_table,
@@ -537,6 +652,8 @@ static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = {
 	.destroy_flow_group = mlx5_cmd_hws_destroy_flow_group,
 	.packet_reformat_alloc = mlx5_cmd_hws_packet_reformat_alloc,
 	.packet_reformat_dealloc = mlx5_cmd_hws_packet_reformat_dealloc,
+	.modify_header_alloc = mlx5_cmd_hws_modify_header_alloc,
+	.modify_header_dealloc = mlx5_cmd_hws_modify_header_dealloc,
 	.create_ns = mlx5_cmd_hws_create_ns,
 	.destroy_ns = mlx5_cmd_hws_destroy_ns,
 	.set_peer = mlx5_cmd_hws_set_peer,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
index 2292eb08ef24..db2d53fbf9d0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
@@ -18,6 +18,7 @@ struct mlx5_fs_hws_actions_pool {
 	struct mlx5_fs_pool dl3tnltol2_pool;
 	struct xarray el2tol3tnl_pools;
 	struct xarray el2tol2tnl_pools;
+	struct xarray mh_pools;
 };
 
 struct mlx5_fs_hws_context {
@@ -34,6 +35,7 @@ struct mlx5_fs_hws_action {
 	struct mlx5hws_action *hws_action;
 	struct mlx5_fs_pool *fs_pool;
 	struct mlx5_fs_hws_pr *pr_data;
+	struct mlx5_fs_hws_mh *mh_data;
 };
 
 struct mlx5_fs_hws_matcher {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c
index 14f732f3f09c..60dc0aaccbba 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c
@@ -236,3 +236,167 @@ struct mlx5hws_action *mlx5_fs_hws_pr_get_action(struct mlx5_fs_hws_pr *pr_data)
 {
 	return pr_data->bulk->hws_action;
 }
+
+static struct mlx5hws_action *
+mh_bulk_action_create(struct mlx5hws_context *ctx,
+		      struct mlx5hws_action_mh_pattern *pattern)
+{
+	u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB;
+	u32 log_bulk_size;
+
+	log_bulk_size = ilog2(MLX5_FS_HWS_DEFAULT_BULK_LEN);
+	return mlx5hws_action_create_modify_header(ctx, 1, pattern,
+						   log_bulk_size, flags);
+}
+
+static struct mlx5_fs_bulk *
+mlx5_fs_hws_mh_bulk_create(struct mlx5_core_dev *dev, void *pool_ctx)
+{
+	struct mlx5hws_action_mh_pattern *pattern;
+	struct mlx5_flow_root_namespace *root_ns;
+	struct mlx5_fs_hws_mh_bulk *mh_bulk;
+	struct mlx5hws_context *ctx;
+	int bulk_len;
+	int i;
+
+	root_ns = mlx5_get_root_namespace(dev, MLX5_FLOW_NAMESPACE_FDB);
+	if (!root_ns || root_ns->mode != MLX5_FLOW_STEERING_MODE_HMFS)
+		return NULL;
+
+	ctx = root_ns->fs_hws_context.hws_ctx;
+	if (!ctx)
+		return NULL;
+
+	if (!pool_ctx)
+		return NULL;
+	pattern = pool_ctx;
+	bulk_len = MLX5_FS_HWS_DEFAULT_BULK_LEN;
+	mh_bulk = kvzalloc(struct_size(mh_bulk, mhs_data, bulk_len), GFP_KERNEL);
+	if (!mh_bulk)
+		return NULL;
+
+	if (mlx5_fs_bulk_init(dev, &mh_bulk->fs_bulk, bulk_len))
+		goto free_mh_bulk;
+
+	for (i = 0; i < bulk_len; i++) {
+		mh_bulk->mhs_data[i].bulk = mh_bulk;
+		mh_bulk->mhs_data[i].offset = i;
+	}
+
+	mh_bulk->hws_action = mh_bulk_action_create(ctx, pattern);
+	if (!mh_bulk->hws_action)
+		goto cleanup_fs_bulk;
+
+	return &mh_bulk->fs_bulk;
+
+cleanup_fs_bulk:
+	mlx5_fs_bulk_cleanup(&mh_bulk->fs_bulk);
+free_mh_bulk:
+	kvfree(mh_bulk);
+	return NULL;
+}
+
+static int
+mlx5_fs_hws_mh_bulk_destroy(struct mlx5_core_dev *dev,
+			    struct mlx5_fs_bulk *fs_bulk)
+{
+	struct mlx5_fs_hws_mh_bulk *mh_bulk;
+
+	mh_bulk = container_of(fs_bulk, struct mlx5_fs_hws_mh_bulk, fs_bulk);
+	if (mlx5_fs_bulk_get_free_amount(fs_bulk) < fs_bulk->bulk_len) {
+		mlx5_core_err(dev, "Freeing bulk before all modify header were released\n");
+		return -EBUSY;
+	}
+
+	mlx5hws_action_destroy(mh_bulk->hws_action);
+	mlx5_fs_bulk_cleanup(fs_bulk);
+	kvfree(mh_bulk);
+
+	return 0;
+}
+
+static const struct mlx5_fs_pool_ops mlx5_fs_hws_mh_pool_ops = {
+	.bulk_create = mlx5_fs_hws_mh_bulk_create,
+	.bulk_destroy = mlx5_fs_hws_mh_bulk_destroy,
+	.update_threshold = mlx5_hws_pool_update_threshold,
+};
+
+int mlx5_fs_hws_mh_pool_init(struct mlx5_fs_pool *fs_hws_mh_pool,
+			     struct mlx5_core_dev *dev,
+			     struct mlx5hws_action_mh_pattern *pattern)
+{
+	struct mlx5hws_action_mh_pattern *pool_pattern;
+
+	pool_pattern = kzalloc(sizeof(*pool_pattern), GFP_KERNEL);
+	if (!pool_pattern)
+		return -ENOMEM;
+	pool_pattern->data = kmemdup(pattern->data, pattern->sz, GFP_KERNEL);
+	if (!pool_pattern->data) {
+		kfree(pool_pattern);
+		return -ENOMEM;
+	}
+	pool_pattern->sz = pattern->sz;
+	mlx5_fs_pool_init(fs_hws_mh_pool, dev, &mlx5_fs_hws_mh_pool_ops,
+			  pool_pattern);
+	return 0;
+}
+
+void mlx5_fs_hws_mh_pool_cleanup(struct mlx5_fs_pool *fs_hws_mh_pool)
+{
+	struct mlx5hws_action_mh_pattern *pool_pattern;
+
+	mlx5_fs_pool_cleanup(fs_hws_mh_pool);
+	pool_pattern = fs_hws_mh_pool->pool_ctx;
+	if (!pool_pattern)
+		return;
+	kfree(pool_pattern->data);
+	kfree(pool_pattern);
+}
+
+struct mlx5_fs_hws_mh *
+mlx5_fs_hws_mh_pool_acquire_mh(struct mlx5_fs_pool *mh_pool)
+{
+	struct mlx5_fs_pool_index pool_index = {};
+	struct mlx5_fs_hws_mh_bulk *mh_bulk;
+	int err;
+
+	err = mlx5_fs_pool_acquire_index(mh_pool, &pool_index);
+	if (err)
+		return ERR_PTR(err);
+	mh_bulk = container_of(pool_index.fs_bulk, struct mlx5_fs_hws_mh_bulk,
+			       fs_bulk);
+	return &mh_bulk->mhs_data[pool_index.index];
+}
+
+void mlx5_fs_hws_mh_pool_release_mh(struct mlx5_fs_pool *mh_pool,
+				    struct mlx5_fs_hws_mh *mh_data)
+{
+	struct mlx5_fs_bulk *fs_bulk = &mh_data->bulk->fs_bulk;
+	struct mlx5_fs_pool_index pool_index = {};
+	struct mlx5_core_dev *dev = mh_pool->dev;
+
+	pool_index.fs_bulk = fs_bulk;
+	pool_index.index = mh_data->offset;
+	if (mlx5_fs_pool_release_index(mh_pool, &pool_index))
+		mlx5_core_warn(dev, "Attempted to release modify header which is not acquired\n");
+}
+
+bool mlx5_fs_hws_mh_pool_match(struct mlx5_fs_pool *mh_pool,
+			       struct mlx5hws_action_mh_pattern *pattern)
+{
+	struct mlx5hws_action_mh_pattern *pool_pattern;
+	int num_actions, i;
+
+	pool_pattern = mh_pool->pool_ctx;
+	if (WARN_ON_ONCE(!pool_pattern))
+		return false;
+
+	if (pattern->sz != pool_pattern->sz)
+		return false;
+	num_actions = pattern->sz / MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto);
+	for (i = 0; i < num_actions; i++)
+		if ((__force __be32)pattern->data[i] !=
+		    (__force __be32)pool_pattern->data[i])
+			return false;
+	return true;
+}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h
index 93ec5b3b76fe..eda17031aef0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h
@@ -36,6 +36,19 @@ struct mlx5_fs_hws_pr_pool_ctx {
 	size_t encap_data_size;
 };
 
+struct mlx5_fs_hws_mh {
+	struct mlx5_fs_hws_mh_bulk *bulk;
+	u32 offset;
+	u8 *data;
+};
+
+struct mlx5_fs_hws_mh_bulk {
+	struct mlx5_fs_bulk fs_bulk;
+	struct mlx5_fs_pool *mh_pool;
+	struct mlx5hws_action *hws_action;
+	struct mlx5_fs_hws_mh mhs_data[];
+};
+
 int mlx5_fs_hws_pr_pool_init(struct mlx5_fs_pool *pr_pool,
 			     struct mlx5_core_dev *dev, size_t encap_data_size,
 			     enum mlx5hws_action_type reformat_type);
@@ -45,4 +58,13 @@ struct mlx5_fs_hws_pr *mlx5_fs_hws_pr_pool_acquire_pr(struct mlx5_fs_pool *pr_pool);
 void mlx5_fs_hws_pr_pool_release_pr(struct mlx5_fs_pool *pr_pool,
 				    struct mlx5_fs_hws_pr *pr_data);
 struct mlx5hws_action *mlx5_fs_hws_pr_get_action(struct mlx5_fs_hws_pr *pr_data);
+int mlx5_fs_hws_mh_pool_init(struct mlx5_fs_pool *fs_hws_mh_pool,
+			     struct mlx5_core_dev *dev,
+			     struct mlx5hws_action_mh_pattern *pattern);
+void mlx5_fs_hws_mh_pool_cleanup(struct mlx5_fs_pool *fs_hws_mh_pool);
+struct mlx5_fs_hws_mh *mlx5_fs_hws_mh_pool_acquire_mh(struct mlx5_fs_pool *mh_pool);
+void mlx5_fs_hws_mh_pool_release_mh(struct mlx5_fs_pool *mh_pool,
+				    struct mlx5_fs_hws_mh *mh_data);
+bool mlx5_fs_hws_mh_pool_match(struct mlx5_fs_pool *mh_pool,
+			       struct mlx5hws_action_mh_pattern *pattern);
 #endif /* __MLX5_FS_HWS_POOLS_H__ */

From patchwork Tue Jan 7 06:07:02 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13928267
X-Patchwork-Delegate: kuba@kernel.org
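The modify-header allocation path above scans the existing pools for one whose stored pattern matches the requested one, and lazily creates a new pool on a miss. A minimal user-space C sketch of that find-or-create pattern follows; all names here (`pattern_pool`, `pool_find_or_create`, a fixed-size array instead of an xarray) are hypothetical and are not the mlx5 API:

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* A pool keyed by a byte pattern; stands in for mlx5_fs_pool + its
 * mlx5hws_action_mh_pattern pool_ctx. */
struct pattern_pool {
	size_t sz;           /* pattern size in bytes, part of the key */
	unsigned char *data; /* owned copy of the pattern bytes */
};

#define MAX_POOLS 8
static struct pattern_pool *pools[MAX_POOLS]; /* stands in for the xarray */

/* Equivalent of mlx5_fs_hws_mh_pool_match(): same size, same bytes. */
static bool pool_match(const struct pattern_pool *pool,
		       const void *data, size_t sz)
{
	return pool->sz == sz && memcmp(pool->data, data, sz) == 0;
}

/* Return the pool for @data, creating it on first use; NULL on failure. */
struct pattern_pool *pool_find_or_create(const void *data, size_t sz)
{
	struct pattern_pool *pool;
	int i;

	/* First pass: reuse an existing pool with a matching pattern. */
	for (i = 0; i < MAX_POOLS; i++)
		if (pools[i] && pool_match(pools[i], data, sz))
			return pools[i];

	/* Miss: find a free slot and create the pool lazily. */
	for (i = 0; i < MAX_POOLS; i++)
		if (!pools[i])
			break;
	if (i == MAX_POOLS)
		return NULL;

	pool = calloc(1, sizeof(*pool));
	if (!pool)
		return NULL;
	pool->data = malloc(sz);
	if (!pool->data) {
		free(pool);
		return NULL;
	}
	memcpy(pool->data, data, sz); /* keep an owned copy, like kmemdup() */
	pool->sz = sz;
	pools[i] = pool;
	return pool;
}
```

Two requests with byte-identical patterns therefore share one pool (and, in the real driver, one bulk of hardware modify-header actions), while a new pattern transparently gets its own pool.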
From: Tariq Toukan
To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn"
CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , Mark Bloch , Moshe Shemesh , Yevgeny Kliteynik , Tariq Toukan
Subject: [PATCH net-next 07/13] net/mlx5: fs, manage flow counters HWS action sharing by refcount
Date: Tue, 7 Jan 2025 08:07:02 +0200
Message-ID: <20250107060708.1610882-8-tariqt@nvidia.com>
In-Reply-To: <20250107060708.1610882-1-tariqt@nvidia.com>
References: <20250107060708.1610882-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
From: Moshe Shemesh

Multiple flow counters can utilize a single Hardware Steering (HWS)
action for Hardware Steering rules. Given that these counter bulks are
not exclusively created for Hardware Steering, but also serve purposes
such as statistics gathering and other steering modes, it's more
efficient to create the HWS action only when it's first needed by a
Hardware Steering rule. This approach allows for better resource
management through the use of a reference count, rather than
automatically creating an HWS action for every bulk of flow counters.

Signed-off-by: Moshe Shemesh
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/fs_core.h | 36 ++++++++++++++
 .../ethernet/mellanox/mlx5/core/fs_counters.c | 37 ++++-----------
 .../mlx5/core/steering/hws/fs_hws_pools.c     | 47 +++++++++++++++++++
 .../mlx5/core/steering/hws/fs_hws_pools.h     |  3 ++
 4 files changed, 94 insertions(+), 29 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
index 06ec48f51b6d..b6543a53d7c3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
@@ -316,6 +316,42 @@ struct mlx5_flow_root_namespace {
 	const struct mlx5_flow_cmds *cmds;
 };
 
+enum mlx5_fc_type {
+	MLX5_FC_TYPE_ACQUIRED = 0,
+	MLX5_FC_TYPE_LOCAL,
+};
+
+struct mlx5_fc_cache {
+	u64 packets;
+	u64 bytes;
+	u64 lastuse;
+};
+
+struct mlx5_fc {
+	u32 id;
+	bool aging;
+	enum mlx5_fc_type type;
+	struct mlx5_fc_bulk *bulk;
+	struct mlx5_fc_cache cache;
+	/* last{packets,bytes} are used for calculating deltas since last reading. */
+	u64 lastpackets;
+	u64 lastbytes;
+};
+
+struct mlx5_fc_bulk_hws_data {
+	struct mlx5hws_action *hws_action;
+	struct mutex lock; /* protects hws_action */
+	refcount_t hws_action_refcount;
+};
+
+struct mlx5_fc_bulk {
+	struct mlx5_fs_bulk fs_bulk;
+	u32 base_id;
+	struct mlx5_fc_bulk_hws_data hws_data;
+	struct mlx5_fc fcs[];
+};
+
+u32 mlx5_fc_get_base_id(struct mlx5_fc *counter);
 int mlx5_init_fc_stats(struct mlx5_core_dev *dev);
 void mlx5_cleanup_fc_stats(struct mlx5_core_dev *dev);
 void mlx5_fc_queue_stats_work(struct mlx5_core_dev *dev,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
index 94d9caacd50f..492775d3d193 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
@@ -44,28 +44,6 @@
 #define MLX5_FC_POOL_MAX_THRESHOLD BIT(18)
 #define MLX5_FC_POOL_USED_BUFF_RATIO 10
 
-enum mlx5_fc_type {
-	MLX5_FC_TYPE_ACQUIRED = 0,
-	MLX5_FC_TYPE_LOCAL,
-};
-
-struct mlx5_fc_cache {
-	u64 packets;
-	u64 bytes;
-	u64 lastuse;
-};
-
-struct mlx5_fc {
-	u32 id;
-	bool aging;
-	enum mlx5_fc_type type;
-	struct mlx5_fc_bulk *bulk;
-	struct mlx5_fc_cache cache;
-	/* last{packets,bytes} are used for calculating deltas since last reading. */
-	u64 lastpackets;
-	u64 lastbytes;
-};
-
 struct mlx5_fc_stats {
 	struct xarray counters;
 
@@ -434,13 +412,7 @@ void mlx5_fc_update_sampling_interval(struct mlx5_core_dev *dev,
 					      fc_stats->sampling_interval);
 }
 
-/* Flow counter bluks */
-
-struct mlx5_fc_bulk {
-	struct mlx5_fs_bulk fs_bulk;
-	u32 base_id;
-	struct mlx5_fc fcs[];
-};
+/* Flow counter bulks */
 
 static void mlx5_fc_init(struct mlx5_fc *counter, struct mlx5_fc_bulk *bulk,
 			 u32 id)
@@ -449,6 +421,11 @@ static void mlx5_fc_init(struct mlx5_fc *counter, struct mlx5_fc_bulk *bulk,
 	counter->id = id;
 }
 
+u32 mlx5_fc_get_base_id(struct mlx5_fc *counter)
+{
+	return counter->bulk->base_id;
+}
+
 static struct mlx5_fs_bulk *mlx5_fc_bulk_create(struct mlx5_core_dev *dev,
 						void *pool_ctx)
 {
@@ -474,6 +451,8 @@ static struct mlx5_fs_bulk *mlx5_fc_bulk_create(struct mlx5_core_dev *dev,
 	for (i = 0; i < bulk_len; i++)
 		mlx5_fc_init(&fc_bulk->fcs[i], fc_bulk, base_id + i);
 
+	refcount_set(&fc_bulk->hws_data.hws_action_refcount, 0);
+	mutex_init(&fc_bulk->hws_data.lock);
 	return &fc_bulk->fs_bulk;
 
 fs_bulk_cleanup:
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c
index 60dc0aaccbba..692fd2d2c0ac 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c
@@ -400,3 +400,50 @@ bool mlx5_fs_hws_mh_pool_match(struct mlx5_fs_pool *mh_pool,
 			return false;
 	return true;
 }
+
+struct mlx5hws_action *mlx5_fc_get_hws_action(struct mlx5hws_context *ctx,
+					      struct mlx5_fc *counter)
+{
+	u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB | MLX5HWS_ACTION_FLAG_SHARED;
+	struct mlx5_fc_bulk *fc_bulk = counter->bulk;
+	struct mlx5_fc_bulk_hws_data *fc_bulk_hws;
+
+	fc_bulk_hws = &fc_bulk->hws_data;
+	/* try avoid locking if not necessary */
+	if (refcount_inc_not_zero(&fc_bulk_hws->hws_action_refcount))
+		return fc_bulk_hws->hws_action;
+
+	mutex_lock(&fc_bulk_hws->lock);
+	if (refcount_inc_not_zero(&fc_bulk_hws->hws_action_refcount)) {
+		mutex_unlock(&fc_bulk_hws->lock);
+		return fc_bulk_hws->hws_action;
+	}
+	fc_bulk_hws->hws_action =
+		mlx5hws_action_create_counter(ctx, fc_bulk->base_id, flags);
+	if (!fc_bulk_hws->hws_action) {
+		mutex_unlock(&fc_bulk_hws->lock);
+		return NULL;
+	}
+	refcount_set(&fc_bulk_hws->hws_action_refcount, 1);
+	mutex_unlock(&fc_bulk_hws->lock);
+
+	return fc_bulk_hws->hws_action;
+}
+
+void mlx5_fc_put_hws_action(struct mlx5_fc *counter)
+{
+	struct mlx5_fc_bulk_hws_data *fc_bulk_hws = &counter->bulk->hws_data;
+
+	/* try avoid locking if not necessary */
+	if (refcount_dec_not_one(&fc_bulk_hws->hws_action_refcount))
+		return;
+
+	mutex_lock(&fc_bulk_hws->lock);
+	if (!refcount_dec_and_test(&fc_bulk_hws->hws_action_refcount)) {
+		mutex_unlock(&fc_bulk_hws->lock);
+		return;
+	}
+	mlx5hws_action_destroy(fc_bulk_hws->hws_action);
+	fc_bulk_hws->hws_action = NULL;
+	mutex_unlock(&fc_bulk_hws->lock);
+}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h
index eda17031aef0..cde8176c981a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h
@@ -67,4 +67,7 @@ void mlx5_fs_hws_mh_pool_release_mh(struct mlx5_fs_pool *mh_pool,
 				    struct mlx5_fs_hws_mh *mh_data);
 bool mlx5_fs_hws_mh_pool_match(struct mlx5_fs_pool *mh_pool,
 			       struct mlx5hws_action_mh_pattern *pattern);
+struct mlx5hws_action *mlx5_fc_get_hws_action(struct mlx5hws_context *ctx,
+					      struct mlx5_fc *counter);
+void mlx5_fc_put_hws_action(struct mlx5_fc *counter);
 #endif /* __MLX5_FS_HWS_POOLS_H__ */

From patchwork Tue Jan 7 06:07:03 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13928268
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, "Andrew Lunn"
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Mark Bloch, Moshe Shemesh, Yevgeny Kliteynik, Tariq Toukan
Subject: [PATCH net-next 08/13] net/mlx5: fs, add dest table cache
Date: Tue, 7 Jan 2025 08:07:03 +0200
Message-ID: <20250107060708.1610882-9-tariqt@nvidia.com>
X-Mailer: git-send-email 2.45.0
In-Reply-To: <20250107060708.1610882-1-tariqt@nvidia.com>
References: <20250107060708.1610882-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
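[Editor's note] The patch below caches one destination action per flow table in an xarray keyed by table id: xa_insert() at table creation, xa_load() on lookup, xa_erase() at destroy. A minimal userspace sketch of that id-keyed cache, with a plain sparse array standing in for the kernel xarray (names and the MAX_TABLES bound are illustrative):

```c
#include <errno.h>
#include <stddef.h>

#define MAX_TABLES 64

/* Stand-in for the kernel xarray: sparse flow-table id -> action map. */
static void *table_dests[MAX_TABLES];

/* Mirrors xa_insert(): refuses to overwrite an existing entry. */
int dest_cache_insert(unsigned int ft_id, void *action)
{
	if (ft_id >= MAX_TABLES)
		return -EINVAL;
	if (table_dests[ft_id])
		return -EBUSY;
	table_dests[ft_id] = action;
	return 0;
}

/* Mirrors xa_load(): NULL when nothing is cached for this table id. */
void *dest_cache_lookup(unsigned int ft_id)
{
	return ft_id < MAX_TABLES ? table_dests[ft_id] : NULL;
}

/* Mirrors xa_erase(): returns the old entry so the caller can destroy it. */
void *dest_cache_erase(unsigned int ft_id)
{
	void *old = dest_cache_lookup(ft_id);

	if (ft_id < MAX_TABLES)
		table_dests[ft_id] = NULL;
	return old;
}
```

Returning the erased entry to the caller matches how del_flow_table_dest_action() destroys whatever xa_erase() hands back.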
From: Moshe Shemesh

Add a cache of destination flow table HWS actions, one per HWS table. For
each flow table created, cache a destination action towards this table.
The cached action will be used in a downstream patch whenever a rule
requires such an action.

Signed-off-by: Moshe Shemesh
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
---
 .../mellanox/mlx5/core/steering/hws/fs_hws.c | 68 ++++++++++++++++++-
 .../mellanox/mlx5/core/steering/hws/fs_hws.h |  1 +
 2 files changed, 66 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
index a75e5ce168c7..6ee902999a01 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
@@ -59,6 +59,7 @@ static int init_hws_actions_pool(struct mlx5_core_dev *dev,
 	xa_init(&hws_pool->el2tol3tnl_pools);
 	xa_init(&hws_pool->el2tol2tnl_pools);
 	xa_init(&hws_pool->mh_pools);
+	xa_init(&hws_pool->table_dests);
 	return 0;
 
 cleanup_insert_hdr:
@@ -84,6 +85,7 @@ static void cleanup_hws_actions_pool(struct mlx5_fs_hws_context *fs_ctx)
 	struct mlx5_fs_pool *pool;
 	unsigned long i;
 
+	xa_destroy(&hws_pool->table_dests);
 	xa_for_each(&hws_pool->mh_pools, i, pool)
 		destroy_mh_pool(pool, &hws_pool->mh_pools, i);
 	xa_destroy(&hws_pool->mh_pools);
@@ -170,6 +172,50 @@ static int set_ft_default_miss(struct mlx5_flow_root_namespace *ns,
 	return 0;
 }
 
+static int add_flow_table_dest_action(struct mlx5_flow_root_namespace *ns,
+				      struct mlx5_flow_table *ft)
+{
+	u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB | MLX5HWS_ACTION_FLAG_SHARED;
+	struct mlx5_fs_hws_context *fs_ctx = &ns->fs_hws_context;
+	struct mlx5hws_action *dest_ft_action;
+	struct xarray *dests_xa;
+	int err;
+
+	dest_ft_action = mlx5hws_action_create_dest_table_num(fs_ctx->hws_ctx,
+							      ft->id, flags);
+	if (!dest_ft_action) {
+		mlx5_core_err(ns->dev, "Failed creating dest table action\n");
+		return -ENOMEM;
+	}
+
+	dests_xa = &fs_ctx->hws_pool.table_dests;
+	err = xa_insert(dests_xa, ft->id, dest_ft_action, GFP_KERNEL);
+	if (err)
+		mlx5hws_action_destroy(dest_ft_action);
+	return err;
+}
+
+static int del_flow_table_dest_action(struct mlx5_flow_root_namespace *ns,
+				      struct mlx5_flow_table *ft)
+{
+	struct mlx5_fs_hws_context *fs_ctx = &ns->fs_hws_context;
+	struct mlx5hws_action *dest_ft_action;
+	struct xarray *dests_xa;
+	int err;
+
+	dests_xa = &fs_ctx->hws_pool.table_dests;
+	dest_ft_action = xa_erase(dests_xa, ft->id);
+	if (!dest_ft_action) {
+		mlx5_core_err(ns->dev, "Failed to erase dest ft action\n");
+		return -ENOENT;
+	}
+
+	err = mlx5hws_action_destroy(dest_ft_action);
+	if (err)
+		mlx5_core_err(ns->dev, "Failed to destroy dest ft action\n");
+	return err;
+}
+
 static int mlx5_cmd_hws_create_flow_table(struct mlx5_flow_root_namespace *ns,
 					  struct mlx5_flow_table *ft,
 					  struct mlx5_flow_table_attr *ft_attr,
@@ -180,9 +226,16 @@ static int mlx5_cmd_hws_create_flow_table(struct mlx5_flow_root_namespace *ns,
 	struct mlx5hws_table *tbl;
 	int err;
 
-	if (mlx5_fs_cmd_is_fw_term_table(ft))
-		return mlx5_fs_cmd_get_fw_cmds()->create_flow_table(ns, ft, ft_attr,
-								    next_ft);
+	if (mlx5_fs_cmd_is_fw_term_table(ft)) {
+		err = mlx5_fs_cmd_get_fw_cmds()->create_flow_table(ns, ft, ft_attr,
+								   next_ft);
+		if (err)
+			return err;
+		err = add_flow_table_dest_action(ns, ft);
+		if (err)
+			mlx5_fs_cmd_get_fw_cmds()->destroy_flow_table(ns, ft);
+		return err;
+	}
 
 	if (ns->table_type != FS_FT_FDB) {
 		mlx5_core_err(ns->dev, "Table type %d not supported for HWS\n",
@@ -209,8 +262,13 @@ static int mlx5_cmd_hws_create_flow_table(struct mlx5_flow_root_namespace *ns,
 
 	ft->max_fte = INT_MAX;
 
+	err = add_flow_table_dest_action(ns, ft);
+	if (err)
+		goto clear_ft_miss;
 	return 0;
 
+clear_ft_miss:
+	set_ft_default_miss(ns, ft, NULL);
destroy_table:
 	mlx5hws_table_destroy(tbl);
 	ft->fs_hws_table.hws_table = NULL;
@@ -222,6 +280,10 @@ static int mlx5_cmd_hws_destroy_flow_table(struct mlx5_flow_root_namespace *ns,
 {
 	int err;
 
+	err = del_flow_table_dest_action(ns, ft);
+	if (err)
+		mlx5_core_err(ns->dev, "Failed to remove dest action (%d)\n", err);
+
 	if (mlx5_fs_cmd_is_fw_term_table(ft))
 		return mlx5_fs_cmd_get_fw_cmds()->destroy_flow_table(ns, ft);
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
index db2d53fbf9d0..c9807abd6c25 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
@@ -19,6 +19,7 @@ struct mlx5_fs_hws_actions_pool {
 	struct xarray el2tol3tnl_pools;
 	struct xarray el2tol2tnl_pools;
 	struct xarray mh_pools;
+	struct xarray table_dests;
 };
 
 struct mlx5_fs_hws_context {

From patchwork Tue Jan 7 06:07:04 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13928271
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, "Andrew Lunn"
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Mark Bloch, Moshe Shemesh, Yevgeny Kliteynik, Tariq Toukan
Subject: [PATCH net-next 09/13] net/mlx5: fs, add HWS fte API functions
Date: Tue, 7 Jan 2025 08:07:04 +0200
Message-ID: <20250107060708.1610882-10-tariqt@nvidia.com>
X-Mailer: git-send-email 2.45.0
In-Reply-To: <20250107060708.1610882-1-tariqt@nvidia.com>
References: <20250107060708.1610882-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
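[Editor's note] The patch below programs a rule's counter action with a bulk-relative offset, computed as `mlx5_fc_id(counter) - mlx5_fc_get_base_id(counter)`. A toy illustration of that base-id arithmetic (the struct names here are hypothetical stand-ins, not the driver's types):

```c
/* A bulk owns a contiguous range of counter ids starting at base_id;
 * each counter records its absolute id plus its owning bulk. */
struct fc_bulk { unsigned int base_id; };
struct fc { struct fc_bulk *bulk; unsigned int id; };

/* Offset of the counter inside its bulk: absolute id minus the bulk's
 * base id. This is what the HWS counter action's offset field carries. */
static unsigned int fc_bulk_offset(const struct fc *c)
{
	return c->id - c->bulk->base_id;
}
```

Because a bulk's ids are contiguous, this offset selects the right counter slot within the single shared per-bulk HWS action.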
From: Moshe Shemesh

Add create, destroy and update fte API functions for adding, removing and
updating flow steering rules in HW Steering mode. Get HWS actions according
to the required rule, using actions from the pool whenever possible.

Signed-off-by: Moshe Shemesh
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/fs_core.h |   5 +-
 .../mellanox/mlx5/core/steering/hws/fs_hws.c  | 543 ++++++++++++++++++
 .../mellanox/mlx5/core/steering/hws/fs_hws.h  |  13 +
 3 files changed, 560 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
index b6543a53d7c3..db0458b46390 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
@@ -254,7 +254,10 @@ struct fs_fte_dup {
 /* Type of children is mlx5_flow_rule */
 struct fs_fte {
 	struct fs_node node;
-	struct mlx5_fs_dr_rule fs_dr_rule;
+	union {
+		struct mlx5_fs_dr_rule fs_dr_rule;
+		struct mlx5_fs_hws_rule fs_hws_rule;
+	};
 	u32 val[MLX5_ST_SZ_DW_MATCH_PARAM];
 	struct fs_fte_action act_dests;
 	struct fs_fte_dup *dup;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
index 6ee902999a01..e142e350160a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
@@ -357,6 +357,546 @@ static int mlx5_cmd_hws_destroy_flow_group(struct mlx5_flow_root_namespace *ns,
 	return mlx5hws_bwc_matcher_destroy(fg->fs_hws_matcher.matcher);
 }
 
+static struct mlx5hws_action *
+get_dest_action_ft(struct mlx5_fs_hws_context *fs_ctx,
+		   struct mlx5_flow_rule *dst)
+{
+	return xa_load(&fs_ctx->hws_pool.table_dests, dst->dest_attr.ft->id);
+}
+
+static struct mlx5hws_action *
+get_dest_action_table_num(struct mlx5_fs_hws_context *fs_ctx,
+			  struct mlx5_flow_rule *dst)
+{
+	u32 table_num = dst->dest_attr.ft_num;
+
+	return xa_load(&fs_ctx->hws_pool.table_dests, table_num);
+}
+
+static struct mlx5hws_action *
+create_dest_action_table_num(struct mlx5_fs_hws_context *fs_ctx,
+			     struct mlx5_flow_rule *dst)
+{
+	u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB | MLX5HWS_ACTION_FLAG_SHARED;
+	struct mlx5hws_context *ctx = fs_ctx->hws_ctx;
+	u32 table_num = dst->dest_attr.ft_num;
+
+	return mlx5hws_action_create_dest_table_num(ctx, table_num, flags);
+}
+
+static struct mlx5hws_action *
+create_dest_action_range(struct mlx5hws_context *ctx, struct mlx5_flow_rule *dst)
+{
+	u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB | MLX5HWS_ACTION_FLAG_SHARED;
+	struct mlx5_flow_destination *dest_attr = &dst->dest_attr;
+
+	return mlx5hws_action_create_dest_match_range(ctx,
+						      dest_attr->range.field,
+						      dest_attr->range.hit_ft,
+						      dest_attr->range.miss_ft,
+						      dest_attr->range.min,
+						      dest_attr->range.max,
+						      flags);
+}
+
+static struct mlx5hws_action *
+create_action_dest_array(struct mlx5hws_context *ctx,
+			 struct mlx5hws_action_dest_attr *dests,
+			 u32 num_of_dests, bool ignore_flow_level,
+			 u32 flow_source)
+{
+	u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB | MLX5HWS_ACTION_FLAG_SHARED;
+
+	return mlx5hws_action_create_dest_array(ctx, num_of_dests, dests,
+						ignore_flow_level,
+						flow_source, flags);
+}
+
+static struct mlx5hws_action *
+get_action_push_vlan(struct mlx5_fs_hws_context *fs_ctx)
+{
+	return fs_ctx->hws_pool.push_vlan_action;
+}
+
+static u32 calc_vlan_hdr(struct mlx5_fs_vlan *vlan)
+{
+	u16 n_ethtype = vlan->ethtype;
+	u8 prio = vlan->prio;
+	u16 vid = vlan->vid;
+
+	return (u32)n_ethtype << 16 | (u32)(prio) << 12 | (u32)vid;
+}
+
+static struct mlx5hws_action *
+get_action_pop_vlan(struct mlx5_fs_hws_context *fs_ctx)
+{
+	return fs_ctx->hws_pool.pop_vlan_action;
+}
+
+static struct mlx5hws_action *
+get_action_decap_tnl_l2_to_l2(struct mlx5_fs_hws_context *fs_ctx)
+{
+	return fs_ctx->hws_pool.decapl2_action;
+}
+
+static struct mlx5hws_action *
+get_dest_action_drop(struct mlx5_fs_hws_context *fs_ctx)
+{
+	return fs_ctx->hws_pool.drop_action;
+}
+
+static struct mlx5hws_action *
+get_action_tag(struct mlx5_fs_hws_context *fs_ctx)
+{
+	return fs_ctx->hws_pool.tag_action;
+}
+
+static struct mlx5hws_action *
+create_action_last(struct mlx5hws_context *ctx)
+{
+	u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB | MLX5HWS_ACTION_FLAG_SHARED;
+
+	return mlx5hws_action_create_last(ctx, flags);
+}
+
+static void destroy_fs_action(struct mlx5_fs_hws_rule_action *fs_action)
+{
+	switch (mlx5hws_action_get_type(fs_action->action)) {
+	case MLX5HWS_ACTION_TYP_CTR:
+		mlx5_fc_put_hws_action(fs_action->counter);
+		break;
+	default:
+		mlx5hws_action_destroy(fs_action->action);
+	}
+}
+
+static void destroy_fs_actions(struct mlx5_fs_hws_rule_action **fs_actions,
+			       int *num_fs_actions)
+{
+	int i;
+
+	/* Free in reverse order to handle action dependencies */
+	for (i = *num_fs_actions - 1; i >= 0; i--)
+		destroy_fs_action(*fs_actions + i);
+	*num_fs_actions = 0;
+	kfree(*fs_actions);
+	*fs_actions = NULL;
+}
+
+/* Splits FTE's actions into cached, rule and destination actions.
+ * The cached and destination actions are saved on the fte hws rule.
+ * The rule actions are returned as a parameter, together with their count.
+ * We want to support a rule with 32 destinations, which means we need to
+ * account for 32 destinations plus usually a counter plus one more action
+ * for a multi-destination flow table.
+ * 32 is SW limitation for array size, keep. HWS limitation is 16M STEs per matcher
+ */
+#define MLX5_FLOW_CONTEXT_ACTION_MAX 34
+static int fte_get_hws_actions(struct mlx5_flow_root_namespace *ns,
+			       struct mlx5_flow_table *ft,
+			       struct mlx5_flow_group *group,
+			       struct fs_fte *fte,
+			       struct mlx5hws_rule_action **ractions)
+{
+	struct mlx5_flow_act *fte_action = &fte->act_dests.action;
+	struct mlx5_fs_hws_context *fs_ctx = &ns->fs_hws_context;
+	struct mlx5hws_action_dest_attr *dest_actions;
+	struct mlx5hws_context *ctx = fs_ctx->hws_ctx;
+	struct mlx5_fs_hws_rule_action *fs_actions;
+	struct mlx5_core_dev *dev = ns->dev;
+	struct mlx5hws_action *dest_action;
+	struct mlx5hws_action *tmp_action;
+	struct mlx5_fs_hws_pr *pr_data;
+	struct mlx5_fs_hws_mh *mh_data;
+	bool delay_encap_set = false;
+	struct mlx5_flow_rule *dst;
+	int num_dest_actions = 0;
+	int num_fs_actions = 0;
+	int num_actions = 0;
+	int err;
+
+	*ractions = kcalloc(MLX5_FLOW_CONTEXT_ACTION_MAX, sizeof(**ractions),
+			    GFP_KERNEL);
+	if (!*ractions) {
+		err = -ENOMEM;
+		goto out_err;
+	}
+
+	fs_actions = kcalloc(MLX5_FLOW_CONTEXT_ACTION_MAX,
+			     sizeof(*fs_actions), GFP_KERNEL);
+	if (!fs_actions) {
+		err = -ENOMEM;
+		goto free_actions_alloc;
+	}
+
+	dest_actions = kcalloc(MLX5_FLOW_CONTEXT_ACTION_MAX,
+			       sizeof(*dest_actions), GFP_KERNEL);
+	if (!dest_actions) {
+		err = -ENOMEM;
+		goto free_fs_actions_alloc;
+	}
+
+	/* The order of the actions are must to be kept, only the following
+	 * order is supported by HW steering:
+	 * HWS: decap -> remove_hdr -> pop_vlan -> modify header -> push_vlan
+	 *      -> reformat (insert_hdr/encap) -> ctr -> tag -> aso
+	 *      -> drop -> FWD:tbl/vport/sampler/tbl_num/range -> dest_array -> last
+	 */
+	if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_DECAP) {
+		tmp_action = get_action_decap_tnl_l2_to_l2(fs_ctx);
+		if (!tmp_action) {
+			err = -ENOMEM;
+			goto free_dest_actions_alloc;
+		}
+		(*ractions)[num_actions++].action = tmp_action;
+	}
+
+	if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT) {
+		int reformat_type = fte_action->pkt_reformat->reformat_type;
+
+		if (fte_action->pkt_reformat->owner == MLX5_FLOW_RESOURCE_OWNER_FW) {
+			mlx5_core_err(dev, "FW-owned reformat can't be used in HWS rule\n");
+			err = -EINVAL;
+			goto free_actions;
+		}
+
+		if (reformat_type == MLX5_REFORMAT_TYPE_L3_TUNNEL_TO_L2) {
+			pr_data = fte_action->pkt_reformat->fs_hws_action.pr_data;
+			(*ractions)[num_actions].reformat.offset = pr_data->offset;
+			(*ractions)[num_actions].reformat.hdr_idx = pr_data->hdr_idx;
+			(*ractions)[num_actions].reformat.data = pr_data->data;
+			(*ractions)[num_actions++].action =
+				fte_action->pkt_reformat->fs_hws_action.hws_action;
+		} else if (reformat_type == MLX5_REFORMAT_TYPE_REMOVE_HDR) {
+			(*ractions)[num_actions++].action =
+				fte_action->pkt_reformat->fs_hws_action.hws_action;
+		} else {
+			delay_encap_set = true;
+		}
+	}
+
+	if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_POP) {
+		tmp_action = get_action_pop_vlan(fs_ctx);
+		if (!tmp_action) {
+			err = -ENOMEM;
+			goto free_actions;
+		}
+		(*ractions)[num_actions++].action = tmp_action;
+	}
+
+	if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_POP_2) {
+		tmp_action = get_action_pop_vlan(fs_ctx);
+		if (!tmp_action) {
+			err = -ENOMEM;
+			goto free_actions;
+		}
+		(*ractions)[num_actions++].action = tmp_action;
+	}
+
+	if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) {
+		mh_data = fte_action->modify_hdr->fs_hws_action.mh_data;
+		(*ractions)[num_actions].modify_header.offset = mh_data->offset;
+		(*ractions)[num_actions].modify_header.data = mh_data->data;
+		(*ractions)[num_actions++].action =
+			fte_action->modify_hdr->fs_hws_action.hws_action;
+	}
+
+	if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH) {
+		tmp_action = get_action_push_vlan(fs_ctx);
+		if (!tmp_action) {
+			err = -ENOMEM;
+			goto free_actions;
+		}
+		(*ractions)[num_actions].push_vlan.vlan_hdr =
+			htonl(calc_vlan_hdr(&fte_action->vlan[0]));
+		(*ractions)[num_actions++].action = tmp_action;
+	}
+
+	if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH_2) {
+		tmp_action = get_action_push_vlan(fs_ctx);
+		if (!tmp_action) {
+			err = -ENOMEM;
+			goto free_actions;
+		}
+		(*ractions)[num_actions].push_vlan.vlan_hdr =
+			htonl(calc_vlan_hdr(&fte_action->vlan[1]));
+		(*ractions)[num_actions++].action = tmp_action;
+	}
+
+	if (delay_encap_set) {
+		pr_data = fte_action->pkt_reformat->fs_hws_action.pr_data;
+		(*ractions)[num_actions].reformat.offset = pr_data->offset;
+		(*ractions)[num_actions].reformat.data = pr_data->data;
+		(*ractions)[num_actions++].action =
+			fte_action->pkt_reformat->fs_hws_action.hws_action;
+	}
+
+	if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_COUNT) {
+		list_for_each_entry(dst, &fte->node.children, node.list) {
+			struct mlx5_fc *counter;
+
+			if (dst->dest_attr.type !=
+			    MLX5_FLOW_DESTINATION_TYPE_COUNTER)
+				continue;
+
+			if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
+				err = -EOPNOTSUPP;
+				goto free_actions;
+			}
+
+			counter = dst->dest_attr.counter;
+			tmp_action = mlx5_fc_get_hws_action(ctx, counter);
+			if (!tmp_action) {
+				err = -EINVAL;
+				goto free_actions;
+			}
+
+			(*ractions)[num_actions].counter.offset =
+				mlx5_fc_id(counter) - mlx5_fc_get_base_id(counter);
+			(*ractions)[num_actions++].action = tmp_action;
+			fs_actions[num_fs_actions].action = tmp_action;
+			fs_actions[num_fs_actions++].counter = counter;
+		}
+	}
+
+	if (fte->act_dests.flow_context.flow_tag) {
+		if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
+			err = -EOPNOTSUPP;
+			goto free_actions;
+		}
+		tmp_action = get_action_tag(fs_ctx);
+		if (!tmp_action) {
+			err = -ENOMEM;
+			goto free_actions;
+		}
+		(*ractions)[num_actions].tag.value = fte->act_dests.flow_context.flow_tag;
+		(*ractions)[num_actions++].action = tmp_action;
+	}
+
+	if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_EXECUTE_ASO) {
+		err = -EOPNOTSUPP;
+		goto free_actions;
+	}
+
+	if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_DROP) {
+		dest_action = get_dest_action_drop(fs_ctx);
+		if (!dest_action) {
+			err = -ENOMEM;
+			goto free_actions;
+		}
+		dest_actions[num_dest_actions++].dest = dest_action;
+	}
+
+	if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) {
+		list_for_each_entry(dst, &fte->node.children, node.list) {
+			struct mlx5_flow_destination *attr = &dst->dest_attr;
+
+			if (num_fs_actions == MLX5_FLOW_CONTEXT_ACTION_MAX ||
+			    num_dest_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
+				err = -EOPNOTSUPP;
+				goto free_actions;
+			}
+			if (attr->type == MLX5_FLOW_DESTINATION_TYPE_COUNTER)
+				continue;
+
+			switch (attr->type) {
+			case MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE:
+				dest_action = get_dest_action_ft(fs_ctx, dst);
+				break;
+			case MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE_NUM:
+				dest_action = get_dest_action_table_num(fs_ctx, dst);
+				if (dest_action)
+					break;
+				dest_action = create_dest_action_table_num(fs_ctx, dst);
+				fs_actions[num_fs_actions++].action = dest_action;
+				break;
+			case MLX5_FLOW_DESTINATION_TYPE_RANGE:
+				dest_action = create_dest_action_range(ctx, dst);
+				fs_actions[num_fs_actions++].action = dest_action;
+				break;
+			default:
+				err = -EOPNOTSUPP;
+				goto free_actions;
+			}
+			if (!dest_action) {
+				err = -ENOMEM;
+				goto free_actions;
+			}
+			dest_actions[num_dest_actions++].dest = dest_action;
+		}
+	}
+
+	if (num_dest_actions == 1) {
+		if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
+			err = -EOPNOTSUPP;
+			goto free_actions;
+		}
+		(*ractions)[num_actions++].action = dest_actions->dest;
+	} else if (num_dest_actions > 1) {
+		bool ignore_flow_level =
+			!!(fte_action->flags & FLOW_ACT_IGNORE_FLOW_LEVEL);
+
+		if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX ||
+		    num_fs_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
+			err = -EOPNOTSUPP;
+			goto free_actions;
+		}
+		tmp_action = create_action_dest_array(ctx, dest_actions,
+						      num_dest_actions,
+						      ignore_flow_level,
+						      fte->act_dests.flow_context.flow_source);
+		if (!tmp_action) {
+			err = -EOPNOTSUPP;
+			goto free_actions;
+		}
+		fs_actions[num_fs_actions++].action = tmp_action;
+		(*ractions)[num_actions++].action = tmp_action;
+	}
+
+	if
(num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX || + num_fs_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) { + err = -EOPNOTSUPP; + goto free_actions; + } + + tmp_action = create_action_last(ctx); + if (!tmp_action) { + err = -ENOMEM; + goto free_actions; + } + fs_actions[num_fs_actions++].action = tmp_action; + (*ractions)[num_actions++].action = tmp_action; + + kfree(dest_actions); + + /* Actions created specifically for this rule will be destroyed + * once rule is deleted. + */ + fte->fs_hws_rule.num_fs_actions = num_fs_actions; + fte->fs_hws_rule.hws_fs_actions = fs_actions; + + return 0; + +free_actions: + destroy_fs_actions(&fs_actions, &num_fs_actions); +free_dest_actions_alloc: + kfree(dest_actions); +free_fs_actions_alloc: + kfree(fs_actions); +free_actions_alloc: + kfree(*ractions); + *ractions = NULL; +out_err: + return err; +} + +static int mlx5_cmd_hws_create_fte(struct mlx5_flow_root_namespace *ns, + struct mlx5_flow_table *ft, + struct mlx5_flow_group *group, + struct fs_fte *fte) +{ + struct mlx5hws_match_parameters params; + struct mlx5hws_rule_action *ractions; + struct mlx5hws_bwc_rule *rule; + int err = 0; + + if (mlx5_fs_cmd_is_fw_term_table(ft)) { + /* Packet reformat on terminamtion table not supported yet */ + if (fte->act_dests.action.action & + MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT) + return -EOPNOTSUPP; + return mlx5_fs_cmd_get_fw_cmds()->create_fte(ns, ft, group, fte); + } + + err = fte_get_hws_actions(ns, ft, group, fte, &ractions); + if (err) + goto out_err; + + params.match_sz = sizeof(fte->val); + params.match_buf = fte->val; + + rule = mlx5hws_bwc_rule_create(group->fs_hws_matcher.matcher, ¶ms, + fte->act_dests.flow_context.flow_source, + ractions); + kfree(ractions); + if (!rule) { + err = -EINVAL; + goto free_actions; + } + + fte->fs_hws_rule.bwc_rule = rule; + return 0; + +free_actions: + destroy_fs_actions(&fte->fs_hws_rule.hws_fs_actions, + &fte->fs_hws_rule.num_fs_actions); +out_err: + mlx5_core_err(ns->dev, "Failed to create hws 
rule err(%d)\n", err); + return err; +} + +static int mlx5_cmd_hws_delete_fte(struct mlx5_flow_root_namespace *ns, + struct mlx5_flow_table *ft, + struct fs_fte *fte) +{ + struct mlx5_fs_hws_rule *rule = &fte->fs_hws_rule; + int err; + + if (mlx5_fs_cmd_is_fw_term_table(ft)) + return mlx5_fs_cmd_get_fw_cmds()->delete_fte(ns, ft, fte); + + err = mlx5hws_bwc_rule_destroy(rule->bwc_rule); + rule->bwc_rule = NULL; + + destroy_fs_actions(&rule->hws_fs_actions, &rule->num_fs_actions); + + return err; +} + +static int mlx5_cmd_hws_update_fte(struct mlx5_flow_root_namespace *ns, + struct mlx5_flow_table *ft, + struct mlx5_flow_group *group, + int modify_mask, + struct fs_fte *fte) +{ + int allowed_mask = BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_ACTION) | + BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_DESTINATION_LIST) | + BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_FLOW_COUNTERS); + struct mlx5_fs_hws_rule_action *saved_hws_fs_actions; + struct mlx5hws_rule_action *ractions; + int saved_num_fs_actions; + int ret; + + if (mlx5_fs_cmd_is_fw_term_table(ft)) + return mlx5_fs_cmd_get_fw_cmds()->update_fte(ns, ft, group, + modify_mask, fte); + + if ((modify_mask & ~allowed_mask) != 0) + return -EINVAL; + + saved_hws_fs_actions = fte->fs_hws_rule.hws_fs_actions; + saved_num_fs_actions = fte->fs_hws_rule.num_fs_actions; + + ret = fte_get_hws_actions(ns, ft, group, fte, &ractions); + if (ret) + return ret; + + ret = mlx5hws_bwc_rule_action_update(fte->fs_hws_rule.bwc_rule, ractions); + kfree(ractions); + if (ret) + goto restore_actions; + + destroy_fs_actions(&saved_hws_fs_actions, &saved_num_fs_actions); + return ret; + +restore_actions: + destroy_fs_actions(&fte->fs_hws_rule.hws_fs_actions, + &fte->fs_hws_rule.num_fs_actions); + fte->fs_hws_rule.hws_fs_actions = saved_hws_fs_actions; + fte->fs_hws_rule.num_fs_actions = saved_num_fs_actions; + return ret; +} + static struct mlx5hws_action * create_action_remove_header_vlan(struct mlx5hws_context *ctx) { @@ -712,6 +1252,9 @@ static const struct 
mlx5_flow_cmds mlx5_flow_cmds_hws = { .update_root_ft = mlx5_cmd_hws_update_root_ft, .create_flow_group = mlx5_cmd_hws_create_flow_group, .destroy_flow_group = mlx5_cmd_hws_destroy_flow_group, + .create_fte = mlx5_cmd_hws_create_fte, + .delete_fte = mlx5_cmd_hws_delete_fte, + .update_fte = mlx5_cmd_hws_update_fte, .packet_reformat_alloc = mlx5_cmd_hws_packet_reformat_alloc, .packet_reformat_dealloc = mlx5_cmd_hws_packet_reformat_dealloc, .modify_header_alloc = mlx5_cmd_hws_modify_header_alloc, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h index c9807abd6c25..d260b14e3963 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h @@ -43,6 +43,19 @@ struct mlx5_fs_hws_matcher { struct mlx5hws_bwc_matcher *matcher; }; +struct mlx5_fs_hws_rule_action { + struct mlx5hws_action *action; + union { + struct mlx5_fc *counter; + }; +}; + +struct mlx5_fs_hws_rule { + struct mlx5hws_bwc_rule *bwc_rule; + struct mlx5_fs_hws_rule_action *hws_fs_actions; + int num_fs_actions; +}; + #ifdef CONFIG_MLX5_HW_STEERING const struct mlx5_flow_cmds *mlx5_fs_cmd_get_hws_cmds(void); From patchwork Tue Jan 7 06:07:05 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 13928269 X-Patchwork-Delegate: kuba@kernel.org Received: from NAM11-CO1-obe.outbound.protection.outlook.com (mail-co1nam11on2063.outbound.protection.outlook.com [40.107.220.63]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CA9CD1C3BFE for ; Tue, 7 Jan 2025 06:09:20 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=fail smtp.client-ip=40.107.220.63 ARC-Seal: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736230162; 
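The update path above (mlx5_cmd_hws_update_fte) keeps the rule's current fs actions aside, installs a freshly built set, and restores the saved set only if the hardware update fails. A minimal userspace sketch of that save/try/roll-back pattern — all types and names here are illustrative stand-ins, not the driver's API:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative rule object: just the currently installed action set. */
struct rule {
	int *actions;
	int num_actions;
};

/* Stand-in for the hardware update: fails on any negative "action". */
static int hw_install(const int *actions, int n)
{
	for (int i = 0; i < n; i++)
		if (actions[i] < 0)
			return -1;
	return 0;
}

/* Swap in new actions; on failure restore the saved ones so the rule
 * is left exactly as it was before the update attempt. */
static int rule_update(struct rule *r, int *new_actions, int n)
{
	int *saved = r->actions;	/* keep the old set aside */
	int saved_n = r->num_actions;
	int err;

	r->actions = new_actions;
	r->num_actions = n;

	err = hw_install(new_actions, n);
	if (err) {
		/* roll back: re-attach the previously installed set */
		r->actions = saved;
		r->num_actions = saved_n;
		return err;
	}
	/* success: this is where the old set would be destroyed */
	return 0;
}
```

The driver does the same dance with `saved_hws_fs_actions`/`saved_num_fs_actions`, destroying whichever action set ends up unused.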
From: Tariq Toukan
To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn" 
CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , Mark Bloch , Moshe Shemesh , Yevgeny Kliteynik , Tariq Toukan 
Subject: [PATCH net-next 10/13] net/mlx5: fs, add support for dest vport HWS action
Date: Tue, 7 Jan 2025 08:07:05 +0200
Message-ID: <20250107060708.1610882-11-tariqt@nvidia.com>
In-Reply-To: <20250107060708.1610882-1-tariqt@nvidia.com>
References: <20250107060708.1610882-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
From: Moshe Shemesh

Add support for the HW Steering action of vport destination.

Add a dest vport actions cache. Hold an action in the cache per vport,
or per vport and vhca_id. Add an action to the cache on demand and
remove it on namespace closure, to reduce action creation and
destruction.

Signed-off-by: Moshe Shemesh
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
---
 .../mellanox/mlx5/core/steering/hws/fs_hws.c | 62 +++++++++++++++++++
 .../mellanox/mlx5/core/steering/hws/fs_hws.h |  2 +
 2 files changed, 64 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
index e142e350160a..337cc3cc6ff6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /* Copyright (c) 2024 NVIDIA Corporation & Affiliates */
 
+#include
 #include
 #include
 #include
@@ -60,6 +61,8 @@ static int init_hws_actions_pool(struct mlx5_core_dev *dev,
 	xa_init(&hws_pool->el2tol2tnl_pools);
 	xa_init(&hws_pool->mh_pools);
 	xa_init(&hws_pool->table_dests);
+	xa_init(&hws_pool->vport_dests);
+	xa_init(&hws_pool->vport_vhca_dests);
 	return 0;
 
 cleanup_insert_hdr:
@@ -82,9 +85,16 @@ static int init_hws_actions_pool(struct mlx5_core_dev *dev,
 static void cleanup_hws_actions_pool(struct mlx5_fs_hws_context *fs_ctx)
 {
 	struct mlx5_fs_hws_actions_pool *hws_pool = &fs_ctx->hws_pool;
+	struct mlx5hws_action *action;
 	struct mlx5_fs_pool *pool;
 	unsigned long i;
 
+	xa_for_each(&hws_pool->vport_vhca_dests, i, action)
+		mlx5hws_action_destroy(action);
+	xa_destroy(&hws_pool->vport_vhca_dests);
+	xa_for_each(&hws_pool->vport_dests, i, action)
+		mlx5hws_action_destroy(action);
+	xa_destroy(&hws_pool->vport_dests);
 	xa_destroy(&hws_pool->table_dests);
 	xa_for_each(&hws_pool->mh_pools, i, pool)
 		destroy_mh_pool(pool, &hws_pool->mh_pools, i);
@@ -384,6 +394,51 @@ create_dest_action_table_num(struct mlx5_fs_hws_context *fs_ctx,
 	return mlx5hws_action_create_dest_table_num(ctx, table_num, flags);
 }
 
+static struct mlx5hws_action *
+get_dest_action_vport(struct mlx5_fs_hws_context *fs_ctx,
+		      struct mlx5_flow_rule *dst, bool is_dest_type_uplink)
+{
+	u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB | MLX5HWS_ACTION_FLAG_SHARED;
+	struct mlx5_flow_destination *dest_attr = &dst->dest_attr;
+	struct mlx5hws_context *ctx = fs_ctx->hws_ctx;
+	struct mlx5hws_action *dest;
+	struct xarray *dests_xa;
+	bool vhca_id_valid;
+	unsigned long idx;
+	u16 vport_num;
+	int err;
+
+	vhca_id_valid = is_dest_type_uplink ||
+			(dest_attr->vport.flags & MLX5_FLOW_DEST_VPORT_VHCA_ID);
+	vport_num = is_dest_type_uplink ? MLX5_VPORT_UPLINK : dest_attr->vport.num;
+	if (vhca_id_valid) {
+		dests_xa = &fs_ctx->hws_pool.vport_vhca_dests;
+		idx = dest_attr->vport.vhca_id << 16 | vport_num;
+	} else {
+		dests_xa = &fs_ctx->hws_pool.vport_dests;
+		idx = vport_num;
+	}
+dest_load:
+	dest = xa_load(dests_xa, idx);
+	if (dest)
+		return dest;
+
+	dest = mlx5hws_action_create_dest_vport(ctx, vport_num, vhca_id_valid,
+						dest_attr->vport.vhca_id, flags);
+
+	err = xa_insert(dests_xa, idx, dest, GFP_KERNEL);
+	if (err) {
+		mlx5hws_action_destroy(dest);
+		dest = NULL;
+
+		if (err == -EBUSY)
+			/* xarray entry was already stored by another thread */
+			goto dest_load;
+	}
+
+	return dest;
+}
+
 static struct mlx5hws_action *
 create_dest_action_range(struct mlx5hws_context *ctx, struct mlx5_flow_rule *dst)
 {
@@ -690,6 +745,8 @@ static int fte_get_hws_actions(struct mlx5_flow_root_namespace *ns,
 	if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) {
 		list_for_each_entry(dst, &fte->node.children, node.list) {
 			struct mlx5_flow_destination *attr = &dst->dest_attr;
+			bool is_dest_type_uplink =
+				attr->type == MLX5_FLOW_DESTINATION_TYPE_UPLINK;
 
 			if (num_fs_actions == MLX5_FLOW_CONTEXT_ACTION_MAX ||
 			    num_dest_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
@@ -714,6 +771,11 @@ static int fte_get_hws_actions(struct mlx5_flow_root_namespace *ns,
 			dest_action = create_dest_action_range(ctx, dst);
 			fs_actions[num_fs_actions++].action = dest_action;
 			break;
+		case MLX5_FLOW_DESTINATION_TYPE_UPLINK:
+		case MLX5_FLOW_DESTINATION_TYPE_VPORT:
+			dest_action = get_dest_action_vport(fs_ctx, dst,
+							    is_dest_type_uplink);
+			break;
 		default:
 			err = -EOPNOTSUPP;
 			goto free_actions;

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
index d260b14e3963..abc207274d89 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
@@ -20,6 +20,8 @@ struct mlx5_fs_hws_actions_pool {
 	struct xarray el2tol2tnl_pools;
 	struct xarray mh_pools;
 	struct xarray table_dests;
+	struct xarray vport_vhca_dests;
+	struct xarray vport_dests;
 };
 
 struct mlx5_fs_hws_context {

From patchwork Tue Jan 7 06:07:06 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13928270
X-Patchwork-Delegate: kuba@kernel.org
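The dest vport patch above caches actions in an xarray keyed by vport (or `vhca_id << 16 | vport`): look the entry up first, create it on a miss, and if `xa_insert` returns `-EBUSY` a concurrent caller already stored one, so the local copy is destroyed and the lookup retried. A stripped-down userspace sketch of that lookup-or-create pattern — the fixed-size table stands in for the kernel xarray, and all names are illustrative, not the driver's:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

#define CACHE_SLOTS 64

static void *cache[CACHE_SLOTS];	/* stand-in for the xarray */

static void *cache_load(unsigned long idx)
{
	return cache[idx % CACHE_SLOTS];
}

/* Like xa_insert(): refuses to overwrite an existing entry. */
static int cache_insert(unsigned long idx, void *entry)
{
	unsigned long slot = idx % CACHE_SLOTS;

	if (cache[slot])
		return -EBUSY;	/* another path stored an entry first */
	cache[slot] = entry;
	return 0;
}

static void *get_or_create(unsigned long idx, void *(*create)(void),
			   void (*destroy)(void *))
{
	void *entry;
	int err;

retry:
	entry = cache_load(idx);
	if (entry)
		return entry;	/* cache hit: shared entry */

	entry = create();	/* cache miss: create on demand */
	if (!entry)
		return NULL;

	err = cache_insert(idx, entry);
	if (err) {
		destroy(entry);	/* lost the race: use the winner's entry */
		if (err == -EBUSY)
			goto retry;
		return NULL;
	}
	return entry;
}

/* Demo creator/destructor for the sketch. */
static int demo_obj;
static void *make_entry(void) { return &demo_obj; }
static void drop_entry(void *e) { (void)e; }
```

Entries created this way stay shared until teardown, which mirrors the patch's cleanup loop destroying every cached action on namespace closure.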
From: Tariq Toukan
To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn" 
CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , Mark Bloch , Moshe Shemesh , Yevgeny Kliteynik , Tariq Toukan 
Subject: [PATCH net-next 11/13] net/mlx5: fs, set create match definer to not supported by HWS
Date: Tue, 7 Jan 2025 08:07:06 +0200
Message-ID: <20250107060708.1610882-12-tariqt@nvidia.com>
In-Reply-To: <20250107060708.1610882-1-tariqt@nvidia.com>
References: <20250107060708.1610882-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
From: Moshe Shemesh

Currently, HW Steering does not support the create and destroy match
definer API functions. Return a not-supported error in case they are
requested.

Signed-off-by: Moshe Shemesh
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
---
 .../mellanox/mlx5/core/steering/hws/fs_hws.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
index 337cc3cc6ff6..d5924e22952d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
@@ -1307,6 +1307,18 @@ static void mlx5_cmd_hws_modify_header_dealloc(struct mlx5_flow_root_namespace *
 	modify_hdr->fs_hws_action.mh_data = NULL;
 }
 
+static int mlx5_cmd_hws_create_match_definer(struct mlx5_flow_root_namespace *ns,
+					     u16 format_id, u32 *match_mask)
+{
+	return -EOPNOTSUPP;
+}
+
+static int mlx5_cmd_hws_destroy_match_definer(struct mlx5_flow_root_namespace *ns,
+					      int definer_id)
+{
+	return -EOPNOTSUPP;
+}
+
 static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = {
 	.create_flow_table = mlx5_cmd_hws_create_flow_table,
 	.destroy_flow_table = mlx5_cmd_hws_destroy_flow_table,
@@ -1321,6 +1333,8 @@ static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = {
 	.packet_reformat_dealloc = mlx5_cmd_hws_packet_reformat_dealloc,
 	.modify_header_alloc = mlx5_cmd_hws_modify_header_alloc,
 	.modify_header_dealloc = mlx5_cmd_hws_modify_header_dealloc,
+	.create_match_definer = mlx5_cmd_hws_create_match_definer,
+	.destroy_match_definer = mlx5_cmd_hws_destroy_match_definer,
 	.create_ns = mlx5_cmd_hws_create_ns,
 	.destroy_ns = mlx5_cmd_hws_destroy_ns,
 	.set_peer = mlx5_cmd_hws_set_peer,

From patchwork Tue Jan 7 06:07:07 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13928272
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, "Andrew Lunn"
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Mark Bloch, Moshe Shemesh, Yevgeny Kliteynik, Tariq Toukan
Subject: [PATCH net-next 12/13] net/mlx5: fs, add HWS get capabilities
Date: Tue, 7 Jan 2025 08:07:07 +0200
Message-ID: <20250107060708.1610882-13-tariqt@nvidia.com>
X-Mailer: git-send-email 2.45.0
In-Reply-To: <20250107060708.1610882-1-tariqt@nvidia.com>
References: <20250107060708.1610882-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
From: Moshe Shemesh

Add the get capabilities API function to the HW Steering flow commands.

Signed-off-by: Moshe Shemesh
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
---
 .../mellanox/mlx5/core/steering/hws/fs_hws.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
index d5924e22952d..460f549cc2da 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
@@ -1319,6 +1319,17 @@ static int mlx5_cmd_hws_destroy_match_definer(struct mlx5_flow_root_namespace *n
 	return -EOPNOTSUPP;
 }
 
+static u32 mlx5_cmd_hws_get_capabilities(struct mlx5_flow_root_namespace *ns,
+					 enum fs_flow_table_type ft_type)
+{
+	if (ft_type != FS_FT_FDB)
+		return 0;
+
+	return MLX5_FLOW_STEERING_CAP_VLAN_PUSH_ON_RX |
+	       MLX5_FLOW_STEERING_CAP_VLAN_POP_ON_TX |
+	       MLX5_FLOW_STEERING_CAP_MATCH_RANGES;
+}
+
 static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = {
 	.create_flow_table = mlx5_cmd_hws_create_flow_table,
 	.destroy_flow_table = mlx5_cmd_hws_destroy_flow_table,
@@ -1338,6 +1349,7 @@ static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = {
 	.create_ns = mlx5_cmd_hws_create_ns,
 	.destroy_ns = mlx5_cmd_hws_destroy_ns,
 	.set_peer = mlx5_cmd_hws_set_peer,
+	.get_capabilities = mlx5_cmd_hws_get_capabilities,
 };
 
 const struct mlx5_flow_cmds *mlx5_fs_cmd_get_hws_cmds(void)

From patchwork Tue Jan 7 06:07:08 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13928273
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, "Andrew Lunn"
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Mark Bloch, Moshe Shemesh, Yevgeny Kliteynik, Tariq Toukan
Subject: [PATCH net-next 13/13] net/mlx5: fs, add HWS to steering mode options
Date: Tue, 7 Jan 2025 08:07:08 +0200
Message-ID: <20250107060708.1610882-14-tariqt@nvidia.com>
X-Mailer: git-send-email 2.45.0
In-Reply-To: <20250107060708.1610882-1-tariqt@nvidia.com>
References: <20250107060708.1610882-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
From: Moshe Shemesh

Add HW Steering mode to the steering mode options of the mlx5 devlink
param.

Signed-off-by: Moshe Shemesh
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/fs_core.c | 50 +++++++++++++------
 .../mellanox/mlx5/core/steering/hws/fs_hws.c  |  5 ++
 .../mellanox/mlx5/core/steering/hws/fs_hws.h  |  7 +++
 3 files changed, 46 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
index 41b5e98a0495..f43fd96a680d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
@@ -3535,35 +3535,42 @@ static int mlx5_fs_mode_validate(struct devlink *devlink, u32 id,
 {
 	struct mlx5_core_dev *dev = devlink_priv(devlink);
 	char *value = val.vstr;
-	int err = 0;
+	u8 eswitch_mode;
 
-	if (!strcmp(value, "dmfs")) {
+	if (!strcmp(value, "dmfs"))
 		return 0;
-	} else if (!strcmp(value, "smfs")) {
-		u8 eswitch_mode;
-		bool smfs_cap;
 
-		eswitch_mode = mlx5_eswitch_mode(dev);
-		smfs_cap = mlx5_fs_dr_is_supported(dev);
+	if (!strcmp(value, "smfs")) {
+		bool smfs_cap = mlx5_fs_dr_is_supported(dev);
 
 		if (!smfs_cap) {
-			err = -EOPNOTSUPP;
 			NL_SET_ERR_MSG_MOD(extack,
 					   "Software managed steering is not supported by current device");
+			return -EOPNOTSUPP;
 		}
+	} else if (!strcmp(value, "hmfs")) {
+		bool hmfs_cap = mlx5_fs_hws_is_supported(dev);
 
-		else if (eswitch_mode == MLX5_ESWITCH_OFFLOADS) {
+		if (!hmfs_cap) {
 			NL_SET_ERR_MSG_MOD(extack,
-					   "Software managed steering is not supported when eswitch offloads enabled.");
-			err = -EOPNOTSUPP;
+					   "Hardware steering is not supported by current device");
+			return -EOPNOTSUPP;
 		}
 	} else {
 		NL_SET_ERR_MSG_MOD(extack,
-				   "Bad parameter: supported values are [\"dmfs\", \"smfs\"]");
-		err = -EINVAL;
+				   "Bad parameter: supported values are [\"dmfs\", \"smfs\", \"hmfs\"]");
+		return -EINVAL;
 	}
 
-	return err;
+	eswitch_mode = mlx5_eswitch_mode(dev);
+	if (eswitch_mode == MLX5_ESWITCH_OFFLOADS) {
+		NL_SET_ERR_MSG_FMT_MOD(extack,
+				       "Moving to %s is not supported when eswitch offloads enabled.",
+				       value);
+		return -EOPNOTSUPP;
+	}
+
+	return 0;
 }
 
 static int mlx5_fs_mode_set(struct devlink *devlink, u32 id,
@@ -3575,6 +3582,8 @@ static int mlx5_fs_mode_set(struct devlink *devlink, u32 id,
 
 	if (!strcmp(ctx->val.vstr, "smfs"))
 		mode = MLX5_FLOW_STEERING_MODE_SMFS;
+	else if (!strcmp(ctx->val.vstr, "hmfs"))
+		mode = MLX5_FLOW_STEERING_MODE_HMFS;
 	else
 		mode = MLX5_FLOW_STEERING_MODE_DMFS;
 	dev->priv.steering->mode = mode;
@@ -3587,10 +3596,17 @@ static int mlx5_fs_mode_get(struct devlink *devlink, u32 id,
 {
 	struct mlx5_core_dev *dev = devlink_priv(devlink);
 
-	if (dev->priv.steering->mode == MLX5_FLOW_STEERING_MODE_SMFS)
+	switch (dev->priv.steering->mode) {
+	case MLX5_FLOW_STEERING_MODE_SMFS:
 		strscpy(ctx->val.vstr, "smfs", sizeof(ctx->val.vstr));
-	else
+		break;
+	case MLX5_FLOW_STEERING_MODE_HMFS:
+		strscpy(ctx->val.vstr, "hmfs", sizeof(ctx->val.vstr));
+		break;
+	default:
 		strscpy(ctx->val.vstr, "dmfs", sizeof(ctx->val.vstr));
+	}
+
 	return 0;
 }
 
@@ -4009,6 +4025,8 @@ int mlx5_flow_namespace_set_mode(struct mlx5_flow_namespace *ns,
 
 	if (mode == MLX5_FLOW_STEERING_MODE_SMFS)
 		cmds = mlx5_fs_cmd_get_dr_cmds();
+	else if (mode == MLX5_FLOW_STEERING_MODE_HMFS)
+		cmds = mlx5_fs_cmd_get_hws_cmds();
 	else
 		cmds = mlx5_fs_cmd_get_fw_cmds();
 
 	if (!cmds)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
index 460f549cc2da..642f2e84d752 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
@@ -1330,6 +1330,11 @@ static u32 mlx5_cmd_hws_get_capabilities(struct mlx5_flow_root_namespace *ns,
 	       MLX5_FLOW_STEERING_CAP_MATCH_RANGES;
 }
 
+bool mlx5_fs_hws_is_supported(struct mlx5_core_dev *dev)
+{
+	return mlx5hws_is_supported(dev);
+}
+
 static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = {
 	.create_flow_table = mlx5_cmd_hws_create_flow_table,
 	.destroy_flow_table = mlx5_cmd_hws_destroy_flow_table,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
index abc207274d89..34d73ea0fa16 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
@@ -60,10 +60,17 @@ struct mlx5_fs_hws_rule {
 
 #ifdef CONFIG_MLX5_HW_STEERING
 
+bool mlx5_fs_hws_is_supported(struct mlx5_core_dev *dev);
+
 const struct mlx5_flow_cmds *mlx5_fs_cmd_get_hws_cmds(void);
 
 #else
 
+static inline bool mlx5_fs_hws_is_supported(struct mlx5_core_dev *dev)
+{
+	return false;
+}
+
 static inline const struct mlx5_flow_cmds *mlx5_fs_cmd_get_hws_cmds(void)
 {
 	return NULL;
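[Editor's illustration] The refactored mlx5_fs_mode_validate() in patch 13/13 follows a simple shape: dispatch on the mode string ("dmfs"/"smfs"/"hmfs"), run a per-mode capability check, then apply one common eswitch-offloads check before accepting. A minimal user-space sketch of that control flow is below; everything in it (struct fake_dev, the *_supported flags, mode_validate()/mode_set()) is hypothetical stand-in code, not the kernel API:

```c
#include <errno.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical stand-in for the device state the real code queries. */
enum steering_mode { MODE_DMFS, MODE_SMFS, MODE_HMFS };

struct fake_dev {
	bool smfs_supported;    /* stands in for mlx5_fs_dr_is_supported() */
	bool hmfs_supported;    /* stands in for mlx5_fs_hws_is_supported() */
	bool eswitch_offloads;  /* stands in for the MLX5_ESWITCH_OFFLOADS check */
	enum steering_mode mode;
};

/* Mirrors the validate flow: early-accept "dmfs", per-mode capability
 * checks for "smfs"/"hmfs", -EINVAL for unknown strings, and a common
 * eswitch-offloads rejection at the tail. */
static int mode_validate(const struct fake_dev *dev, const char *value)
{
	if (!strcmp(value, "dmfs"))
		return 0;

	if (!strcmp(value, "smfs")) {
		if (!dev->smfs_supported)
			return -EOPNOTSUPP;
	} else if (!strcmp(value, "hmfs")) {
		if (!dev->hmfs_supported)
			return -EOPNOTSUPP;
	} else {
		return -EINVAL;
	}

	/* Switching away from dmfs is refused while offloads are enabled. */
	if (dev->eswitch_offloads)
		return -EOPNOTSUPP;
	return 0;
}

/* Mirrors mlx5_fs_mode_set(): anything unrecognized falls back to dmfs. */
static void mode_set(struct fake_dev *dev, const char *value)
{
	if (!strcmp(value, "smfs"))
		dev->mode = MODE_SMFS;
	else if (!strcmp(value, "hmfs"))
		dev->mode = MODE_HMFS;
	else
		dev->mode = MODE_DMFS;
}
```

The tail-check refactor is the reason the patch can report "Moving to %s is not supported when eswitch offloads enabled." for both smfs and hmfs with one code path, instead of duplicating the message per mode as the old smfs-only branch did.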