From patchwork Thu Jan 9 16:05:32 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13932953
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, "Andrew Lunn"
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Mark Bloch, Moshe Shemesh, Yevgeny Kliteynik, Tariq Toukan
Subject: [PATCH net-next V2 01/15] net/mlx5: fs, add HWS root namespace functions
Date: Thu, 9 Jan 2025 18:05:32 +0200
Message-ID: <20250109160546.1733647-2-tariqt@nvidia.com>
In-Reply-To: <20250109160546.1733647-1-tariqt@nvidia.com>
References: <20250109160546.1733647-1-tariqt@nvidia.com>
X-Mailer: git-send-email 2.45.0
X-Mailing-List: netdev@vger.kernel.org

From: Moshe Shemesh

Add flow steering commands structure for HW steering.
Implement create, destroy and set peer HW steering root namespace
functions.

Signed-off-by: Moshe Shemesh
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/Makefile  |  4 +-
 .../net/ethernet/mellanox/mlx5/core/fs_core.h |  9 ++-
 .../mellanox/mlx5/core/steering/hws/fs_hws.c  | 56 +++++++++++++++++++
 .../mellanox/mlx5/core/steering/hws/fs_hws.h  | 25 +++++++++
 4 files changed, 90 insertions(+), 4 deletions(-)
 create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
 create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Makefile b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
index 10a763e668ed..0008b22417c8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/Makefile
+++ b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
@@ -151,8 +151,8 @@ mlx5_core-$(CONFIG_MLX5_HW_STEERING) += steering/hws/cmd.o \
 					steering/hws/bwc.o \
 					steering/hws/debug.o \
 					steering/hws/vport.o \
-					steering/hws/bwc_complex.o
-
+					steering/hws/bwc_complex.o \
+					steering/hws/fs_hws.o
 
 #
 # SF device
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
index bad2df0715ec..d309906d1106 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
@@ -38,6 +38,7 @@
 #include
 #include
 #include
+#include
 
 #define FDB_TC_MAX_CHAIN 3
 #define FDB_FT_CHAIN (FDB_TC_MAX_CHAIN + 1)
@@ -126,7 +127,8 @@ enum fs_fte_status {
 
 enum mlx5_flow_steering_mode {
 	MLX5_FLOW_STEERING_MODE_DMFS,
-	MLX5_FLOW_STEERING_MODE_SMFS
+	MLX5_FLOW_STEERING_MODE_SMFS,
+	MLX5_FLOW_STEERING_MODE_HMFS,
 };
 
 enum mlx5_flow_steering_capabilty {
@@ -293,7 +295,10 @@ struct mlx5_flow_group {
 struct mlx5_flow_root_namespace {
 	struct mlx5_flow_namespace ns;
 	enum mlx5_flow_steering_mode mode;
-	struct mlx5_fs_dr_domain fs_dr_domain;
+	union {
+		struct mlx5_fs_dr_domain fs_dr_domain;
+		struct mlx5_fs_hws_context fs_hws_context;
+	};
 	enum fs_flow_table_type table_type;
 	struct mlx5_core_dev *dev;
 	struct mlx5_flow_table *root_ft;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
new file mode 100644
index 000000000000..ac61f96af1c3
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
@@ -0,0 +1,56 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/* Copyright (c) 2025 NVIDIA Corporation & Affiliates */
+
+#include
+#include
+#include
+#include "mlx5hws.h"
+
+#define MLX5HWS_CTX_MAX_NUM_OF_QUEUES 16
+#define MLX5HWS_CTX_QUEUE_SIZE 256
+
+static int mlx5_cmd_hws_create_ns(struct mlx5_flow_root_namespace *ns)
+{
+	struct mlx5hws_context_attr hws_ctx_attr = {};
+
+	hws_ctx_attr.queues = min_t(int, num_online_cpus(),
+				    MLX5HWS_CTX_MAX_NUM_OF_QUEUES);
+	hws_ctx_attr.queue_size = MLX5HWS_CTX_QUEUE_SIZE;
+
+	ns->fs_hws_context.hws_ctx =
+		mlx5hws_context_open(ns->dev, &hws_ctx_attr);
+	if (!ns->fs_hws_context.hws_ctx) {
+		mlx5_core_err(ns->dev, "Failed to create hws flow namespace\n");
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static int mlx5_cmd_hws_destroy_ns(struct mlx5_flow_root_namespace *ns)
+{
+	return mlx5hws_context_close(ns->fs_hws_context.hws_ctx);
+}
+
+static int mlx5_cmd_hws_set_peer(struct mlx5_flow_root_namespace *ns,
+				 struct mlx5_flow_root_namespace *peer_ns,
+				 u16 peer_vhca_id)
+{
+	struct mlx5hws_context *peer_ctx = NULL;
+
+	if (peer_ns)
+		peer_ctx = peer_ns->fs_hws_context.hws_ctx;
+	mlx5hws_context_set_peer(ns->fs_hws_context.hws_ctx, peer_ctx,
+				 peer_vhca_id);
+	return 0;
+}
+
+static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = {
+	.create_ns = mlx5_cmd_hws_create_ns,
+	.destroy_ns = mlx5_cmd_hws_destroy_ns,
+	.set_peer = mlx5_cmd_hws_set_peer,
+};
+
+const struct mlx5_flow_cmds *mlx5_fs_cmd_get_hws_cmds(void)
+{
+	return &mlx5_flow_cmds_hws;
+}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
new file mode 100644
index 000000000000..17ac0d150253
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2025 NVIDIA Corporation & Affiliates */
+
+#ifndef _MLX5_FS_HWS_
+#define _MLX5_FS_HWS_
+
+#include "mlx5hws.h"
+
+struct mlx5_fs_hws_context {
+	struct mlx5hws_context *hws_ctx;
+};
+
+#ifdef CONFIG_MLX5_HW_STEERING
+
+const struct mlx5_flow_cmds *mlx5_fs_cmd_get_hws_cmds(void);
+
+#else
+
+static inline const struct mlx5_flow_cmds *mlx5_fs_cmd_get_hws_cmds(void)
+{
+	return NULL;
+}
+
+#endif /* CONFIG_MLX5_HWS_STEERING */
+#endif
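[Editorial note, not part of the patch] The mlx5_flow_cmds structure registered above is what the flow steering core dispatches through for every namespace operation. The fragment below is a minimal sketch of how the new MLX5_FLOW_STEERING_MODE_HMFS mode could be routed to mlx5_fs_cmd_get_hws_cmds(); the actual wiring lands later in the series and may differ. example_get_cmds_for_mode() is hypothetical, while mlx5_fs_cmd_get_fw_cmds() and mlx5_fs_cmd_get_dr_cmds() are believed to be existing driver helpers.

/* Illustrative sketch only -- not from this patch. Assumes the usual
 * fs_core/fs_cmd context of the mlx5 driver.
 */
static const struct mlx5_flow_cmds *
example_get_cmds_for_mode(enum mlx5_flow_steering_mode mode)
{
	switch (mode) {
	case MLX5_FLOW_STEERING_MODE_SMFS:
		return mlx5_fs_cmd_get_dr_cmds();	/* SW-managed steering */
	case MLX5_FLOW_STEERING_MODE_HMFS:
		return mlx5_fs_cmd_get_hws_cmds();	/* new HW steering backend */
	default:
		return mlx5_fs_cmd_get_fw_cmds();	/* firmware steering */
	}
}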
From patchwork Thu Jan 9 16:05:33 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13932954
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, "Andrew Lunn"
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Mark Bloch, Moshe Shemesh, Yevgeny Kliteynik, Tariq Toukan
Subject: [PATCH net-next V2 02/15] net/mlx5: fs, add HWS flow table API functions
Date: Thu, 9 Jan 2025 18:05:33 +0200
Message-ID: <20250109160546.1733647-3-tariqt@nvidia.com>
In-Reply-To: <20250109160546.1733647-1-tariqt@nvidia.com>
References: <20250109160546.1733647-1-tariqt@nvidia.com>
X-Mailer: git-send-email 2.45.0
X-Mailing-List: netdev@vger.kernel.org
From: Moshe Shemesh

Add API functions to create, modify and destroy HW Steering flow
tables. The modify table function enables changing, connecting or
disconnecting the default miss table. Add an update root flow table API
function.

Signed-off-by: Moshe Shemesh
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/fs_core.h |   5 +-
 .../mellanox/mlx5/core/steering/hws/fs_hws.c  | 113 ++++++++++++++++++
 .../mellanox/mlx5/core/steering/hws/fs_hws.h  |   5 +
 3 files changed, 122 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
index d309906d1106..7fd480a2570d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
@@ -192,7 +192,10 @@ struct mlx5_flow_handle {
 /* Type of children is mlx5_flow_group */
 struct mlx5_flow_table {
 	struct fs_node node;
-	struct mlx5_fs_dr_table fs_dr_table;
+	union {
+		struct mlx5_fs_dr_table fs_dr_table;
+		struct mlx5_fs_hws_table fs_hws_table;
+	};
 	u32 id;
 	u16 vport;
 	unsigned int max_fte;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
index ac61f96af1c3..57d88088e18b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
@@ -44,7 +44,120 @@ static int mlx5_cmd_hws_set_peer(struct mlx5_flow_root_namespace *ns,
 	return 0;
 }
 
+static int mlx5_fs_set_ft_default_miss(struct mlx5_flow_root_namespace *ns,
+				       struct mlx5_flow_table *ft,
+				       struct mlx5_flow_table *next_ft)
+{
+	struct mlx5hws_table *next_tbl;
+	int err;
+
+	if (!ns->fs_hws_context.hws_ctx)
+		return -EINVAL;
+
+	/* if no change required, return */
+	if (!next_ft && !ft->fs_hws_table.miss_ft_set)
+		return 0;
+
+	next_tbl = next_ft ? next_ft->fs_hws_table.hws_table : NULL;
+	err = mlx5hws_table_set_default_miss(ft->fs_hws_table.hws_table, next_tbl);
+	if (err) {
+		mlx5_core_err(ns->dev, "Failed setting FT default miss (%d)\n", err);
+		return err;
+	}
+	ft->fs_hws_table.miss_ft_set = !!next_tbl;
+	return 0;
+}
+
+static int mlx5_cmd_hws_create_flow_table(struct mlx5_flow_root_namespace *ns,
+					  struct mlx5_flow_table *ft,
+					  struct mlx5_flow_table_attr *ft_attr,
+					  struct mlx5_flow_table *next_ft)
+{
+	struct mlx5hws_context *ctx = ns->fs_hws_context.hws_ctx;
+	struct mlx5hws_table_attr tbl_attr = {};
+	struct mlx5hws_table *tbl;
+	int err;
+
+	if (mlx5_fs_cmd_is_fw_term_table(ft))
+		return mlx5_fs_cmd_get_fw_cmds()->create_flow_table(ns, ft, ft_attr,
+								    next_ft);
+
+	if (ns->table_type != FS_FT_FDB) {
+		mlx5_core_err(ns->dev, "Table type %d not supported for HWS\n",
+			      ns->table_type);
+		return -EOPNOTSUPP;
+	}
+
+	tbl_attr.type = MLX5HWS_TABLE_TYPE_FDB;
+	tbl_attr.level = ft_attr->level;
+	tbl = mlx5hws_table_create(ctx, &tbl_attr);
+	if (!tbl) {
+		mlx5_core_err(ns->dev, "Failed creating hws flow_table\n");
+		return -EINVAL;
+	}
+
+	ft->fs_hws_table.hws_table = tbl;
+	ft->id = mlx5hws_table_get_id(tbl);
+
+	if (next_ft) {
+		err = mlx5_fs_set_ft_default_miss(ns, ft, next_ft);
+		if (err)
+			goto destroy_table;
+	}
+
+	ft->max_fte = INT_MAX;
+
+	return 0;
+
+destroy_table:
+	mlx5hws_table_destroy(tbl);
+	ft->fs_hws_table.hws_table = NULL;
+	return err;
+}
+
+static int mlx5_cmd_hws_destroy_flow_table(struct mlx5_flow_root_namespace *ns,
+					   struct mlx5_flow_table *ft)
+{
+	int err;
+
+	if (mlx5_fs_cmd_is_fw_term_table(ft))
+		return mlx5_fs_cmd_get_fw_cmds()->destroy_flow_table(ns, ft);
+
+	err = mlx5_fs_set_ft_default_miss(ns, ft, NULL);
+	if (err)
+		mlx5_core_err(ns->dev, "Failed to disconnect next table (%d)\n", err);
+
+	err = mlx5hws_table_destroy(ft->fs_hws_table.hws_table);
+	if (err)
+		mlx5_core_err(ns->dev, "Failed to destroy flow_table (%d)\n", err);
+
+	return err;
+}
+
+static int mlx5_cmd_hws_modify_flow_table(struct mlx5_flow_root_namespace *ns,
+					  struct mlx5_flow_table *ft,
+					  struct mlx5_flow_table *next_ft)
+{
+	if (mlx5_fs_cmd_is_fw_term_table(ft))
+		return mlx5_fs_cmd_get_fw_cmds()->modify_flow_table(ns, ft, next_ft);
+
+	return mlx5_fs_set_ft_default_miss(ns, ft, next_ft);
+}
+
+static int mlx5_cmd_hws_update_root_ft(struct mlx5_flow_root_namespace *ns,
+				       struct mlx5_flow_table *ft,
+				       u32 underlay_qpn,
+				       bool disconnect)
+{
+	return mlx5_fs_cmd_get_fw_cmds()->update_root_ft(ns, ft, underlay_qpn,
+							 disconnect);
+}
+
 static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = {
+	.create_flow_table = mlx5_cmd_hws_create_flow_table,
+	.destroy_flow_table = mlx5_cmd_hws_destroy_flow_table,
+	.modify_flow_table = mlx5_cmd_hws_modify_flow_table,
+	.update_root_ft = mlx5_cmd_hws_update_root_ft,
 	.create_ns = mlx5_cmd_hws_create_ns,
 	.destroy_ns = mlx5_cmd_hws_destroy_ns,
 	.set_peer = mlx5_cmd_hws_set_peer,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
index 17ac0d150253..c4af8d617b4d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
@@ -10,6 +10,11 @@ struct mlx5_fs_hws_context {
 	struct mlx5hws_context *hws_ctx;
 };
 
+struct mlx5_fs_hws_table {
+	struct mlx5hws_table *hws_table;
+	bool miss_ft_set;
+};
+
 #ifdef CONFIG_MLX5_HW_STEERING
 
 const struct mlx5_flow_cmds *mlx5_fs_cmd_get_hws_cmds(void);
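[Editorial note, not part of the patch] To show what the modify-table path above amounts to at the mlx5hws level, here is a small sketch (not taken from the series) that creates two FDB tables, points the default miss of the first at the second, and unlinks it again before teardown. It uses only calls that appear in this patch; example_chain_fdb_tables() itself is hypothetical and its error handling is trimmed.

static int example_chain_fdb_tables(struct mlx5hws_context *ctx)
{
	struct mlx5hws_table_attr attr = { .type = MLX5HWS_TABLE_TYPE_FDB };
	struct mlx5hws_table *first, *second;
	int err;

	attr.level = 1;
	first = mlx5hws_table_create(ctx, &attr);
	if (!first)
		return -EINVAL;
	attr.level = 2;
	second = mlx5hws_table_create(ctx, &attr);
	if (!second) {
		mlx5hws_table_destroy(first);
		return -EINVAL;
	}

	/* "modify": traffic that misses every rule in 'first' continues in 'second' */
	err = mlx5hws_table_set_default_miss(first, second);

	/* disconnect (NULL) before destroying, mirroring destroy_flow_table above */
	mlx5hws_table_set_default_miss(first, NULL);
	mlx5hws_table_destroy(second);
	mlx5hws_table_destroy(first);
	return err;
}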
From patchwork Thu Jan 9 16:05:34 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13932956
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, "Andrew Lunn"
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Mark Bloch, Moshe Shemesh, Yevgeny Kliteynik, Tariq Toukan
Subject: [PATCH net-next V2 03/15] net/mlx5: fs, add HWS flow group API functions
Date: Thu, 9 Jan 2025 18:05:34 +0200
Message-ID: <20250109160546.1733647-4-tariqt@nvidia.com>
In-Reply-To: <20250109160546.1733647-1-tariqt@nvidia.com>
References: <20250109160546.1733647-1-tariqt@nvidia.com>
X-Mailer: git-send-email 2.45.0
X-Mailing-List: netdev@vger.kernel.org

From: Moshe Shemesh

Add API functions to create and destroy HW
Steering flow groups. Each flow group consists of a Backward Compatible
(BWC) HW Steering matcher which holds the flow group match criteria.

Signed-off-by: Moshe Shemesh
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/fs_core.h |  5 ++-
 .../mellanox/mlx5/core/steering/hws/fs_hws.c  | 42 +++++++++++++++++++
 .../mellanox/mlx5/core/steering/hws/fs_hws.h  |  4 ++
 3 files changed, 50 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
index 7fd480a2570d..915cd3277dfb 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
@@ -285,7 +285,10 @@ struct mlx5_flow_group_mask {
 /* Type of children is fs_fte */
 struct mlx5_flow_group {
 	struct fs_node node;
-	struct mlx5_fs_dr_matcher fs_dr_matcher;
+	union {
+		struct mlx5_fs_dr_matcher fs_dr_matcher;
+		struct mlx5_fs_hws_matcher fs_hws_matcher;
+	};
 	struct mlx5_flow_group_mask mask;
 	u32 start_index;
 	u32 max_ftes;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
index 57d88088e18b..f0cbc9996456 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
@@ -153,11 +153,53 @@ static int mlx5_cmd_hws_update_root_ft(struct mlx5_flow_root_namespace *ns,
 							 disconnect);
 }
 
+static int mlx5_cmd_hws_create_flow_group(struct mlx5_flow_root_namespace *ns,
+					  struct mlx5_flow_table *ft, u32 *in,
+					  struct mlx5_flow_group *fg)
+{
+	struct mlx5hws_match_parameters mask;
+	struct mlx5hws_bwc_matcher *matcher;
+	u8 match_criteria_enable;
+	u32 priority;
+
+	if (mlx5_fs_cmd_is_fw_term_table(ft))
+		return mlx5_fs_cmd_get_fw_cmds()->create_flow_group(ns, ft, in, fg);
+
+	mask.match_buf = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria);
+	mask.match_sz = sizeof(fg->mask.match_criteria);
+
+	match_criteria_enable = MLX5_GET(create_flow_group_in, in,
+					 match_criteria_enable);
+	priority = MLX5_GET(create_flow_group_in, in, start_flow_index);
+	matcher = mlx5hws_bwc_matcher_create(ft->fs_hws_table.hws_table,
+					     priority, match_criteria_enable,
+					     &mask);
+	if (!matcher) {
+		mlx5_core_err(ns->dev, "Failed creating matcher\n");
+		return -EINVAL;
+	}
+
+	fg->fs_hws_matcher.matcher = matcher;
+	return 0;
+}
+
+static int mlx5_cmd_hws_destroy_flow_group(struct mlx5_flow_root_namespace *ns,
+					   struct mlx5_flow_table *ft,
+					   struct mlx5_flow_group *fg)
+{
+	if (mlx5_fs_cmd_is_fw_term_table(ft))
+		return mlx5_fs_cmd_get_fw_cmds()->destroy_flow_group(ns, ft, fg);
+
+	return mlx5hws_bwc_matcher_destroy(fg->fs_hws_matcher.matcher);
+}
+
 static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = {
 	.create_flow_table = mlx5_cmd_hws_create_flow_table,
 	.destroy_flow_table = mlx5_cmd_hws_destroy_flow_table,
 	.modify_flow_table = mlx5_cmd_hws_modify_flow_table,
 	.update_root_ft = mlx5_cmd_hws_update_root_ft,
+	.create_flow_group = mlx5_cmd_hws_create_flow_group,
+	.destroy_flow_group = mlx5_cmd_hws_destroy_flow_group,
 	.create_ns = mlx5_cmd_hws_create_ns,
 	.destroy_ns = mlx5_cmd_hws_destroy_ns,
 	.set_peer = mlx5_cmd_hws_set_peer,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
index c4af8d617b4d..a54b426d99b2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
@@ -15,6 +15,10 @@ struct mlx5_fs_hws_table {
 	bool miss_ft_set;
 };
 
+struct mlx5_fs_hws_matcher {
+	struct mlx5hws_bwc_matcher *matcher;
+};
+
 #ifdef CONFIG_MLX5_HW_STEERING
 
 const struct mlx5_flow_cmds *mlx5_fs_cmd_get_hws_cmds(void);
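[Editorial note, not part of the patch] Stripped of the FW-termination fallback and error handling, the flow-group-to-matcher mapping introduced above boils down to the sketch below. It is illustrative only: 'in' is the firmware-format create_flow_group_in blob that fs_core already builds, example_group_to_bwc_matcher() is hypothetical, and MLX5_ST_SZ_BYTES(fte_match_param) is an assumption standing in for the group's match-criteria buffer size.

static struct mlx5hws_bwc_matcher *
example_group_to_bwc_matcher(struct mlx5hws_table *tbl, u32 *in)
{
	struct mlx5hws_match_parameters mask = {
		.match_buf = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria),
		.match_sz = MLX5_ST_SZ_BYTES(fte_match_param),
	};
	u8 criteria = MLX5_GET(create_flow_group_in, in, match_criteria_enable);
	u32 prio = MLX5_GET(create_flow_group_in, in, start_flow_index);

	/* one BWC matcher per flow group; start_flow_index doubles as the
	 * matcher priority, so groups keep their relative ordering
	 */
	return mlx5hws_bwc_matcher_create(tbl, prio, criteria, &mask);
}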
From patchwork Thu Jan 9 16:05:35 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13932955
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, "Andrew Lunn"
CC: Saeed Mahameed, Gal Pressman, Leon Romanovsky, Mark Bloch, Moshe Shemesh, Yevgeny Kliteynik, Tariq Toukan
Subject: [PATCH net-next V2 04/15] net/mlx5: fs, add HWS actions pool
Date: Thu, 9 Jan 2025 18:05:35 +0200
Message-ID: <20250109160546.1733647-5-tariqt@nvidia.com>
In-Reply-To: <20250109160546.1733647-1-tariqt@nvidia.com>
References: <20250109160546.1733647-1-tariqt@nvidia.com>
X-Mailer: git-send-email 2.45.0
X-Mailing-List: netdev@vger.kernel.org

From: Moshe Shemesh

The HW Steering actions pool will help utilize the option in
HW Steering to share steering actions among different rules. Create the
pool on root namespace creation and add a few HW Steering actions that
do not depend on the steering rule itself and thus can be shared between
rules created on the same namespace: tag, pop_vlan, push_vlan, drop,
decap l2.

Signed-off-by: Moshe Shemesh
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
---
 .../mellanox/mlx5/core/steering/hws/fs_hws.c  | 58 +++++++++++++++++++
 .../mellanox/mlx5/core/steering/hws/fs_hws.h  |  9 +++
 2 files changed, 67 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
index f0cbc9996456..5987710f8706 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
@@ -9,9 +9,60 @@
 #define MLX5HWS_CTX_MAX_NUM_OF_QUEUES 16
 #define MLX5HWS_CTX_QUEUE_SIZE 256
 
+static int mlx5_fs_init_hws_actions_pool(struct mlx5_fs_hws_context *fs_ctx)
+{
+	u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB | MLX5HWS_ACTION_FLAG_SHARED;
+	struct mlx5_fs_hws_actions_pool *hws_pool = &fs_ctx->hws_pool;
+	struct mlx5hws_action_reformat_header reformat_hdr = {};
+	struct mlx5hws_context *ctx = fs_ctx->hws_ctx;
+	enum mlx5hws_action_type action_type;
+
+	hws_pool->tag_action = mlx5hws_action_create_tag(ctx, flags);
+	if (!hws_pool->tag_action)
+		return -ENOSPC;
+	hws_pool->pop_vlan_action = mlx5hws_action_create_pop_vlan(ctx, flags);
+	if (!hws_pool->pop_vlan_action)
+		goto destroy_tag;
+	hws_pool->push_vlan_action = mlx5hws_action_create_push_vlan(ctx, flags);
+	if (!hws_pool->push_vlan_action)
+		goto destroy_pop_vlan;
+	hws_pool->drop_action = mlx5hws_action_create_dest_drop(ctx, flags);
+	if (!hws_pool->drop_action)
+		goto destroy_push_vlan;
+	action_type = MLX5HWS_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
+	hws_pool->decapl2_action =
+		mlx5hws_action_create_reformat(ctx, action_type, 1,
+					       &reformat_hdr, 0, flags);
+	if (!hws_pool->decapl2_action)
+		goto destroy_drop;
+	return 0;
+
+destroy_drop:
+	mlx5hws_action_destroy(hws_pool->drop_action);
+destroy_push_vlan:
+	mlx5hws_action_destroy(hws_pool->push_vlan_action);
+destroy_pop_vlan:
+	mlx5hws_action_destroy(hws_pool->pop_vlan_action);
+destroy_tag:
+	mlx5hws_action_destroy(hws_pool->tag_action);
+	return -ENOSPC;
+}
+
+static void mlx5_fs_cleanup_hws_actions_pool(struct mlx5_fs_hws_context *fs_ctx)
+{
+	struct mlx5_fs_hws_actions_pool *hws_pool = &fs_ctx->hws_pool;
+
+	mlx5hws_action_destroy(hws_pool->decapl2_action);
+	mlx5hws_action_destroy(hws_pool->drop_action);
+	mlx5hws_action_destroy(hws_pool->push_vlan_action);
+	mlx5hws_action_destroy(hws_pool->pop_vlan_action);
+	mlx5hws_action_destroy(hws_pool->tag_action);
+}
+
 static int mlx5_cmd_hws_create_ns(struct mlx5_flow_root_namespace *ns)
 {
 	struct mlx5hws_context_attr hws_ctx_attr = {};
+	int err;
 
 	hws_ctx_attr.queues = min_t(int, num_online_cpus(),
 				    MLX5HWS_CTX_MAX_NUM_OF_QUEUES);
@@ -23,11 +74,18 @@ static int mlx5_cmd_hws_create_ns(struct mlx5_flow_root_namespace *ns)
 		mlx5_core_err(ns->dev, "Failed to create hws flow namespace\n");
 		return -EINVAL;
 	}
+	err = mlx5_fs_init_hws_actions_pool(&ns->fs_hws_context);
+	if (err) {
+		mlx5_core_err(ns->dev, "Failed to init hws actions pool\n");
+		mlx5hws_context_close(ns->fs_hws_context.hws_ctx);
+		return err;
+	}
 	return 0;
 }
 
 static int mlx5_cmd_hws_destroy_ns(struct mlx5_flow_root_namespace *ns)
 {
+	mlx5_fs_cleanup_hws_actions_pool(&ns->fs_hws_context);
 	return mlx5hws_context_close(ns->fs_hws_context.hws_ctx);
 }
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
index a54b426d99b2..a2580b39d728 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
@@ -6,8 +6,17 @@
 
 #include "mlx5hws.h"
 
+struct mlx5_fs_hws_actions_pool {
+	struct mlx5hws_action *tag_action;
+	struct mlx5hws_action *pop_vlan_action;
+	struct mlx5hws_action *push_vlan_action;
+	struct mlx5hws_action *drop_action;
+	struct mlx5hws_action *decapl2_action;
+};
+
 struct mlx5_fs_hws_context {
 	struct mlx5hws_context *hws_ctx;
+	struct mlx5_fs_hws_actions_pool hws_pool;
 };
 
 struct mlx5_fs_hws_table {
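[Editorial note, not part of the patch] The point of the pool is that later rule-insertion code can reuse these pre-created, SHARED-flag actions instead of allocating a per-rule action. The helper below is hypothetical (the real consumers arrive later in the series) and only illustrates the lookup; the MLX5_FLOW_CONTEXT_ACTION_* flags are the existing fs_core action bits.

static struct mlx5hws_action *
example_shared_action(struct mlx5_fs_hws_context *fs_ctx, u32 action_flags)
{
	struct mlx5_fs_hws_actions_pool *pool = &fs_ctx->hws_pool;

	/* pick at most one shared action per call, purely for illustration */
	if (action_flags & MLX5_FLOW_CONTEXT_ACTION_DROP)
		return pool->drop_action;
	if (action_flags & MLX5_FLOW_CONTEXT_ACTION_VLAN_POP)
		return pool->pop_vlan_action;
	if (action_flags & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH)
		return pool->push_vlan_action;
	if (action_flags & MLX5_FLOW_CONTEXT_ACTION_DECAP)
		return pool->decapl2_action;

	return NULL;	/* caller falls back to a rule-specific action */
}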
From patchwork Thu Jan 9 16:05:36 2025

From: Tariq Toukan 
To: "David S. 
Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn" CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , Mark Bloch , Moshe Shemesh , Yevgeny Kliteynik , Tariq Toukan Subject: [PATCH net-next V2 05/15] net/mlx5: fs, add HWS packet reformat API function Date: Thu, 9 Jan 2025 18:05:36 +0200 Message-ID: <20250109160546.1733647-6-tariqt@nvidia.com> X-Mailer: git-send-email 2.45.0 In-Reply-To: <20250109160546.1733647-1-tariqt@nvidia.com> References: <20250109160546.1733647-1-tariqt@nvidia.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-NV-OnPremToCloud: AnonymousSubmission X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: CO1PEPF000044FB:EE_|DM4PR12MB6009:EE_ X-MS-Office365-Filtering-Correlation-Id: a3990a99-bc9d-426d-ced7-08dd30c7b9d7 X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0;ARA:13230040|36860700013|1800799024|376014|82310400026; X-Microsoft-Antispam-Message-Info: 13rrOSG4eBe3ZqvnK6w/FfO8Ep8XzTdyKiEx6UfMbm14EBapWe/HbVFhNjJDF67UbH1HuKkYMdfk2pbPxjA3p8HzrRuKn9ZzwylaD/F8bIYh7GiHntBEGzGrBSUkcod+RIaw9gyuEafIgu4tbPpOx2s8+YiTa53NccAfdEGupGeHC6Ip7MLkNiVsM6EuuY8Q5LhbZdtmnVOBu3zpzy1ykl6v90jTdAzTjhIkV4h5bbHXPTKU0CsNyBBXrDa8ncCPgmS4rZZFPyHxS8/RPuYZFhK5VNyhbN30+7Ej2yxadGEp2UD3iAocx0ATV4u/CSgZZ/CZ/X3l3G2md+d/nEmSNBjaRCkwwm+XNs6zxe+LQywmXX9h2lsQNPu4NRq26vomBS6tqYE67zqGZySUjkXM3C1zwTCiW5fYknT0Xz/O40PnfMrnSPGgfuwpftim/tucHhlBexiEsvtVcydD4vG8L7tnp9hxiJchTJlEPLrQAKyk9dTnLUi+P+10ckoDDLpl5sjhe3GMibBuxajm00DxyxVhoX7J9O0d10LgcOINfFejb3dpFhml9eghSNl2wYYDi5KbyuTZHTh4Fbiq35GYyRUNiVW4xU8ncM6yVViGPTOFAZztsB8Q3eqC8n10V8pFEyp5lbWDa9cvXnbquL8RIafDYbqwnh70902G1JYQ14U33Dk6bmS+7oiIZJ0jH7KDsaDiWf+jVC14d8OMMIeV/c6rUnEQ8xa6q/NpQ71a+/f+Tcs9TVr5fa5SJvg0ds/B4wbohKfijWHo42ua/zmdd8W2BhdidIX5Vb/GrpIlTrOTBOMCM4fNXZR2rZ9R5vA5AAVdBd9WIq5rarexwNbiOiAkcYGGFZqHFizbyqeSDp3ahDezumg0ozyMBgPvYlZDlDHIsibBkbD9VQiVV5SMuu5lo1xn6eA5jEHXhtLSnQpZANJpG/knxZ6X2UzeumpA82yEHw6lh711gGxDfzUHYTBcy42NBElEa4q99J/u8B82vtF/RF93qrlHfJ5Bn5SW6Ckzrph6fvARnyrRRlUxBEHzvxcpz3F7dxIWZltneokuu0E5EyjjH7Iltvj4fw8bCoJU9N9jqphw2irDtWcMSMcvHCPXwEOjZwPw2YOKj+kL+Fp8QCTbLi4arargi37VwqyK+Fo85PZe/pzhU5cEmRcuJ3Wx7wvPEqDmYve4KYpJHRJ1xi6bwC5KK9tMRwK18pME5RimDzb9q0GivvKLeBwf07YGIakOG1O4ithUWRh/ZMtnatf2JMhjRa+/y/Wsswf4IPIxIfBJAOJFdcKrA02hiEOiBrSDZBoUuIh3Nao24eliq8pY7NPyKF4xp2lPiwBaTCjx5ByoBcMm8fGa2+NY/PwETA5OqBBCM7QdXw7lxlRG7Cb1TETKKhN0RmwFfAZ4JEqqOpXj+3xn+xFkJsPtSQdL1B3CRHIb90s05dfRj50fY1YEOdjKiZ8tvRpy876X+v3v5clBfNMevDFqBlkHmw/rr8EH84XZcctxjf0= X-Forefront-Antispam-Report: CIP:216.228.117.161;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:mail.nvidia.com;PTR:dc6edge2.nvidia.com;CAT:NONE;SFS:(13230040)(36860700013)(1800799024)(376014)(82310400026);DIR:OUT;SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jan 2025 16:07:32.9156 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: a3990a99-bc9d-426d-ced7-08dd30c7b9d7 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a;Ip=[216.228.117.161];Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: CO1PEPF000044FB.namprd21.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB6009 X-Patchwork-Delegate: kuba@kernel.org From: Moshe Shemesh Add packet reformat alloc and dealloc API 
functions to provide packet reformat actions for steering rules. Add HWS action pools for each of the following packet reformat types: - decapl3: decapsulate l3 tunnel to l2 - encapl2: encapsulate l2 to tunnel l2 - encapl3: encapsulate l2 to tunnel l3 - insert_hdr: insert header In addition cache remove header action for remove vlan header as this is currently the only use case of remove header action in the driver. Signed-off-by: Moshe Shemesh Reviewed-by: Yevgeny Kliteynik Reviewed-by: Mark Bloch Signed-off-by: Tariq Toukan --- .../net/ethernet/mellanox/mlx5/core/Makefile | 1 + .../net/ethernet/mellanox/mlx5/core/fs_core.h | 1 + .../ethernet/mellanox/mlx5/core/fs_counters.c | 5 +- .../net/ethernet/mellanox/mlx5/core/fs_pool.c | 5 +- .../net/ethernet/mellanox/mlx5/core/fs_pool.h | 5 +- .../mellanox/mlx5/core/steering/hws/fs_hws.c | 293 +++++++++++++++++- .../mellanox/mlx5/core/steering/hws/fs_hws.h | 12 + .../mlx5/core/steering/hws/fs_hws_pools.c | 238 ++++++++++++++ .../mlx5/core/steering/hws/fs_hws_pools.h | 48 +++ include/linux/mlx5/mlx5_ifc.h | 1 + 10 files changed, 598 insertions(+), 11 deletions(-) create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Makefile b/drivers/net/ethernet/mellanox/mlx5/core/Makefile index 0008b22417c8..d9a8817bb33c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/Makefile +++ b/drivers/net/ethernet/mellanox/mlx5/core/Makefile @@ -152,6 +152,7 @@ mlx5_core-$(CONFIG_MLX5_HW_STEERING) += steering/hws/cmd.o \ steering/hws/debug.o \ steering/hws/vport.o \ steering/hws/bwc_complex.o \ + steering/hws/fs_hws_pools.o \ steering/hws/fs_hws.o # diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h index 915cd3277dfb..b40e5310bef7 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h @@ -75,6 +75,7 @@ struct mlx5_pkt_reformat { enum mlx5_flow_resource_owner owner; union { struct mlx5_fs_dr_action fs_dr_action; + struct mlx5_fs_hws_action fs_hws_action; u32 id; }; }; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c index d8e1c4ebd364..94d9caacd50f 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c @@ -449,7 +449,8 @@ static void mlx5_fc_init(struct mlx5_fc *counter, struct mlx5_fc_bulk *bulk, counter->id = id; } -static struct mlx5_fs_bulk *mlx5_fc_bulk_create(struct mlx5_core_dev *dev) +static struct mlx5_fs_bulk *mlx5_fc_bulk_create(struct mlx5_core_dev *dev, + void *pool_ctx) { enum mlx5_fc_bulk_alloc_bitmask alloc_bitmask; struct mlx5_fc_bulk *fc_bulk; @@ -518,7 +519,7 @@ static const struct mlx5_fs_pool_ops mlx5_fc_pool_ops = { static void mlx5_fc_pool_init(struct mlx5_fs_pool *fc_pool, struct mlx5_core_dev *dev) { - mlx5_fs_pool_init(fc_pool, dev, &mlx5_fc_pool_ops); + mlx5_fs_pool_init(fc_pool, dev, &mlx5_fc_pool_ops, NULL); } static void mlx5_fc_pool_cleanup(struct mlx5_fs_pool *fc_pool) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_pool.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_pool.c index b891d7b9e3e0..f6c226664602 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_pool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_pool.c @@ -56,11 +56,12 @@ static int mlx5_fs_bulk_release_index(struct 
mlx5_fs_bulk *fs_bulk, int index) } void mlx5_fs_pool_init(struct mlx5_fs_pool *pool, struct mlx5_core_dev *dev, - const struct mlx5_fs_pool_ops *ops) + const struct mlx5_fs_pool_ops *ops, void *pool_ctx) { WARN_ON_ONCE(!ops || !ops->bulk_destroy || !ops->bulk_create || !ops->update_threshold); pool->dev = dev; + pool->pool_ctx = pool_ctx; mutex_init(&pool->pool_lock); INIT_LIST_HEAD(&pool->fully_used); INIT_LIST_HEAD(&pool->partially_used); @@ -91,7 +92,7 @@ mlx5_fs_pool_alloc_new_bulk(struct mlx5_fs_pool *fs_pool) struct mlx5_core_dev *dev = fs_pool->dev; struct mlx5_fs_bulk *new_bulk; - new_bulk = fs_pool->ops->bulk_create(dev); + new_bulk = fs_pool->ops->bulk_create(dev, fs_pool->pool_ctx); if (new_bulk) fs_pool->available_units += new_bulk->bulk_len; fs_pool->ops->update_threshold(fs_pool); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_pool.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_pool.h index 3b149863260c..f04ec3107498 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_pool.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_pool.h @@ -21,7 +21,8 @@ struct mlx5_fs_pool; struct mlx5_fs_pool_ops { int (*bulk_destroy)(struct mlx5_core_dev *dev, struct mlx5_fs_bulk *bulk); - struct mlx5_fs_bulk * (*bulk_create)(struct mlx5_core_dev *dev); + struct mlx5_fs_bulk * (*bulk_create)(struct mlx5_core_dev *dev, + void *pool_ctx); void (*update_threshold)(struct mlx5_fs_pool *pool); }; @@ -44,7 +45,7 @@ void mlx5_fs_bulk_cleanup(struct mlx5_fs_bulk *fs_bulk); int mlx5_fs_bulk_get_free_amount(struct mlx5_fs_bulk *bulk); void mlx5_fs_pool_init(struct mlx5_fs_pool *pool, struct mlx5_core_dev *dev, - const struct mlx5_fs_pool_ops *ops); + const struct mlx5_fs_pool_ops *ops, void *pool_ctx); void mlx5_fs_pool_cleanup(struct mlx5_fs_pool *pool); int mlx5_fs_pool_acquire_index(struct mlx5_fs_pool *fs_pool, struct mlx5_fs_pool_index *pool_index); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c index 5987710f8706..a584aa16d2d1 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c @@ -4,22 +4,31 @@ #include #include #include +#include "fs_hws_pools.h" #include "mlx5hws.h" #define MLX5HWS_CTX_MAX_NUM_OF_QUEUES 16 #define MLX5HWS_CTX_QUEUE_SIZE 256 -static int mlx5_fs_init_hws_actions_pool(struct mlx5_fs_hws_context *fs_ctx) +static struct mlx5hws_action * +mlx5_fs_create_action_remove_header_vlan(struct mlx5hws_context *ctx); +static void +mlx5_fs_destroy_pr_pool(struct mlx5_fs_pool *pool, struct xarray *pr_pools, + unsigned long index); + +static int mlx5_fs_init_hws_actions_pool(struct mlx5_core_dev *dev, + struct mlx5_fs_hws_context *fs_ctx) { u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB | MLX5HWS_ACTION_FLAG_SHARED; struct mlx5_fs_hws_actions_pool *hws_pool = &fs_ctx->hws_pool; struct mlx5hws_action_reformat_header reformat_hdr = {}; struct mlx5hws_context *ctx = fs_ctx->hws_ctx; enum mlx5hws_action_type action_type; + int err = -ENOSPC; hws_pool->tag_action = mlx5hws_action_create_tag(ctx, flags); if (!hws_pool->tag_action) - return -ENOSPC; + return err; hws_pool->pop_vlan_action = mlx5hws_action_create_pop_vlan(ctx, flags); if (!hws_pool->pop_vlan_action) goto destroy_tag; @@ -35,8 +44,28 @@ static int mlx5_fs_init_hws_actions_pool(struct mlx5_fs_hws_context *fs_ctx) &reformat_hdr, 0, flags); if (!hws_pool->decapl2_action) goto destroy_drop; + hws_pool->remove_hdr_vlan_action = + 
mlx5_fs_create_action_remove_header_vlan(ctx); + if (!hws_pool->remove_hdr_vlan_action) + goto destroy_decapl2; + err = mlx5_fs_hws_pr_pool_init(&hws_pool->insert_hdr_pool, dev, 0, + MLX5HWS_ACTION_TYP_INSERT_HEADER); + if (err) + goto destroy_remove_hdr; + err = mlx5_fs_hws_pr_pool_init(&hws_pool->dl3tnltol2_pool, dev, 0, + MLX5HWS_ACTION_TYP_REFORMAT_TNL_L3_TO_L2); + if (err) + goto cleanup_insert_hdr; + xa_init(&hws_pool->el2tol3tnl_pools); + xa_init(&hws_pool->el2tol2tnl_pools); return 0; +cleanup_insert_hdr: + mlx5_fs_hws_pr_pool_cleanup(&hws_pool->insert_hdr_pool); +destroy_remove_hdr: + mlx5hws_action_destroy(hws_pool->remove_hdr_vlan_action); +destroy_decapl2: + mlx5hws_action_destroy(hws_pool->decapl2_action); destroy_drop: mlx5hws_action_destroy(hws_pool->drop_action); destroy_push_vlan: @@ -45,13 +74,24 @@ static int mlx5_fs_init_hws_actions_pool(struct mlx5_fs_hws_context *fs_ctx) mlx5hws_action_destroy(hws_pool->pop_vlan_action); destroy_tag: mlx5hws_action_destroy(hws_pool->tag_action); - return -ENOSPC; + return err; } static void mlx5_fs_cleanup_hws_actions_pool(struct mlx5_fs_hws_context *fs_ctx) { struct mlx5_fs_hws_actions_pool *hws_pool = &fs_ctx->hws_pool; - + struct mlx5_fs_pool *pool; + unsigned long i; + + xa_for_each(&hws_pool->el2tol2tnl_pools, i, pool) + mlx5_fs_destroy_pr_pool(pool, &hws_pool->el2tol2tnl_pools, i); + xa_destroy(&hws_pool->el2tol2tnl_pools); + xa_for_each(&hws_pool->el2tol3tnl_pools, i, pool) + mlx5_fs_destroy_pr_pool(pool, &hws_pool->el2tol3tnl_pools, i); + xa_destroy(&hws_pool->el2tol3tnl_pools); + mlx5_fs_hws_pr_pool_cleanup(&hws_pool->dl3tnltol2_pool); + mlx5_fs_hws_pr_pool_cleanup(&hws_pool->insert_hdr_pool); + mlx5hws_action_destroy(hws_pool->remove_hdr_vlan_action); mlx5hws_action_destroy(hws_pool->decapl2_action); mlx5hws_action_destroy(hws_pool->drop_action); mlx5hws_action_destroy(hws_pool->push_vlan_action); @@ -74,7 +114,7 @@ static int mlx5_cmd_hws_create_ns(struct mlx5_flow_root_namespace *ns) mlx5_core_err(ns->dev, "Failed to create hws flow namespace\n"); return -EINVAL; } - err = mlx5_fs_init_hws_actions_pool(&ns->fs_hws_context); + err = mlx5_fs_init_hws_actions_pool(ns->dev, &ns->fs_hws_context); if (err) { mlx5_core_err(ns->dev, "Failed to init hws actions pool\n"); mlx5hws_context_close(ns->fs_hws_context.hws_ctx); @@ -251,6 +291,247 @@ static int mlx5_cmd_hws_destroy_flow_group(struct mlx5_flow_root_namespace *ns, return mlx5hws_bwc_matcher_destroy(fg->fs_hws_matcher.matcher); } +static struct mlx5hws_action * +mlx5_fs_create_action_remove_header_vlan(struct mlx5hws_context *ctx) +{ + u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB | MLX5HWS_ACTION_FLAG_SHARED; + struct mlx5hws_action_remove_header_attr remove_hdr_vlan = {}; + + /* MAC anchor not supported in HWS reformat, use VLAN anchor */ + remove_hdr_vlan.anchor = MLX5_REFORMAT_CONTEXT_ANCHOR_VLAN_START; + remove_hdr_vlan.offset = 0; + remove_hdr_vlan.size = sizeof(struct vlan_hdr); + return mlx5hws_action_create_remove_header(ctx, &remove_hdr_vlan, flags); +} + +static struct mlx5hws_action * +mlx5_fs_get_action_remove_header_vlan(struct mlx5_fs_hws_context *fs_ctx, + struct mlx5_pkt_reformat_params *params) +{ + if (!params || + params->param_0 != MLX5_REFORMAT_CONTEXT_ANCHOR_MAC_START || + params->param_1 != offsetof(struct vlan_ethhdr, h_vlan_proto) || + params->size != sizeof(struct vlan_hdr)) + return NULL; + + return fs_ctx->hws_pool.remove_hdr_vlan_action; +} + +static int +mlx5_fs_verify_insert_header_params(struct mlx5_core_dev *mdev, + struct mlx5_pkt_reformat_params 
*params) +{ + if ((!params->data && params->size) || (params->data && !params->size) || + MLX5_CAP_GEN_2(mdev, max_reformat_insert_size) < params->size || + MLX5_CAP_GEN_2(mdev, max_reformat_insert_offset) < params->param_1) { + mlx5_core_err(mdev, "Invalid reformat params for INSERT_HDR\n"); + return -EINVAL; + } + if (params->param_0 != MLX5_FS_INSERT_HDR_VLAN_ANCHOR || + params->param_1 != MLX5_FS_INSERT_HDR_VLAN_OFFSET || + params->size != MLX5_FS_INSERT_HDR_VLAN_SIZE) { + mlx5_core_err(mdev, "Only vlan insert header supported\n"); + return -EOPNOTSUPP; + } + return 0; +} + +static int +mlx5_fs_verify_encap_decap_params(struct mlx5_core_dev *dev, + struct mlx5_pkt_reformat_params *params) +{ + if (params->param_0 || params->param_1) { + mlx5_core_err(dev, "Invalid reformat params\n"); + return -EINVAL; + } + return 0; +} + +static struct mlx5_fs_pool * +mlx5_fs_get_pr_encap_pool(struct mlx5_core_dev *dev, struct xarray *pr_pools, + enum mlx5hws_action_type reformat_type, size_t size) +{ + struct mlx5_fs_pool *pr_pool; + unsigned long index = size; + int err; + + pr_pool = xa_load(pr_pools, index); + if (pr_pool) + return pr_pool; + + pr_pool = kzalloc(sizeof(*pr_pool), GFP_KERNEL); + if (!pr_pool) + return ERR_PTR(-ENOMEM); + err = mlx5_fs_hws_pr_pool_init(pr_pool, dev, size, reformat_type); + if (err) + goto free_pr_pool; + err = xa_insert(pr_pools, index, pr_pool, GFP_KERNEL); + if (err) + goto cleanup_pr_pool; + return pr_pool; + +cleanup_pr_pool: + mlx5_fs_hws_pr_pool_cleanup(pr_pool); +free_pr_pool: + kfree(pr_pool); + return ERR_PTR(err); +} + +static void +mlx5_fs_destroy_pr_pool(struct mlx5_fs_pool *pool, struct xarray *pr_pools, + unsigned long index) +{ + xa_erase(pr_pools, index); + mlx5_fs_hws_pr_pool_cleanup(pool); + kfree(pool); +} + +static int +mlx5_cmd_hws_packet_reformat_alloc(struct mlx5_flow_root_namespace *ns, + struct mlx5_pkt_reformat_params *params, + enum mlx5_flow_namespace_type namespace, + struct mlx5_pkt_reformat *pkt_reformat) +{ + struct mlx5_fs_hws_context *fs_ctx = &ns->fs_hws_context; + struct mlx5_fs_hws_actions_pool *hws_pool; + struct mlx5hws_action *hws_action = NULL; + struct mlx5_fs_hws_pr *pr_data = NULL; + struct mlx5_fs_pool *pr_pool = NULL; + struct mlx5_core_dev *dev = ns->dev; + u8 hdr_idx = 0; + int err; + + if (!params) + return -EINVAL; + + hws_pool = &fs_ctx->hws_pool; + + switch (params->type) { + case MLX5_REFORMAT_TYPE_L2_TO_VXLAN: + case MLX5_REFORMAT_TYPE_L2_TO_NVGRE: + case MLX5_REFORMAT_TYPE_L2_TO_L2_TUNNEL: + if (mlx5_fs_verify_encap_decap_params(dev, params)) + return -EINVAL; + pr_pool = mlx5_fs_get_pr_encap_pool(dev, &hws_pool->el2tol2tnl_pools, + MLX5HWS_ACTION_TYP_REFORMAT_L2_TO_TNL_L2, + params->size); + if (IS_ERR(pr_pool)) + return PTR_ERR(pr_pool); + break; + case MLX5_REFORMAT_TYPE_L2_TO_L3_TUNNEL: + if (mlx5_fs_verify_encap_decap_params(dev, params)) + return -EINVAL; + pr_pool = mlx5_fs_get_pr_encap_pool(dev, &hws_pool->el2tol3tnl_pools, + MLX5HWS_ACTION_TYP_REFORMAT_L2_TO_TNL_L3, + params->size); + if (IS_ERR(pr_pool)) + return PTR_ERR(pr_pool); + break; + case MLX5_REFORMAT_TYPE_L3_TUNNEL_TO_L2: + if (mlx5_fs_verify_encap_decap_params(dev, params)) + return -EINVAL; + pr_pool = &hws_pool->dl3tnltol2_pool; + hdr_idx = params->size == ETH_HLEN ? 
+ MLX5_FS_DL3TNLTOL2_MAC_HDR_IDX : + MLX5_FS_DL3TNLTOL2_MAC_VLAN_HDR_IDX; + break; + case MLX5_REFORMAT_TYPE_INSERT_HDR: + err = mlx5_fs_verify_insert_header_params(dev, params); + if (err) + return err; + pr_pool = &hws_pool->insert_hdr_pool; + break; + case MLX5_REFORMAT_TYPE_REMOVE_HDR: + hws_action = mlx5_fs_get_action_remove_header_vlan(fs_ctx, params); + if (!hws_action) + mlx5_core_err(dev, "Only vlan remove header supported\n"); + break; + default: + mlx5_core_err(ns->dev, "Packet-reformat not supported(%d)\n", + params->type); + return -EOPNOTSUPP; + } + + if (pr_pool) { + pr_data = mlx5_fs_hws_pr_pool_acquire_pr(pr_pool); + if (IS_ERR_OR_NULL(pr_data)) + return !pr_data ? -EINVAL : PTR_ERR(pr_data); + hws_action = pr_data->bulk->hws_action; + if (!hws_action) { + mlx5_core_err(dev, + "Failed allocating packet-reformat action\n"); + err = -EINVAL; + goto release_pr; + } + pr_data->data = kmemdup(params->data, params->size, GFP_KERNEL); + if (!pr_data->data) { + err = -ENOMEM; + goto release_pr; + } + pr_data->hdr_idx = hdr_idx; + pr_data->data_size = params->size; + pkt_reformat->fs_hws_action.pr_data = pr_data; + } + + pkt_reformat->owner = MLX5_FLOW_RESOURCE_OWNER_SW; + pkt_reformat->fs_hws_action.hws_action = hws_action; + return 0; + +release_pr: + if (pr_pool && pr_data) + mlx5_fs_hws_pr_pool_release_pr(pr_pool, pr_data); + return err; +} + +static void mlx5_cmd_hws_packet_reformat_dealloc(struct mlx5_flow_root_namespace *ns, + struct mlx5_pkt_reformat *pkt_reformat) +{ + struct mlx5_fs_hws_actions_pool *hws_pool = &ns->fs_hws_context.hws_pool; + struct mlx5_core_dev *dev = ns->dev; + struct mlx5_fs_hws_pr *pr_data; + struct mlx5_fs_pool *pr_pool; + + if (pkt_reformat->reformat_type == MLX5_REFORMAT_TYPE_REMOVE_HDR) + return; + + if (!pkt_reformat->fs_hws_action.pr_data) { + mlx5_core_err(ns->dev, "Failed release packet-reformat\n"); + return; + } + pr_data = pkt_reformat->fs_hws_action.pr_data; + + switch (pkt_reformat->reformat_type) { + case MLX5_REFORMAT_TYPE_L2_TO_VXLAN: + case MLX5_REFORMAT_TYPE_L2_TO_NVGRE: + case MLX5_REFORMAT_TYPE_L2_TO_L2_TUNNEL: + pr_pool = mlx5_fs_get_pr_encap_pool(dev, &hws_pool->el2tol2tnl_pools, + MLX5HWS_ACTION_TYP_REFORMAT_L2_TO_TNL_L2, + pr_data->data_size); + break; + case MLX5_REFORMAT_TYPE_L2_TO_L3_TUNNEL: + pr_pool = mlx5_fs_get_pr_encap_pool(dev, &hws_pool->el2tol2tnl_pools, + MLX5HWS_ACTION_TYP_REFORMAT_L2_TO_TNL_L2, + pr_data->data_size); + break; + case MLX5_REFORMAT_TYPE_L3_TUNNEL_TO_L2: + pr_pool = &hws_pool->dl3tnltol2_pool; + break; + case MLX5_REFORMAT_TYPE_INSERT_HDR: + pr_pool = &hws_pool->insert_hdr_pool; + break; + default: + mlx5_core_err(ns->dev, "Unknown packet-reformat type\n"); + return; + } + if (!pkt_reformat->fs_hws_action.pr_data || IS_ERR(pr_pool)) { + mlx5_core_err(ns->dev, "Failed release packet-reformat\n"); + return; + } + kfree(pr_data->data); + mlx5_fs_hws_pr_pool_release_pr(pr_pool, pr_data); + pkt_reformat->fs_hws_action.pr_data = NULL; +} + static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = { .create_flow_table = mlx5_cmd_hws_create_flow_table, .destroy_flow_table = mlx5_cmd_hws_destroy_flow_table, @@ -258,6 +539,8 @@ static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = { .update_root_ft = mlx5_cmd_hws_update_root_ft, .create_flow_group = mlx5_cmd_hws_create_flow_group, .destroy_flow_group = mlx5_cmd_hws_destroy_flow_group, + .packet_reformat_alloc = mlx5_cmd_hws_packet_reformat_alloc, + .packet_reformat_dealloc = mlx5_cmd_hws_packet_reformat_dealloc, .create_ns = mlx5_cmd_hws_create_ns, 
.destroy_ns = mlx5_cmd_hws_destroy_ns, .set_peer = mlx5_cmd_hws_set_peer, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h index a2580b39d728..19786838f6d6 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h @@ -5,6 +5,7 @@ #define _MLX5_FS_HWS_ #include "mlx5hws.h" +#include "fs_hws_pools.h" struct mlx5_fs_hws_actions_pool { struct mlx5hws_action *tag_action; @@ -12,6 +13,11 @@ struct mlx5_fs_hws_actions_pool { struct mlx5hws_action *push_vlan_action; struct mlx5hws_action *drop_action; struct mlx5hws_action *decapl2_action; + struct mlx5hws_action *remove_hdr_vlan_action; + struct mlx5_fs_pool insert_hdr_pool; + struct mlx5_fs_pool dl3tnltol2_pool; + struct xarray el2tol3tnl_pools; + struct xarray el2tol2tnl_pools; }; struct mlx5_fs_hws_context { @@ -24,6 +30,12 @@ struct mlx5_fs_hws_table { bool miss_ft_set; }; +struct mlx5_fs_hws_action { + struct mlx5hws_action *hws_action; + struct mlx5_fs_pool *fs_pool; + struct mlx5_fs_hws_pr *pr_data; +}; + struct mlx5_fs_hws_matcher { struct mlx5hws_bwc_matcher *matcher; }; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c new file mode 100644 index 000000000000..b12b96c94dae --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c @@ -0,0 +1,238 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* Copyright (c) 2025 NVIDIA Corporation & Affiliates */ + +#include +#include "fs_hws_pools.h" + +#define MLX5_FS_HWS_DEFAULT_BULK_LEN 65536 +#define MLX5_FS_HWS_POOL_MAX_THRESHOLD BIT(18) +#define MLX5_FS_HWS_POOL_USED_BUFF_RATIO 10 + +static struct mlx5hws_action * +mlx5_fs_dl3tnltol2_bulk_action_create(struct mlx5hws_context *ctx) +{ + struct mlx5hws_action_reformat_header reformat_hdr[2] = {}; + u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB; + enum mlx5hws_action_type reformat_type; + u32 log_bulk_size; + + reformat_type = MLX5HWS_ACTION_TYP_REFORMAT_TNL_L3_TO_L2; + reformat_hdr[MLX5_FS_DL3TNLTOL2_MAC_HDR_IDX].sz = ETH_HLEN; + reformat_hdr[MLX5_FS_DL3TNLTOL2_MAC_VLAN_HDR_IDX].sz = ETH_HLEN + VLAN_HLEN; + + log_bulk_size = ilog2(MLX5_FS_HWS_DEFAULT_BULK_LEN); + return mlx5hws_action_create_reformat(ctx, reformat_type, 2, + reformat_hdr, log_bulk_size, flags); +} + +static struct mlx5hws_action * +mlx5_fs_el2tol3tnl_bulk_action_create(struct mlx5hws_context *ctx, size_t data_size) +{ + struct mlx5hws_action_reformat_header reformat_hdr = {}; + u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB; + enum mlx5hws_action_type reformat_type; + u32 log_bulk_size; + + reformat_type = MLX5HWS_ACTION_TYP_REFORMAT_L2_TO_TNL_L3; + reformat_hdr.sz = data_size; + + log_bulk_size = ilog2(MLX5_FS_HWS_DEFAULT_BULK_LEN); + return mlx5hws_action_create_reformat(ctx, reformat_type, 1, + &reformat_hdr, log_bulk_size, flags); +} + +static struct mlx5hws_action * +mlx5_fs_el2tol2tnl_bulk_action_create(struct mlx5hws_context *ctx, size_t data_size) +{ + struct mlx5hws_action_reformat_header reformat_hdr = {}; + u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB; + enum mlx5hws_action_type reformat_type; + u32 log_bulk_size; + + reformat_type = MLX5HWS_ACTION_TYP_REFORMAT_L2_TO_TNL_L2; + reformat_hdr.sz = data_size; + + log_bulk_size = ilog2(MLX5_FS_HWS_DEFAULT_BULK_LEN); + return mlx5hws_action_create_reformat(ctx, reformat_type, 1, + &reformat_hdr, log_bulk_size, flags); +} + +static 
struct mlx5hws_action * +mlx5_fs_insert_hdr_bulk_action_create(struct mlx5hws_context *ctx) +{ + struct mlx5hws_action_insert_header insert_hdr = {}; + u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB; + u32 log_bulk_size; + + log_bulk_size = ilog2(MLX5_FS_HWS_DEFAULT_BULK_LEN); + insert_hdr.hdr.sz = MLX5_FS_INSERT_HDR_VLAN_SIZE; + insert_hdr.anchor = MLX5_FS_INSERT_HDR_VLAN_ANCHOR; + insert_hdr.offset = MLX5_FS_INSERT_HDR_VLAN_OFFSET; + + return mlx5hws_action_create_insert_header(ctx, 1, &insert_hdr, + log_bulk_size, flags); +} + +static struct mlx5hws_action * +mlx5_fs_pr_bulk_action_create(struct mlx5_core_dev *dev, + struct mlx5_fs_hws_pr_pool_ctx *pr_pool_ctx) +{ + struct mlx5_flow_root_namespace *root_ns; + struct mlx5hws_context *ctx; + size_t encap_data_size; + + root_ns = mlx5_get_root_namespace(dev, MLX5_FLOW_NAMESPACE_FDB); + if (!root_ns || root_ns->mode != MLX5_FLOW_STEERING_MODE_HMFS) + return NULL; + + ctx = root_ns->fs_hws_context.hws_ctx; + if (!ctx) + return NULL; + + encap_data_size = pr_pool_ctx->encap_data_size; + switch (pr_pool_ctx->reformat_type) { + case MLX5HWS_ACTION_TYP_REFORMAT_TNL_L3_TO_L2: + return mlx5_fs_dl3tnltol2_bulk_action_create(ctx); + case MLX5HWS_ACTION_TYP_REFORMAT_L2_TO_TNL_L3: + return mlx5_fs_el2tol3tnl_bulk_action_create(ctx, encap_data_size); + case MLX5HWS_ACTION_TYP_REFORMAT_L2_TO_TNL_L2: + return mlx5_fs_el2tol2tnl_bulk_action_create(ctx, encap_data_size); + case MLX5HWS_ACTION_TYP_INSERT_HEADER: + return mlx5_fs_insert_hdr_bulk_action_create(ctx); + default: + return NULL; + } + return NULL; +} + +static struct mlx5_fs_bulk * +mlx5_fs_hws_pr_bulk_create(struct mlx5_core_dev *dev, void *pool_ctx) +{ + struct mlx5_fs_hws_pr_pool_ctx *pr_pool_ctx; + struct mlx5_fs_hws_pr_bulk *pr_bulk; + int bulk_len; + int i; + + if (!pool_ctx) + return NULL; + pr_pool_ctx = pool_ctx; + bulk_len = MLX5_FS_HWS_DEFAULT_BULK_LEN; + pr_bulk = kvzalloc(struct_size(pr_bulk, prs_data, bulk_len), GFP_KERNEL); + if (!pr_bulk) + return NULL; + + if (mlx5_fs_bulk_init(dev, &pr_bulk->fs_bulk, bulk_len)) + goto free_pr_bulk; + + for (i = 0; i < bulk_len; i++) { + pr_bulk->prs_data[i].bulk = pr_bulk; + pr_bulk->prs_data[i].offset = i; + } + + pr_bulk->hws_action = mlx5_fs_pr_bulk_action_create(dev, pr_pool_ctx); + if (!pr_bulk->hws_action) + goto cleanup_fs_bulk; + + return &pr_bulk->fs_bulk; + +cleanup_fs_bulk: + mlx5_fs_bulk_cleanup(&pr_bulk->fs_bulk); +free_pr_bulk: + kvfree(pr_bulk); + return NULL; +} + +static int +mlx5_fs_hws_pr_bulk_destroy(struct mlx5_core_dev *dev, struct mlx5_fs_bulk *fs_bulk) +{ + struct mlx5_fs_hws_pr_bulk *pr_bulk; + + pr_bulk = container_of(fs_bulk, struct mlx5_fs_hws_pr_bulk, fs_bulk); + if (mlx5_fs_bulk_get_free_amount(fs_bulk) < fs_bulk->bulk_len) { + mlx5_core_err(dev, "Freeing bulk before all reformats were released\n"); + return -EBUSY; + } + + mlx5hws_action_destroy(pr_bulk->hws_action); + mlx5_fs_bulk_cleanup(fs_bulk); + kvfree(pr_bulk); + + return 0; +} + +static void mlx5_hws_pool_update_threshold(struct mlx5_fs_pool *hws_pool) +{ + hws_pool->threshold = min_t(int, MLX5_FS_HWS_POOL_MAX_THRESHOLD, + hws_pool->used_units / MLX5_FS_HWS_POOL_USED_BUFF_RATIO); +} + +static const struct mlx5_fs_pool_ops mlx5_fs_hws_pr_pool_ops = { + .bulk_create = mlx5_fs_hws_pr_bulk_create, + .bulk_destroy = mlx5_fs_hws_pr_bulk_destroy, + .update_threshold = mlx5_hws_pool_update_threshold, +}; + +int mlx5_fs_hws_pr_pool_init(struct mlx5_fs_pool *pr_pool, + struct mlx5_core_dev *dev, size_t encap_data_size, + enum mlx5hws_action_type reformat_type) +{ + struct 
mlx5_fs_hws_pr_pool_ctx *pr_pool_ctx; + + if (reformat_type != MLX5HWS_ACTION_TYP_INSERT_HEADER && + reformat_type != MLX5HWS_ACTION_TYP_REFORMAT_TNL_L3_TO_L2 && + reformat_type != MLX5HWS_ACTION_TYP_REFORMAT_L2_TO_TNL_L3 && + reformat_type != MLX5HWS_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) + return -EOPNOTSUPP; + + pr_pool_ctx = kzalloc(sizeof(*pr_pool_ctx), GFP_KERNEL); + if (!pr_pool_ctx) + return -ENOMEM; + pr_pool_ctx->reformat_type = reformat_type; + pr_pool_ctx->encap_data_size = encap_data_size; + mlx5_fs_pool_init(pr_pool, dev, &mlx5_fs_hws_pr_pool_ops, pr_pool_ctx); + return 0; +} + +void mlx5_fs_hws_pr_pool_cleanup(struct mlx5_fs_pool *pr_pool) +{ + struct mlx5_fs_hws_pr_pool_ctx *pr_pool_ctx; + + mlx5_fs_pool_cleanup(pr_pool); + pr_pool_ctx = pr_pool->pool_ctx; + if (!pr_pool_ctx) + return; + kfree(pr_pool_ctx); +} + +struct mlx5_fs_hws_pr * +mlx5_fs_hws_pr_pool_acquire_pr(struct mlx5_fs_pool *pr_pool) +{ + struct mlx5_fs_pool_index pool_index = {}; + struct mlx5_fs_hws_pr_bulk *pr_bulk; + int err; + + err = mlx5_fs_pool_acquire_index(pr_pool, &pool_index); + if (err) + return ERR_PTR(err); + pr_bulk = container_of(pool_index.fs_bulk, struct mlx5_fs_hws_pr_bulk, + fs_bulk); + return &pr_bulk->prs_data[pool_index.index]; +} + +void mlx5_fs_hws_pr_pool_release_pr(struct mlx5_fs_pool *pr_pool, + struct mlx5_fs_hws_pr *pr_data) +{ + struct mlx5_fs_bulk *fs_bulk = &pr_data->bulk->fs_bulk; + struct mlx5_fs_pool_index pool_index = {}; + struct mlx5_core_dev *dev = pr_pool->dev; + + pool_index.fs_bulk = fs_bulk; + pool_index.index = pr_data->offset; + if (mlx5_fs_pool_release_index(pr_pool, &pool_index)) + mlx5_core_warn(dev, "Attempted to release packet reformat which is not acquired\n"); +} + +struct mlx5hws_action *mlx5_fs_hws_pr_get_action(struct mlx5_fs_hws_pr *pr_data) +{ + return pr_data->bulk->hws_action; +} diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h new file mode 100644 index 000000000000..544b277be3c5 --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h @@ -0,0 +1,48 @@ +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ +/* Copyright (c) 2025 NVIDIA Corporation & Affiliates */ + +#ifndef __MLX5_FS_HWS_POOLS_H__ +#define __MLX5_FS_HWS_POOLS_H__ + +#include +#include "fs_pool.h" +#include "fs_core.h" + +#define MLX5_FS_INSERT_HDR_VLAN_ANCHOR MLX5_REFORMAT_CONTEXT_ANCHOR_MAC_START +#define MLX5_FS_INSERT_HDR_VLAN_OFFSET offsetof(struct vlan_ethhdr, h_vlan_proto) +#define MLX5_FS_INSERT_HDR_VLAN_SIZE sizeof(struct vlan_hdr) + +enum { + MLX5_FS_DL3TNLTOL2_MAC_HDR_IDX = 0, + MLX5_FS_DL3TNLTOL2_MAC_VLAN_HDR_IDX, +}; + +struct mlx5_fs_hws_pr { + struct mlx5_fs_hws_pr_bulk *bulk; + u32 offset; + u8 hdr_idx; + u8 *data; + size_t data_size; +}; + +struct mlx5_fs_hws_pr_bulk { + struct mlx5_fs_bulk fs_bulk; + struct mlx5hws_action *hws_action; + struct mlx5_fs_hws_pr prs_data[]; +}; + +struct mlx5_fs_hws_pr_pool_ctx { + enum mlx5hws_action_type reformat_type; + size_t encap_data_size; +}; + +int mlx5_fs_hws_pr_pool_init(struct mlx5_fs_pool *pr_pool, + struct mlx5_core_dev *dev, size_t encap_data_size, + enum mlx5hws_action_type reformat_type); +void mlx5_fs_hws_pr_pool_cleanup(struct mlx5_fs_pool *pr_pool); + +struct mlx5_fs_hws_pr *mlx5_fs_hws_pr_pool_acquire_pr(struct mlx5_fs_pool *pr_pool); +void mlx5_fs_hws_pr_pool_release_pr(struct mlx5_fs_pool *pr_pool, + struct mlx5_fs_hws_pr *pr_data); +struct mlx5hws_action 
*mlx5_fs_hws_pr_get_action(struct mlx5_fs_hws_pr *pr_data); +#endif /* __MLX5_FS_HWS_POOLS_H__ */ diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h index 370f533da107..bb99a35fc6a2 100644 --- a/include/linux/mlx5/mlx5_ifc.h +++ b/include/linux/mlx5/mlx5_ifc.h @@ -7025,6 +7025,7 @@ struct mlx5_ifc_alloc_packet_reformat_context_out_bits { enum { MLX5_REFORMAT_CONTEXT_ANCHOR_MAC_START = 0x1, + MLX5_REFORMAT_CONTEXT_ANCHOR_VLAN_START = 0x2, MLX5_REFORMAT_CONTEXT_ANCHOR_IP_START = 0x7, MLX5_REFORMAT_CONTEXT_ANCHOR_TCP_UDP_START = 0x9, };
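
To make the new pool API concrete, here is a minimal usage sketch (the caller below is hypothetical and not part of the patch; only functions and fields declared in fs_hws_pools.h above are used): acquire one packet-reformat entry from a pool, use the bulk's shared HWS action, and release the entry when the rule is torn down.

/* Illustrative sketch only: every entry of a bulk maps to the same
 * bulk-sized mlx5hws_action; pr_data->offset selects the slot in it.
 */
static int example_use_pr_pool(struct mlx5_fs_pool *pr_pool)
{
	struct mlx5_fs_hws_pr *pr_data;
	struct mlx5hws_action *action;

	pr_data = mlx5_fs_hws_pr_pool_acquire_pr(pr_pool);
	if (IS_ERR_OR_NULL(pr_data))
		return pr_data ? PTR_ERR(pr_data) : -EINVAL;

	action = mlx5_fs_hws_pr_get_action(pr_data);
	if (!action) {
		mlx5_fs_hws_pr_pool_release_pr(pr_pool, pr_data);
		return -EINVAL;
	}

	/* ... a rule would reference 'action' at offset pr_data->offset ... */

	mlx5_fs_hws_pr_pool_release_pr(pr_pool, pr_data);
	return 0;
}
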
From patchwork Thu Jan 9 16:05:37 2025

From: Tariq Toukan 
To: "David S. 
Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn" CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , Mark Bloch , Moshe Shemesh , Yevgeny Kliteynik , Tariq Toukan Subject: [PATCH net-next V2 06/15] net/mlx5: fs, add HWS modify header API function Date: Thu, 9 Jan 2025 18:05:37 +0200 Message-ID: <20250109160546.1733647-7-tariqt@nvidia.com> X-Mailer: git-send-email 2.45.0 In-Reply-To: <20250109160546.1733647-1-tariqt@nvidia.com> References: <20250109160546.1733647-1-tariqt@nvidia.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-NV-OnPremToCloud: AnonymousSubmission X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: DS1PEPF0001708F:EE_|PH8PR12MB7277:EE_ X-MS-Office365-Filtering-Correlation-Id: 0c2c5873-60e8-466b-4abd-08dd30c7b62b X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0;ARA:13230040|36860700013|82310400026|1800799024|376014; X-Microsoft-Antispam-Message-Info: /Q2TV6gMAtZKKUX87713O+uqnaGxXxeGG0MGnoanZhH5Cnml0kV12+8vvKGL4/gV/IHfenDuD0SAjMOrinGBFGQWKmxFvC8eVP46KcH8kgvf0otClHUWqiplGh/gcHILsR4+p8AAep5owYscSKhkfOrbSgFqTSc/bfJKxgVYAJz+ANVJ7UVW8MUljHZ/1WuG6oMUz+jnHhql9ZDFd3kFbKAqHQCrNBhrf4Fo6rMvSe+LD5XRXCNoiCo1E1xEWXgLzDPYHmMQOmrwINRgpzxaI0FEWaDwRlrHk8l2/uqAMqqmffx0gjpXVuirfHGxnANdAGWU8I+wFuybOOas9yTYkh06x4jO9fGKAg9D7eyYVmHjtSS6h5s7T7Jpjz1fewuO+1k7vcyYOdbedb+dNeArEhIAcOjFML0vDBfjrf2cDHFlAOD+loKX6mqRUz3P3MUobFA/CIpwXcudVCXFUHEqkSYhCbfA9NqUcvUbx0f6/0Wt6YWqfdSvCyWeJljPO6xjveYfpRNMSYrWLc2xD72KFHkWHHQIkyOFHfdNg2vz3baPRn3Algdq5w73MLqxAGllckDRYznckeSPIgvsvsUrNM0ze64quvQ6yk6OtTE1snf9A0J/ItDz55IuROo6oXtU7kriuiUOq5OdNTET4NrNxIjCNr1ednwFX5ojLRXIENk9antPSLqqRwDdwsrxfxZiitbIklKJRrNoR5IzNLMAiL5VKSg16DbosyuUNxXih4Hp/e2YE1OiSGrOw7TxVBkViy+TacpwFQa/1ND/zY8Nr2EIHGDCD0cXrz5fvFy7JcYjVXha+34V12ItGatMP2f/6nPaIU8ZyGcr2Xtvswokg6YbIGOaKmZKx8AcswAueRCeRUz5+/7248/YAqdrbJa+flUQw4oNvldZFV7h6jG3PgLH+6hqeqT6ZyjzDba4oTczcg8pxnLrShpMRncp960cTBVZqZ+osHGcANDAWWeWZpBUuLVLelYqKYFbYATib4SgXePWfgtX6k0u52pGKLAeDzngrunhnGNLQHXurIeSvYS2e6aNK6YzvXgPeh0okJxRko4Dvwn1otuuEs8a8jcRHpOs+EyK7hp4em8mRWWUFOCS2HT5TwjoS/gVtnVJW+96vOuzlgNDCcAuTEL+8Cip50qRGWDcEOpBNksR5pM0j3jSM5MoJSACati0IP8zg7fPZGKlitdi0nJGQnHyslfvaA1tdSUGm4Jp4SDyYvRFlPLan2isPPLaVwoU9UeUnRYegA5/vF1NWBvUljAG9+XInB0wlI0dhAkb/H2/XQ6nLU7m69qgBgte/n70Di8LJRgubTtdg/5pvuL/ORYamjnP7WPWY1YCrenNlGdAiWYQpAvAypWgx+91KGdBFtcDOCw2+S93h1UZNdm5+QQkGhrZ9od08t3/kmfeipDQ24IoqZBJCPaVEx51nUmq+ZHhW1KBJGWZQns/JiZg/mkaUKCvrGSiv21ffU4p8a6277r1aV97z+l5eJtGrpfT7ZF4Hzk= X-Forefront-Antispam-Report: CIP:216.228.117.160;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:mail.nvidia.com;PTR:dc6edge1.nvidia.com;CAT:NONE;SFS:(13230040)(36860700013)(82310400026)(1800799024)(376014);DIR:OUT;SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jan 2025 16:07:26.7362 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 0c2c5873-60e8-466b-4abd-08dd30c7b62b X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a;Ip=[216.228.117.160];Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: DS1PEPF0001708F.namprd03.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH8PR12MB7277 X-Patchwork-Delegate: kuba@kernel.org From: Moshe Shemesh Add modify header alloc and dealloc API 
functions to provide modify header actions for steering rules. Use fs hws pools to get actions from shared bulks of modify header actions. Signed-off-by: Moshe Shemesh Reviewed-by: Yevgeny Kliteynik Reviewed-by: Mark Bloch Signed-off-by: Tariq Toukan --- .../net/ethernet/mellanox/mlx5/core/fs_core.h | 1 + .../mellanox/mlx5/core/steering/hws/fs_hws.c | 120 +++++++++++++ .../mellanox/mlx5/core/steering/hws/fs_hws.h | 2 + .../mlx5/core/steering/hws/fs_hws_pools.c | 165 ++++++++++++++++++ .../mlx5/core/steering/hws/fs_hws_pools.h | 22 +++ 5 files changed, 310 insertions(+) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h index b40e5310bef7..5875364cef4b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h @@ -65,6 +65,7 @@ struct mlx5_modify_hdr { enum mlx5_flow_resource_owner owner; union { struct mlx5_fs_dr_action fs_dr_action; + struct mlx5_fs_hws_action fs_hws_action; u32 id; }; }; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c index a584aa16d2d1..543a7b2f0dff 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c @@ -15,6 +15,9 @@ mlx5_fs_create_action_remove_header_vlan(struct mlx5hws_context *ctx); static void mlx5_fs_destroy_pr_pool(struct mlx5_fs_pool *pool, struct xarray *pr_pools, unsigned long index); +static void +mlx5_fs_destroy_mh_pool(struct mlx5_fs_pool *pool, struct xarray *mh_pools, + unsigned long index); static int mlx5_fs_init_hws_actions_pool(struct mlx5_core_dev *dev, struct mlx5_fs_hws_context *fs_ctx) @@ -58,6 +61,7 @@ static int mlx5_fs_init_hws_actions_pool(struct mlx5_core_dev *dev, goto cleanup_insert_hdr; xa_init(&hws_pool->el2tol3tnl_pools); xa_init(&hws_pool->el2tol2tnl_pools); + xa_init(&hws_pool->mh_pools); return 0; cleanup_insert_hdr: @@ -83,6 +87,9 @@ static void mlx5_fs_cleanup_hws_actions_pool(struct mlx5_fs_hws_context *fs_ctx) struct mlx5_fs_pool *pool; unsigned long i; + xa_for_each(&hws_pool->mh_pools, i, pool) + mlx5_fs_destroy_mh_pool(pool, &hws_pool->mh_pools, i); + xa_destroy(&hws_pool->mh_pools); xa_for_each(&hws_pool->el2tol2tnl_pools, i, pool) mlx5_fs_destroy_pr_pool(pool, &hws_pool->el2tol2tnl_pools, i); xa_destroy(&hws_pool->el2tol2tnl_pools); @@ -532,6 +539,117 @@ static void mlx5_cmd_hws_packet_reformat_dealloc(struct mlx5_flow_root_namespace pkt_reformat->fs_hws_action.pr_data = NULL; } +static struct mlx5_fs_pool * +mlx5_fs_create_mh_pool(struct mlx5_core_dev *dev, + struct mlx5hws_action_mh_pattern *pattern, + struct xarray *mh_pools, unsigned long index) +{ + struct mlx5_fs_pool *pool; + int err; + + pool = kzalloc(sizeof(*pool), GFP_KERNEL); + if (!pool) + return ERR_PTR(-ENOMEM); + err = mlx5_fs_hws_mh_pool_init(pool, dev, pattern); + if (err) + goto free_pool; + err = xa_insert(mh_pools, index, pool, GFP_KERNEL); + if (err) + goto cleanup_pool; + return pool; + +cleanup_pool: + mlx5_fs_hws_mh_pool_cleanup(pool); +free_pool: + kfree(pool); + return ERR_PTR(err); +} + +static void +mlx5_fs_destroy_mh_pool(struct mlx5_fs_pool *pool, struct xarray *mh_pools, + unsigned long index) +{ + xa_erase(mh_pools, index); + mlx5_fs_hws_mh_pool_cleanup(pool); + kfree(pool); +} + +static int mlx5_cmd_hws_modify_header_alloc(struct mlx5_flow_root_namespace *ns, + u8 namespace, u8 num_actions, + void *modify_actions, + struct mlx5_modify_hdr 
*modify_hdr) +{ + struct mlx5_fs_hws_actions_pool *hws_pool = &ns->fs_hws_context.hws_pool; + struct mlx5hws_action_mh_pattern pattern = {}; + struct mlx5_fs_hws_mh *mh_data = NULL; + struct mlx5hws_action *hws_action; + struct mlx5_fs_pool *pool; + unsigned long i, cnt = 0; + bool known_pattern; + int err; + + pattern.sz = MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto) * num_actions; + pattern.data = modify_actions; + + known_pattern = false; + xa_for_each(&hws_pool->mh_pools, i, pool) { + if (mlx5_fs_hws_mh_pool_match(pool, &pattern)) { + known_pattern = true; + break; + } + cnt++; + } + + if (!known_pattern) { + pool = mlx5_fs_create_mh_pool(ns->dev, &pattern, + &hws_pool->mh_pools, cnt); + if (IS_ERR(pool)) + return PTR_ERR(pool); + } + mh_data = mlx5_fs_hws_mh_pool_acquire_mh(pool); + if (IS_ERR(mh_data)) { + err = PTR_ERR(mh_data); + goto destroy_pool; + } + hws_action = mh_data->bulk->hws_action; + mh_data->data = kmemdup(pattern.data, pattern.sz, GFP_KERNEL); + if (!mh_data->data) { + err = -ENOMEM; + goto release_mh; + } + modify_hdr->fs_hws_action.mh_data = mh_data; + modify_hdr->fs_hws_action.fs_pool = pool; + modify_hdr->owner = MLX5_FLOW_RESOURCE_OWNER_SW; + modify_hdr->fs_hws_action.hws_action = hws_action; + + return 0; + +release_mh: + mlx5_fs_hws_mh_pool_release_mh(pool, mh_data); +destroy_pool: + if (!known_pattern) + mlx5_fs_destroy_mh_pool(pool, &hws_pool->mh_pools, cnt); + return err; +} + +static void mlx5_cmd_hws_modify_header_dealloc(struct mlx5_flow_root_namespace *ns, + struct mlx5_modify_hdr *modify_hdr) +{ + struct mlx5_fs_hws_mh *mh_data; + struct mlx5_fs_pool *pool; + + if (!modify_hdr->fs_hws_action.fs_pool || !modify_hdr->fs_hws_action.mh_data) { + mlx5_core_err(ns->dev, "Failed release modify-header\n"); + return; + } + + mh_data = modify_hdr->fs_hws_action.mh_data; + kfree(mh_data->data); + pool = modify_hdr->fs_hws_action.fs_pool; + mlx5_fs_hws_mh_pool_release_mh(pool, mh_data); + modify_hdr->fs_hws_action.mh_data = NULL; +} + static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = { .create_flow_table = mlx5_cmd_hws_create_flow_table, .destroy_flow_table = mlx5_cmd_hws_destroy_flow_table, @@ -541,6 +659,8 @@ static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = { .destroy_flow_group = mlx5_cmd_hws_destroy_flow_group, .packet_reformat_alloc = mlx5_cmd_hws_packet_reformat_alloc, .packet_reformat_dealloc = mlx5_cmd_hws_packet_reformat_dealloc, + .modify_header_alloc = mlx5_cmd_hws_modify_header_alloc, + .modify_header_dealloc = mlx5_cmd_hws_modify_header_dealloc, .create_ns = mlx5_cmd_hws_create_ns, .destroy_ns = mlx5_cmd_hws_destroy_ns, .set_peer = mlx5_cmd_hws_set_peer, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h index 19786838f6d6..1e53c0156338 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h @@ -18,6 +18,7 @@ struct mlx5_fs_hws_actions_pool { struct mlx5_fs_pool dl3tnltol2_pool; struct xarray el2tol3tnl_pools; struct xarray el2tol2tnl_pools; + struct xarray mh_pools; }; struct mlx5_fs_hws_context { @@ -34,6 +35,7 @@ struct mlx5_fs_hws_action { struct mlx5hws_action *hws_action; struct mlx5_fs_pool *fs_pool; struct mlx5_fs_hws_pr *pr_data; + struct mlx5_fs_hws_mh *mh_data; }; struct mlx5_fs_hws_matcher { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c index 
b12b96c94dae..2a2175b6cfc0 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c @@ -236,3 +236,168 @@ struct mlx5hws_action *mlx5_fs_hws_pr_get_action(struct mlx5_fs_hws_pr *pr_data) { return pr_data->bulk->hws_action; } + +static struct mlx5hws_action * +mlx5_fs_mh_bulk_action_create(struct mlx5hws_context *ctx, + struct mlx5hws_action_mh_pattern *pattern) +{ + u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB; + u32 log_bulk_size; + + log_bulk_size = ilog2(MLX5_FS_HWS_DEFAULT_BULK_LEN); + return mlx5hws_action_create_modify_header(ctx, 1, pattern, + log_bulk_size, flags); +} + +static struct mlx5_fs_bulk * +mlx5_fs_hws_mh_bulk_create(struct mlx5_core_dev *dev, void *pool_ctx) +{ + struct mlx5hws_action_mh_pattern *pattern; + struct mlx5_flow_root_namespace *root_ns; + struct mlx5_fs_hws_mh_bulk *mh_bulk; + struct mlx5hws_context *ctx; + int bulk_len; + + if (!pool_ctx) + return NULL; + + root_ns = mlx5_get_root_namespace(dev, MLX5_FLOW_NAMESPACE_FDB); + if (!root_ns || root_ns->mode != MLX5_FLOW_STEERING_MODE_HMFS) + return NULL; + + ctx = root_ns->fs_hws_context.hws_ctx; + if (!ctx) + return NULL; + + pattern = pool_ctx; + bulk_len = MLX5_FS_HWS_DEFAULT_BULK_LEN; + mh_bulk = kvzalloc(struct_size(mh_bulk, mhs_data, bulk_len), GFP_KERNEL); + if (!mh_bulk) + return NULL; + + if (mlx5_fs_bulk_init(dev, &mh_bulk->fs_bulk, bulk_len)) + goto free_mh_bulk; + + for (int i = 0; i < bulk_len; i++) { + mh_bulk->mhs_data[i].bulk = mh_bulk; + mh_bulk->mhs_data[i].offset = i; + } + + mh_bulk->hws_action = mlx5_fs_mh_bulk_action_create(ctx, pattern); + if (!mh_bulk->hws_action) + goto cleanup_fs_bulk; + + return &mh_bulk->fs_bulk; + +cleanup_fs_bulk: + mlx5_fs_bulk_cleanup(&mh_bulk->fs_bulk); +free_mh_bulk: + kvfree(mh_bulk); + return NULL; +} + +static int +mlx5_fs_hws_mh_bulk_destroy(struct mlx5_core_dev *dev, + struct mlx5_fs_bulk *fs_bulk) +{ + struct mlx5_fs_hws_mh_bulk *mh_bulk; + + mh_bulk = container_of(fs_bulk, struct mlx5_fs_hws_mh_bulk, fs_bulk); + if (mlx5_fs_bulk_get_free_amount(fs_bulk) < fs_bulk->bulk_len) { + mlx5_core_err(dev, "Freeing bulk before all modify header were released\n"); + return -EBUSY; + } + + mlx5hws_action_destroy(mh_bulk->hws_action); + mlx5_fs_bulk_cleanup(fs_bulk); + kvfree(mh_bulk); + + return 0; +} + +static const struct mlx5_fs_pool_ops mlx5_fs_hws_mh_pool_ops = { + .bulk_create = mlx5_fs_hws_mh_bulk_create, + .bulk_destroy = mlx5_fs_hws_mh_bulk_destroy, + .update_threshold = mlx5_hws_pool_update_threshold, +}; + +int mlx5_fs_hws_mh_pool_init(struct mlx5_fs_pool *fs_hws_mh_pool, + struct mlx5_core_dev *dev, + struct mlx5hws_action_mh_pattern *pattern) +{ + struct mlx5hws_action_mh_pattern *pool_pattern; + + pool_pattern = kzalloc(sizeof(*pool_pattern), GFP_KERNEL); + if (!pool_pattern) + return -ENOMEM; + pool_pattern->data = kmemdup(pattern->data, pattern->sz, GFP_KERNEL); + if (!pool_pattern->data) { + kfree(pool_pattern); + return -ENOMEM; + } + pool_pattern->sz = pattern->sz; + mlx5_fs_pool_init(fs_hws_mh_pool, dev, &mlx5_fs_hws_mh_pool_ops, + pool_pattern); + return 0; +} + +void mlx5_fs_hws_mh_pool_cleanup(struct mlx5_fs_pool *fs_hws_mh_pool) +{ + struct mlx5hws_action_mh_pattern *pool_pattern; + + mlx5_fs_pool_cleanup(fs_hws_mh_pool); + pool_pattern = fs_hws_mh_pool->pool_ctx; + if (!pool_pattern) + return; + kfree(pool_pattern->data); + kfree(pool_pattern); +} + +struct mlx5_fs_hws_mh * +mlx5_fs_hws_mh_pool_acquire_mh(struct mlx5_fs_pool *mh_pool) +{ + struct 
mlx5_fs_pool_index pool_index = {}; + struct mlx5_fs_hws_mh_bulk *mh_bulk; + int err; + + err = mlx5_fs_pool_acquire_index(mh_pool, &pool_index); + if (err) + return ERR_PTR(err); + mh_bulk = container_of(pool_index.fs_bulk, struct mlx5_fs_hws_mh_bulk, + fs_bulk); + return &mh_bulk->mhs_data[pool_index.index]; +} + +void mlx5_fs_hws_mh_pool_release_mh(struct mlx5_fs_pool *mh_pool, + struct mlx5_fs_hws_mh *mh_data) +{ + struct mlx5_fs_bulk *fs_bulk = &mh_data->bulk->fs_bulk; + struct mlx5_fs_pool_index pool_index = {}; + struct mlx5_core_dev *dev = mh_pool->dev; + + pool_index.fs_bulk = fs_bulk; + pool_index.index = mh_data->offset; + if (mlx5_fs_pool_release_index(mh_pool, &pool_index)) + mlx5_core_warn(dev, "Attempted to release modify header which is not acquired\n"); +} + +bool mlx5_fs_hws_mh_pool_match(struct mlx5_fs_pool *mh_pool, + struct mlx5hws_action_mh_pattern *pattern) +{ + struct mlx5hws_action_mh_pattern *pool_pattern; + int num_actions, i; + + pool_pattern = mh_pool->pool_ctx; + if (WARN_ON_ONCE(!pool_pattern)) + return false; + + if (pattern->sz != pool_pattern->sz) + return false; + num_actions = pattern->sz / MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto); + for (i = 0; i < num_actions; i++) { + if ((__force __be32)pattern->data[i] != + (__force __be32)pool_pattern->data[i]) + return false; + } + return true; +} diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h index 544b277be3c5..30157db4d40e 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h @@ -36,6 +36,19 @@ struct mlx5_fs_hws_pr_pool_ctx { size_t encap_data_size; }; +struct mlx5_fs_hws_mh { + struct mlx5_fs_hws_mh_bulk *bulk; + u32 offset; + u8 *data; +}; + +struct mlx5_fs_hws_mh_bulk { + struct mlx5_fs_bulk fs_bulk; + struct mlx5_fs_pool *mh_pool; + struct mlx5hws_action *hws_action; + struct mlx5_fs_hws_mh mhs_data[]; +}; + int mlx5_fs_hws_pr_pool_init(struct mlx5_fs_pool *pr_pool, struct mlx5_core_dev *dev, size_t encap_data_size, enum mlx5hws_action_type reformat_type); @@ -45,4 +58,13 @@ struct mlx5_fs_hws_pr *mlx5_fs_hws_pr_pool_acquire_pr(struct mlx5_fs_pool *pr_po void mlx5_fs_hws_pr_pool_release_pr(struct mlx5_fs_pool *pr_pool, struct mlx5_fs_hws_pr *pr_data); struct mlx5hws_action *mlx5_fs_hws_pr_get_action(struct mlx5_fs_hws_pr *pr_data); +int mlx5_fs_hws_mh_pool_init(struct mlx5_fs_pool *fs_hws_mh_pool, + struct mlx5_core_dev *dev, + struct mlx5hws_action_mh_pattern *pattern); +void mlx5_fs_hws_mh_pool_cleanup(struct mlx5_fs_pool *fs_hws_mh_pool); +struct mlx5_fs_hws_mh *mlx5_fs_hws_mh_pool_acquire_mh(struct mlx5_fs_pool *mh_pool); +void mlx5_fs_hws_mh_pool_release_mh(struct mlx5_fs_pool *mh_pool, + struct mlx5_fs_hws_mh *mh_data); +bool mlx5_fs_hws_mh_pool_match(struct mlx5_fs_pool *mh_pool, + struct mlx5hws_action_mh_pattern *pattern); #endif /* __MLX5_FS_HWS_POOLS_H__ */
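
A minimal sketch of how the pattern-keyed modify-header pools above are intended to be used (the caller below is hypothetical and not part of the patch): search for a pool whose cached pattern matches, acquire an entry from it, and release the entry when done; when no pool matches, the alloc path above creates a new pool and stores it in the xarray.

/* Illustrative sketch only: entries acquired from one matching pool
 * share the bulk's modify-header action (mh_data->bulk->hws_action).
 */
static int example_use_mh_pools(struct xarray *mh_pools,
				struct mlx5hws_action_mh_pattern *pattern)
{
	struct mlx5_fs_hws_mh *mh_data;
	struct mlx5_fs_pool *pool;
	unsigned long i;

	xa_for_each(mh_pools, i, pool) {
		if (!mlx5_fs_hws_mh_pool_match(pool, pattern))
			continue;
		mh_data = mlx5_fs_hws_mh_pool_acquire_mh(pool);
		if (IS_ERR(mh_data))
			return PTR_ERR(mh_data);
		/* ... a rule would reference mh_data->bulk->hws_action ... */
		mlx5_fs_hws_mh_pool_release_mh(pool, mh_data);
		return 0;
	}
	return -ENOENT; /* no matching pool; alloc path creates one */
}
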
From: Tariq Toukan
To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn"
CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , Mark Bloch , Moshe Shemesh , Yevgeny Kliteynik , Tariq Toukan
Subject: [PATCH net-next V2 07/15] net/mlx5: fs, manage flow counters HWS action sharing by refcount
Date: Thu, 9 Jan 2025 18:05:38 +0200
Message-ID: <20250109160546.1733647-8-tariqt@nvidia.com>
In-Reply-To: <20250109160546.1733647-1-tariqt@nvidia.com>
References: <20250109160546.1733647-1-tariqt@nvidia.com>

From: Moshe Shemesh

Multiple flow counters can utilize a single Hardware Steering (HWS) action for Hardware Steering rules. Given that these counter bulks are not exclusively created for Hardware Steering, but also serve purposes such as statistics gathering and other steering modes, it's more efficient to create the HWS action only when it's first needed by a Hardware Steering rule. This approach allows for better resource management through the use of a reference count, rather than automatically creating an HWS action for every bulk of flow counters.
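To make the sharing scheme concrete, here is a minimal standalone C11 sketch of the same lazy-create/refcount pattern, assuming stand-in types: struct shared_action plus malloc()/free() take the place of the real mlx5hws counter action create/destroy calls. It is an illustration of the scheme only, not the driver code in the diff below.

/*
 * Sketch: the shared object is created lazily on first get(), shared via a
 * reference count, and destroyed when the last user puts it back.  A
 * lock-free "inc if not zero" / "dec if not one" covers the common fast
 * paths; only the rare create/destroy transitions take the mutex.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

struct shared_action { int id; };          /* placeholder for the HWS action */

struct bulk_hws_data {
        struct shared_action *action;      /* lazily created, shared object */
        pthread_mutex_t lock;              /* serializes create/destroy */
        atomic_int refcount;               /* number of active users */
};

/* Take a reference only if the object is already live (count > 0). */
static bool ref_inc_not_zero(atomic_int *r)
{
        int old = atomic_load(r);

        while (old != 0)
                if (atomic_compare_exchange_weak(r, &old, old + 1))
                        return true;
        return false;
}

/* Drop a reference only if we are provably not the last user (count > 1). */
static bool ref_dec_not_one(atomic_int *r)
{
        int old = atomic_load(r);

        while (old > 1)
                if (atomic_compare_exchange_weak(r, &old, old - 1))
                        return true;
        return false;
}

struct shared_action *bulk_get_action(struct bulk_hws_data *d)
{
        /* Fast path: action already exists, just take a reference. */
        if (ref_inc_not_zero(&d->refcount))
                return d->action;

        pthread_mutex_lock(&d->lock);
        /* Re-check under the lock: another thread may have created it. */
        if (ref_inc_not_zero(&d->refcount)) {
                pthread_mutex_unlock(&d->lock);
                return d->action;
        }
        d->action = malloc(sizeof(*d->action));   /* "create" the action */
        if (d->action)
                atomic_store(&d->refcount, 1);
        pthread_mutex_unlock(&d->lock);
        return d->action;
}

void bulk_put_action(struct bulk_hws_data *d)
{
        /* Fast path: not the last user, just drop the reference. */
        if (ref_dec_not_one(&d->refcount))
                return;

        pthread_mutex_lock(&d->lock);
        if (atomic_fetch_sub(&d->refcount, 1) != 1) {
                pthread_mutex_unlock(&d->lock);
                return;
        }
        free(d->action);                          /* last user: "destroy" it */
        d->action = NULL;
        pthread_mutex_unlock(&d->lock);
}

The lock-free fast paths keep rule insertion and removal cheap once the action exists; the mutex is only contended around the first creation and the final destruction, which is the behavior the patch adds for the per-bulk HWS counter action.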
Signed-off-by: Moshe Shemesh Reviewed-by: Yevgeny Kliteynik Reviewed-by: Mark Bloch Signed-off-by: Tariq Toukan --- .../net/ethernet/mellanox/mlx5/core/fs_core.h | 36 ++++++++++++++ .../ethernet/mellanox/mlx5/core/fs_counters.c | 37 ++++----------- .../mlx5/core/steering/hws/fs_hws_pools.c | 47 +++++++++++++++++++ .../mlx5/core/steering/hws/fs_hws_pools.h | 3 ++ 4 files changed, 94 insertions(+), 29 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h index 5875364cef4b..1c5d687f45f0 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h @@ -316,6 +316,42 @@ struct mlx5_flow_root_namespace { const struct mlx5_flow_cmds *cmds; }; +enum mlx5_fc_type { + MLX5_FC_TYPE_ACQUIRED = 0, + MLX5_FC_TYPE_LOCAL, +}; + +struct mlx5_fc_cache { + u64 packets; + u64 bytes; + u64 lastuse; +}; + +struct mlx5_fc { + u32 id; + bool aging; + enum mlx5_fc_type type; + struct mlx5_fc_bulk *bulk; + struct mlx5_fc_cache cache; + /* last{packets,bytes} are used for calculating deltas since last reading. */ + u64 lastpackets; + u64 lastbytes; +}; + +struct mlx5_fc_bulk_hws_data { + struct mlx5hws_action *hws_action; + struct mutex lock; /* protects hws_action */ + refcount_t hws_action_refcount; +}; + +struct mlx5_fc_bulk { + struct mlx5_fs_bulk fs_bulk; + u32 base_id; + struct mlx5_fc_bulk_hws_data hws_data; + struct mlx5_fc fcs[]; +}; + +u32 mlx5_fc_get_base_id(struct mlx5_fc *counter); int mlx5_init_fc_stats(struct mlx5_core_dev *dev); void mlx5_cleanup_fc_stats(struct mlx5_core_dev *dev); void mlx5_fc_queue_stats_work(struct mlx5_core_dev *dev, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c index 94d9caacd50f..492775d3d193 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c @@ -44,28 +44,6 @@ #define MLX5_FC_POOL_MAX_THRESHOLD BIT(18) #define MLX5_FC_POOL_USED_BUFF_RATIO 10 -enum mlx5_fc_type { - MLX5_FC_TYPE_ACQUIRED = 0, - MLX5_FC_TYPE_LOCAL, -}; - -struct mlx5_fc_cache { - u64 packets; - u64 bytes; - u64 lastuse; -}; - -struct mlx5_fc { - u32 id; - bool aging; - enum mlx5_fc_type type; - struct mlx5_fc_bulk *bulk; - struct mlx5_fc_cache cache; - /* last{packets,bytes} are used for calculating deltas since last reading. 
*/ - u64 lastpackets; - u64 lastbytes; -}; - struct mlx5_fc_stats { struct xarray counters; @@ -434,13 +412,7 @@ void mlx5_fc_update_sampling_interval(struct mlx5_core_dev *dev, fc_stats->sampling_interval); } -/* Flow counter bluks */ - -struct mlx5_fc_bulk { - struct mlx5_fs_bulk fs_bulk; - u32 base_id; - struct mlx5_fc fcs[]; -}; +/* Flow counter bulks */ static void mlx5_fc_init(struct mlx5_fc *counter, struct mlx5_fc_bulk *bulk, u32 id) @@ -449,6 +421,11 @@ static void mlx5_fc_init(struct mlx5_fc *counter, struct mlx5_fc_bulk *bulk, counter->id = id; } +u32 mlx5_fc_get_base_id(struct mlx5_fc *counter) +{ + return counter->bulk->base_id; +} + static struct mlx5_fs_bulk *mlx5_fc_bulk_create(struct mlx5_core_dev *dev, void *pool_ctx) { @@ -474,6 +451,8 @@ static struct mlx5_fs_bulk *mlx5_fc_bulk_create(struct mlx5_core_dev *dev, for (i = 0; i < bulk_len; i++) mlx5_fc_init(&fc_bulk->fcs[i], fc_bulk, base_id + i); + refcount_set(&fc_bulk->hws_data.hws_action_refcount, 0); + mutex_init(&fc_bulk->hws_data.lock); return &fc_bulk->fs_bulk; fs_bulk_cleanup: diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c index 2a2175b6cfc0..2ae4ac62b0e2 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c @@ -401,3 +401,50 @@ bool mlx5_fs_hws_mh_pool_match(struct mlx5_fs_pool *mh_pool, } return true; } + +struct mlx5hws_action *mlx5_fc_get_hws_action(struct mlx5hws_context *ctx, + struct mlx5_fc *counter) +{ + u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB | MLX5HWS_ACTION_FLAG_SHARED; + struct mlx5_fc_bulk *fc_bulk = counter->bulk; + struct mlx5_fc_bulk_hws_data *fc_bulk_hws; + + fc_bulk_hws = &fc_bulk->hws_data; + /* try avoid locking if not necessary */ + if (refcount_inc_not_zero(&fc_bulk_hws->hws_action_refcount)) + return fc_bulk_hws->hws_action; + + mutex_lock(&fc_bulk_hws->lock); + if (refcount_inc_not_zero(&fc_bulk_hws->hws_action_refcount)) { + mutex_unlock(&fc_bulk_hws->lock); + return fc_bulk_hws->hws_action; + } + fc_bulk_hws->hws_action = + mlx5hws_action_create_counter(ctx, fc_bulk->base_id, flags); + if (!fc_bulk_hws->hws_action) { + mutex_unlock(&fc_bulk_hws->lock); + return NULL; + } + refcount_set(&fc_bulk_hws->hws_action_refcount, 1); + mutex_unlock(&fc_bulk_hws->lock); + + return fc_bulk_hws->hws_action; +} + +void mlx5_fc_put_hws_action(struct mlx5_fc *counter) +{ + struct mlx5_fc_bulk_hws_data *fc_bulk_hws = &counter->bulk->hws_data; + + /* try avoid locking if not necessary */ + if (refcount_dec_not_one(&fc_bulk_hws->hws_action_refcount)) + return; + + mutex_lock(&fc_bulk_hws->lock); + if (!refcount_dec_and_test(&fc_bulk_hws->hws_action_refcount)) { + mutex_unlock(&fc_bulk_hws->lock); + return; + } + mlx5hws_action_destroy(fc_bulk_hws->hws_action); + fc_bulk_hws->hws_action = NULL; + mutex_unlock(&fc_bulk_hws->lock); +} diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h index 30157db4d40e..34072551dd21 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.h @@ -67,4 +67,7 @@ void mlx5_fs_hws_mh_pool_release_mh(struct mlx5_fs_pool *mh_pool, struct mlx5_fs_hws_mh *mh_data); bool mlx5_fs_hws_mh_pool_match(struct mlx5_fs_pool *mh_pool, struct mlx5hws_action_mh_pattern *pattern); +struct mlx5hws_action 
*mlx5_fc_get_hws_action(struct mlx5hws_context *ctx, + struct mlx5_fc *counter); +void mlx5_fc_put_hws_action(struct mlx5_fc *counter); #endif /* __MLX5_FS_HWS_POOLS_H__ */ From patchwork Thu Jan 9 16:05:39 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 13932961 X-Patchwork-Delegate: kuba@kernel.org Received: from NAM04-MW2-obe.outbound.protection.outlook.com (mail-mw2nam04on2076.outbound.protection.outlook.com [40.107.101.76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 85E93220688 for ; Thu, 9 Jan 2025 16:07:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=fail smtp.client-ip=40.107.101.76 ARC-Seal: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736438871; cv=fail; b=QzDolJJwdbqapWZcFKJG9FzdCy0IdKeYJhhvgl8ijREZfdj6k+6H6wWR1djhvlGDlREfpN5tBsjLx/Q2FVjemH5hBrp4LWuXJqsmRN+qdevxnqHNFbj0GpKtuVR1Vtxwy3chKMBQwgRbNmFgkxah9nNzoMRwLj/2cd0WVCvPURw= ARC-Message-Signature: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736438871; c=relaxed/simple; bh=eON5bpbm7Q9olETMVVdMHJoC7tpOz4XKyK+WYZ8TqXI=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=e2ovnkfSYekGQk1p48LgpU0SbunexPHmfRISajKtxfKdryIppsbUtr4uYIU9NPemOwsNCQ96uIvEaM1sKCR5NYImVSFUlviJaYVHnWaGjl3gHt0UJswJF8944AvBAoTYTxEQngPHtyeGYn287I+knEYCykJtOFCXl5VE7/XaB1k= ARC-Authentication-Results: i=2; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=nvidia.com; spf=fail smtp.mailfrom=nvidia.com; dkim=pass (2048-bit key) header.d=Nvidia.com header.i=@Nvidia.com header.b=dR1PqK9b; arc=fail smtp.client-ip=40.107.101.76 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=nvidia.com Authentication-Results: smtp.subspace.kernel.org; spf=fail smtp.mailfrom=nvidia.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=Nvidia.com header.i=@Nvidia.com header.b="dR1PqK9b" ARC-Seal: i=1; a=rsa-sha256; s=arcselector10001; d=microsoft.com; cv=none; b=w9BWVtkY3Y2KjnfAiNJvfHQ6GjiqYOlwSAGJcp8UN81/rNzdT4I7J4Wx+V26vi9URPRTEUB4RiAxeON1LUfapLYB2Pupjr6l+CojNhPUG1OCWWX9OFMAn+AD+Uz70voKyjFq4/y4UhIKpIGU5oxw5tBNpwuDZNh8qKl8mflCgg91qUH157h/a9GUJQ9SW5eyF6iemPPpHrpVl5GuHqAszjvPgOLWkek41Pp+5y9MMp3kebLCYiKvMeqkr6m8vSGdIwu9f8UO2Cw5DHogbKt1tPZwDh/eaAMCb8dd4jGM7xSG8TnyWgsqFXhHZS8PnXT2N24y4ASVyUG04Fr4SBXJLQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector10001; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=w93bXKOo3XN/mUg5m7QkVCo5Awvmcz9CKhqMNzEF4Qo=; b=GX1Ul4YCMuHKif/z/46zcR2kGPEfvnxy+EJ3PerIrltt1UJJRthv3XMMObfz7sxL3UDiMBstMuyWolplW7uH7gbGA05NhacZ01f9dEV1ewzAYv3o9AZBB5j9Y/xFrPqR5hJ4wEv6Xmx4E3q0WtKhq+ITNIzD9AzksSroXt1p11dY3Otm+p4Ng6ZZgTVzb6CNRiK6LImLenb5cAGJZhWMujJAsZ3y3VdbmJDo+JNqHPFroBeBU14viokqfJSZBO7QLPA+T2MluIWpUTR1GOOELVRN+9d5KVpRfXKoKGCOE0VLbBL9SgWBQY4VEL1rzuQhcj7Kb0IcStgxa3/plki2rA== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.117.161) smtp.rcpttodomain=davemloft.net smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none (0) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
From: Tariq Toukan
To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn"
CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , Mark Bloch , Moshe Shemesh , Yevgeny Kliteynik , Tariq Toukan
Subject: [PATCH net-next V2 08/15] net/mlx5: fs, add dest table cache
Date: Thu, 9 Jan 2025 18:05:39 +0200
Message-ID: <20250109160546.1733647-9-tariqt@nvidia.com>
In-Reply-To: <20250109160546.1733647-1-tariqt@nvidia.com>
References: <20250109160546.1733647-1-tariqt@nvidia.com>

From: Moshe Shemesh

Add cache of destination flow table HWS action per HWS table.
For each flow table created cache a destination action towards this table. The cached action will be used on the downstream patch whenever a rule requires such action. Signed-off-by: Moshe Shemesh Reviewed-by: Yevgeny Kliteynik Reviewed-by: Mark Bloch Signed-off-by: Tariq Toukan --- .../mellanox/mlx5/core/steering/hws/fs_hws.c | 68 ++++++++++++++++++- .../mellanox/mlx5/core/steering/hws/fs_hws.h | 1 + 2 files changed, 66 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c index 543a7b2f0dff..7146cdd791fc 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c @@ -62,6 +62,7 @@ static int mlx5_fs_init_hws_actions_pool(struct mlx5_core_dev *dev, xa_init(&hws_pool->el2tol3tnl_pools); xa_init(&hws_pool->el2tol2tnl_pools); xa_init(&hws_pool->mh_pools); + xa_init(&hws_pool->table_dests); return 0; cleanup_insert_hdr: @@ -87,6 +88,7 @@ static void mlx5_fs_cleanup_hws_actions_pool(struct mlx5_fs_hws_context *fs_ctx) struct mlx5_fs_pool *pool; unsigned long i; + xa_destroy(&hws_pool->table_dests); xa_for_each(&hws_pool->mh_pools, i, pool) mlx5_fs_destroy_mh_pool(pool, &hws_pool->mh_pools, i); xa_destroy(&hws_pool->mh_pools); @@ -173,6 +175,50 @@ static int mlx5_fs_set_ft_default_miss(struct mlx5_flow_root_namespace *ns, return 0; } +static int mlx5_fs_add_flow_table_dest_action(struct mlx5_flow_root_namespace *ns, + struct mlx5_flow_table *ft) +{ + u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB | MLX5HWS_ACTION_FLAG_SHARED; + struct mlx5_fs_hws_context *fs_ctx = &ns->fs_hws_context; + struct mlx5hws_action *dest_ft_action; + struct xarray *dests_xa; + int err; + + dest_ft_action = mlx5hws_action_create_dest_table_num(fs_ctx->hws_ctx, + ft->id, flags); + if (!dest_ft_action) { + mlx5_core_err(ns->dev, "Failed creating dest table action\n"); + return -ENOMEM; + } + + dests_xa = &fs_ctx->hws_pool.table_dests; + err = xa_insert(dests_xa, ft->id, dest_ft_action, GFP_KERNEL); + if (err) + mlx5hws_action_destroy(dest_ft_action); + return err; +} + +static int mlx5_fs_del_flow_table_dest_action(struct mlx5_flow_root_namespace *ns, + struct mlx5_flow_table *ft) +{ + struct mlx5_fs_hws_context *fs_ctx = &ns->fs_hws_context; + struct mlx5hws_action *dest_ft_action; + struct xarray *dests_xa; + int err; + + dests_xa = &fs_ctx->hws_pool.table_dests; + dest_ft_action = xa_erase(dests_xa, ft->id); + if (!dest_ft_action) { + mlx5_core_err(ns->dev, "Failed to erase dest ft action\n"); + return -ENOENT; + } + + err = mlx5hws_action_destroy(dest_ft_action); + if (err) + mlx5_core_err(ns->dev, "Failed to destroy dest ft action\n"); + return err; +} + static int mlx5_cmd_hws_create_flow_table(struct mlx5_flow_root_namespace *ns, struct mlx5_flow_table *ft, struct mlx5_flow_table_attr *ft_attr, @@ -183,9 +229,16 @@ static int mlx5_cmd_hws_create_flow_table(struct mlx5_flow_root_namespace *ns, struct mlx5hws_table *tbl; int err; - if (mlx5_fs_cmd_is_fw_term_table(ft)) - return mlx5_fs_cmd_get_fw_cmds()->create_flow_table(ns, ft, ft_attr, - next_ft); + if (mlx5_fs_cmd_is_fw_term_table(ft)) { + err = mlx5_fs_cmd_get_fw_cmds()->create_flow_table(ns, ft, ft_attr, + next_ft); + if (err) + return err; + err = mlx5_fs_add_flow_table_dest_action(ns, ft); + if (err) + mlx5_fs_cmd_get_fw_cmds()->destroy_flow_table(ns, ft); + return err; + } if (ns->table_type != FS_FT_FDB) { mlx5_core_err(ns->dev, "Table type %d not supported for HWS\n", 
@@ -212,8 +265,13 @@ static int mlx5_cmd_hws_create_flow_table(struct mlx5_flow_root_namespace *ns, ft->max_fte = INT_MAX; + err = mlx5_fs_add_flow_table_dest_action(ns, ft); + if (err) + goto clear_ft_miss; return 0; +clear_ft_miss: + mlx5_fs_set_ft_default_miss(ns, ft, NULL); destroy_table: mlx5hws_table_destroy(tbl); ft->fs_hws_table.hws_table = NULL; @@ -225,6 +283,10 @@ static int mlx5_cmd_hws_destroy_flow_table(struct mlx5_flow_root_namespace *ns, { int err; + err = mlx5_fs_del_flow_table_dest_action(ns, ft); + if (err) + mlx5_core_err(ns->dev, "Failed to remove dest action (%d)\n", err); + if (mlx5_fs_cmd_is_fw_term_table(ft)) return mlx5_fs_cmd_get_fw_cmds()->destroy_flow_table(ns, ft); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h index 1e53c0156338..205d8d71e7d9 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h @@ -19,6 +19,7 @@ struct mlx5_fs_hws_actions_pool { struct xarray el2tol3tnl_pools; struct xarray el2tol2tnl_pools; struct xarray mh_pools; + struct xarray table_dests; }; struct mlx5_fs_hws_context { From patchwork Thu Jan 9 16:05:40 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 13932962 X-Patchwork-Delegate: kuba@kernel.org Received: from NAM10-BN7-obe.outbound.protection.outlook.com (mail-bn7nam10on2054.outbound.protection.outlook.com [40.107.92.54]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6E5D221E0A8 for ; Thu, 9 Jan 2025 16:07:54 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=fail smtp.client-ip=40.107.92.54 ARC-Seal: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736438876; cv=fail; b=GOSqRp6QKymjq70UTqCBg94zilME4yaZcJw8I/RRroyv2Gwr/WXFkGhd4XJeigc3jZEGFCr9fZmFhuV3A8IlqY/ZfqrSOpndF+onbs3zx7I1ZdbHAuYaK+LrSUof12/oEew3A+amMNSS2qKs5+vi1woephBfl9J+6eKvjxdaCUM= ARC-Message-Signature: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736438876; c=relaxed/simple; bh=XJODezJIQ94PIFcpCH6Ri6f8tMpAocT/42/8zT3Km+0=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=PNrTQZgZ0gXdOkDsLCWWY+0hFliYGfOhctOV1mKsdXxMHqIAi4IYclOzCXZMNLIZd2BpvBzImp8LqkejbzFKHbo3eAeHjUB2WbDe2qRKoxytI/UBvpu7azChvjKn6rkkxuKmRlKrjL/wllhddYmkDV/3HQvIu/scbOUpC7QNOrc= ARC-Authentication-Results: i=2; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=nvidia.com; spf=fail smtp.mailfrom=nvidia.com; dkim=pass (2048-bit key) header.d=Nvidia.com header.i=@Nvidia.com header.b=a1N0Wt/Y; arc=fail smtp.client-ip=40.107.92.54 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=nvidia.com Authentication-Results: smtp.subspace.kernel.org; spf=fail smtp.mailfrom=nvidia.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=Nvidia.com header.i=@Nvidia.com header.b="a1N0Wt/Y" ARC-Seal: i=1; a=rsa-sha256; s=arcselector10001; d=microsoft.com; cv=none; 
From: Tariq Toukan
To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn"
CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , Mark Bloch , Moshe Shemesh , Yevgeny Kliteynik , Tariq Toukan
Subject: [PATCH net-next V2 09/15] net/mlx5: fs, add HWS fte API functions
Date: Thu, 9 Jan 2025 18:05:40 +0200
Message-ID: <20250109160546.1733647-10-tariqt@nvidia.com>
In-Reply-To: <20250109160546.1733647-1-tariqt@nvidia.com>
References: <20250109160546.1733647-1-tariqt@nvidia.com>
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR12MB8555 X-Patchwork-Delegate: kuba@kernel.org From: Moshe Shemesh Add create, destroy and update fte API functions for adding, removing and updating flow steering rules in HW Steering mode. Get HWS actions according to required rule, use actions from pool whenever possible. Signed-off-by: Moshe Shemesh Reviewed-by: Yevgeny Kliteynik Reviewed-by: Mark Bloch Signed-off-by: Tariq Toukan --- .../net/ethernet/mellanox/mlx5/core/fs_core.h | 5 +- .../mellanox/mlx5/core/steering/hws/fs_hws.c | 549 ++++++++++++++++++ .../mellanox/mlx5/core/steering/hws/fs_hws.h | 13 + 3 files changed, 566 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h index 1c5d687f45f0..20837e526679 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h @@ -254,7 +254,10 @@ struct fs_fte_dup { /* Type of children is mlx5_flow_rule */ struct fs_fte { struct fs_node node; - struct mlx5_fs_dr_rule fs_dr_rule; + union { + struct mlx5_fs_dr_rule fs_dr_rule; + struct mlx5_fs_hws_rule fs_hws_rule; + }; u32 val[MLX5_ST_SZ_DW_MATCH_PARAM]; struct fs_fte_action act_dests; struct fs_fte_dup *dup; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c index 7146cdd791fc..6a552b3b6e16 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c @@ -360,6 +360,552 @@ static int mlx5_cmd_hws_destroy_flow_group(struct mlx5_flow_root_namespace *ns, return mlx5hws_bwc_matcher_destroy(fg->fs_hws_matcher.matcher); } +static struct mlx5hws_action * +mlx5_fs_get_dest_action_ft(struct mlx5_fs_hws_context *fs_ctx, + struct mlx5_flow_rule *dst) +{ + return xa_load(&fs_ctx->hws_pool.table_dests, dst->dest_attr.ft->id); +} + +static struct mlx5hws_action * +mlx5_fs_get_dest_action_table_num(struct mlx5_fs_hws_context *fs_ctx, + struct mlx5_flow_rule *dst) +{ + u32 table_num = dst->dest_attr.ft_num; + + return xa_load(&fs_ctx->hws_pool.table_dests, table_num); +} + +static struct mlx5hws_action * +mlx5_fs_create_dest_action_table_num(struct mlx5_fs_hws_context *fs_ctx, + struct mlx5_flow_rule *dst) +{ + u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB | MLX5HWS_ACTION_FLAG_SHARED; + struct mlx5hws_context *ctx = fs_ctx->hws_ctx; + u32 table_num = dst->dest_attr.ft_num; + + return mlx5hws_action_create_dest_table_num(ctx, table_num, flags); +} + +static struct mlx5hws_action * +mlx5_fs_create_dest_action_range(struct mlx5hws_context *ctx, + struct mlx5_flow_rule *dst) +{ + u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB | MLX5HWS_ACTION_FLAG_SHARED; + struct mlx5_flow_destination *dest_attr = &dst->dest_attr; + + return mlx5hws_action_create_dest_match_range(ctx, + dest_attr->range.field, + dest_attr->range.hit_ft, + dest_attr->range.miss_ft, + dest_attr->range.min, + dest_attr->range.max, + flags); +} + +static struct mlx5hws_action * +mlx5_fs_create_action_dest_array(struct mlx5hws_context *ctx, + struct mlx5hws_action_dest_attr *dests, + u32 num_of_dests, bool ignore_flow_level, + u32 flow_source) +{ + u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB | MLX5HWS_ACTION_FLAG_SHARED; + + return mlx5hws_action_create_dest_array(ctx, num_of_dests, dests, + ignore_flow_level, + flow_source, flags); +} + +static struct mlx5hws_action * 
+mlx5_fs_get_action_push_vlan(struct mlx5_fs_hws_context *fs_ctx) +{ + return fs_ctx->hws_pool.push_vlan_action; +} + +static u32 mlx5_fs_calc_vlan_hdr(struct mlx5_fs_vlan *vlan) +{ + u16 n_ethtype = vlan->ethtype; + u8 prio = vlan->prio; + u16 vid = vlan->vid; + + return (u32)n_ethtype << 16 | (u32)(prio) << 12 | (u32)vid; +} + +static struct mlx5hws_action * +mlx5_fs_get_action_pop_vlan(struct mlx5_fs_hws_context *fs_ctx) +{ + return fs_ctx->hws_pool.pop_vlan_action; +} + +static struct mlx5hws_action * +mlx5_fs_get_action_decap_tnl_l2_to_l2(struct mlx5_fs_hws_context *fs_ctx) +{ + return fs_ctx->hws_pool.decapl2_action; +} + +static struct mlx5hws_action * +mlx5_fs_get_dest_action_drop(struct mlx5_fs_hws_context *fs_ctx) +{ + return fs_ctx->hws_pool.drop_action; +} + +static struct mlx5hws_action * +mlx5_fs_get_action_tag(struct mlx5_fs_hws_context *fs_ctx) +{ + return fs_ctx->hws_pool.tag_action; +} + +static struct mlx5hws_action * +mlx5_fs_create_action_last(struct mlx5hws_context *ctx) +{ + u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB | MLX5HWS_ACTION_FLAG_SHARED; + + return mlx5hws_action_create_last(ctx, flags); +} + +static void mlx5_fs_destroy_fs_action(struct mlx5_fs_hws_rule_action *fs_action) +{ + switch (mlx5hws_action_get_type(fs_action->action)) { + case MLX5HWS_ACTION_TYP_CTR: + mlx5_fc_put_hws_action(fs_action->counter); + break; + default: + mlx5hws_action_destroy(fs_action->action); + } +} + +static void +mlx5_fs_destroy_fs_actions(struct mlx5_fs_hws_rule_action **fs_actions, + int *num_fs_actions) +{ + int i; + + /* Free in reverse order to handle action dependencies */ + for (i = *num_fs_actions - 1; i >= 0; i--) + mlx5_fs_destroy_fs_action(*fs_actions + i); + *num_fs_actions = 0; + kfree(*fs_actions); + *fs_actions = NULL; +} + +/* Splits FTE's actions into cached, rule and destination actions. + * The cached and destination actions are saved on the fte hws rule. + * The rule actions are returned as a parameter, together with their count. + * We want to support a rule with 32 destinations, which means we need to + * account for 32 destinations plus usually a counter plus one more action + * for a multi-destination flow table. + * 32 is SW limitation for array size, keep. 
HWS limitation is 16M STEs per matcher + */ +#define MLX5_FLOW_CONTEXT_ACTION_MAX 34 +static int mlx5_fs_fte_get_hws_actions(struct mlx5_flow_root_namespace *ns, + struct mlx5_flow_table *ft, + struct mlx5_flow_group *group, + struct fs_fte *fte, + struct mlx5hws_rule_action **ractions) +{ + struct mlx5_flow_act *fte_action = &fte->act_dests.action; + struct mlx5_fs_hws_context *fs_ctx = &ns->fs_hws_context; + struct mlx5hws_action_dest_attr *dest_actions; + struct mlx5hws_context *ctx = fs_ctx->hws_ctx; + struct mlx5_fs_hws_rule_action *fs_actions; + struct mlx5_core_dev *dev = ns->dev; + struct mlx5hws_action *dest_action; + struct mlx5hws_action *tmp_action; + struct mlx5_fs_hws_pr *pr_data; + struct mlx5_fs_hws_mh *mh_data; + bool delay_encap_set = false; + struct mlx5_flow_rule *dst; + int num_dest_actions = 0; + int num_fs_actions = 0; + int num_actions = 0; + int err; + + *ractions = kcalloc(MLX5_FLOW_CONTEXT_ACTION_MAX, sizeof(**ractions), + GFP_KERNEL); + if (!*ractions) { + err = -ENOMEM; + goto out_err; + } + + fs_actions = kcalloc(MLX5_FLOW_CONTEXT_ACTION_MAX, + sizeof(*fs_actions), GFP_KERNEL); + if (!fs_actions) { + err = -ENOMEM; + goto free_actions_alloc; + } + + dest_actions = kcalloc(MLX5_FLOW_CONTEXT_ACTION_MAX, + sizeof(*dest_actions), GFP_KERNEL); + if (!dest_actions) { + err = -ENOMEM; + goto free_fs_actions_alloc; + } + + /* The order of the actions are must to be kept, only the following + * order is supported by HW steering: + * HWS: decap -> remove_hdr -> pop_vlan -> modify header -> push_vlan + * -> reformat (insert_hdr/encap) -> ctr -> tag -> aso + * -> drop -> FWD:tbl/vport/sampler/tbl_num/range -> dest_array -> last + */ + if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_DECAP) { + tmp_action = mlx5_fs_get_action_decap_tnl_l2_to_l2(fs_ctx); + if (!tmp_action) { + err = -ENOMEM; + goto free_dest_actions_alloc; + } + (*ractions)[num_actions++].action = tmp_action; + } + + if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT) { + int reformat_type = fte_action->pkt_reformat->reformat_type; + + if (fte_action->pkt_reformat->owner == MLX5_FLOW_RESOURCE_OWNER_FW) { + mlx5_core_err(dev, "FW-owned reformat can't be used in HWS rule\n"); + err = -EINVAL; + goto free_actions; + } + + if (reformat_type == MLX5_REFORMAT_TYPE_L3_TUNNEL_TO_L2) { + pr_data = fte_action->pkt_reformat->fs_hws_action.pr_data; + (*ractions)[num_actions].reformat.offset = pr_data->offset; + (*ractions)[num_actions].reformat.hdr_idx = pr_data->hdr_idx; + (*ractions)[num_actions].reformat.data = pr_data->data; + (*ractions)[num_actions++].action = + fte_action->pkt_reformat->fs_hws_action.hws_action; + } else if (reformat_type == MLX5_REFORMAT_TYPE_REMOVE_HDR) { + (*ractions)[num_actions++].action = + fte_action->pkt_reformat->fs_hws_action.hws_action; + } else { + delay_encap_set = true; + } + } + + if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_POP) { + tmp_action = mlx5_fs_get_action_pop_vlan(fs_ctx); + if (!tmp_action) { + err = -ENOMEM; + goto free_actions; + } + (*ractions)[num_actions++].action = tmp_action; + } + + if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_POP_2) { + tmp_action = mlx5_fs_get_action_pop_vlan(fs_ctx); + if (!tmp_action) { + err = -ENOMEM; + goto free_actions; + } + (*ractions)[num_actions++].action = tmp_action; + } + + if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) { + mh_data = fte_action->modify_hdr->fs_hws_action.mh_data; + (*ractions)[num_actions].modify_header.offset = mh_data->offset; + 
(*ractions)[num_actions].modify_header.data = mh_data->data; + (*ractions)[num_actions++].action = + fte_action->modify_hdr->fs_hws_action.hws_action; + } + + if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH) { + tmp_action = mlx5_fs_get_action_push_vlan(fs_ctx); + if (!tmp_action) { + err = -ENOMEM; + goto free_actions; + } + (*ractions)[num_actions].push_vlan.vlan_hdr = + htonl(mlx5_fs_calc_vlan_hdr(&fte_action->vlan[0])); + (*ractions)[num_actions++].action = tmp_action; + } + + if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH_2) { + tmp_action = mlx5_fs_get_action_push_vlan(fs_ctx); + if (!tmp_action) { + err = -ENOMEM; + goto free_actions; + } + (*ractions)[num_actions].push_vlan.vlan_hdr = + htonl(mlx5_fs_calc_vlan_hdr(&fte_action->vlan[1])); + (*ractions)[num_actions++].action = tmp_action; + } + + if (delay_encap_set) { + pr_data = fte_action->pkt_reformat->fs_hws_action.pr_data; + (*ractions)[num_actions].reformat.offset = pr_data->offset; + (*ractions)[num_actions].reformat.data = pr_data->data; + (*ractions)[num_actions++].action = + fte_action->pkt_reformat->fs_hws_action.hws_action; + } + + if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_COUNT) { + list_for_each_entry(dst, &fte->node.children, node.list) { + struct mlx5_fc *counter; + + if (dst->dest_attr.type != + MLX5_FLOW_DESTINATION_TYPE_COUNTER) + continue; + + if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) { + err = -EOPNOTSUPP; + goto free_actions; + } + + counter = dst->dest_attr.counter; + tmp_action = mlx5_fc_get_hws_action(ctx, counter); + if (!tmp_action) { + err = -EINVAL; + goto free_actions; + } + + (*ractions)[num_actions].counter.offset = + mlx5_fc_id(counter) - mlx5_fc_get_base_id(counter); + (*ractions)[num_actions++].action = tmp_action; + fs_actions[num_fs_actions].action = tmp_action; + fs_actions[num_fs_actions++].counter = counter; + } + } + + if (fte->act_dests.flow_context.flow_tag) { + if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) { + err = -EOPNOTSUPP; + goto free_actions; + } + tmp_action = mlx5_fs_get_action_tag(fs_ctx); + if (!tmp_action) { + err = -ENOMEM; + goto free_actions; + } + (*ractions)[num_actions].tag.value = fte->act_dests.flow_context.flow_tag; + (*ractions)[num_actions++].action = tmp_action; + } + + if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_EXECUTE_ASO) { + err = -EOPNOTSUPP; + goto free_actions; + } + + if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_DROP) { + dest_action = mlx5_fs_get_dest_action_drop(fs_ctx); + if (!dest_action) { + err = -ENOMEM; + goto free_actions; + } + dest_actions[num_dest_actions++].dest = dest_action; + } + + if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) { + list_for_each_entry(dst, &fte->node.children, node.list) { + struct mlx5_flow_destination *attr = &dst->dest_attr; + + if (num_fs_actions == MLX5_FLOW_CONTEXT_ACTION_MAX || + num_dest_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) { + err = -EOPNOTSUPP; + goto free_actions; + } + if (attr->type == MLX5_FLOW_DESTINATION_TYPE_COUNTER) + continue; + + switch (attr->type) { + case MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE: + dest_action = mlx5_fs_get_dest_action_ft(fs_ctx, dst); + break; + case MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE_NUM: + dest_action = mlx5_fs_get_dest_action_table_num(fs_ctx, + dst); + if (dest_action) + break; + dest_action = mlx5_fs_create_dest_action_table_num(fs_ctx, + dst); + fs_actions[num_fs_actions++].action = dest_action; + break; + case MLX5_FLOW_DESTINATION_TYPE_RANGE: + dest_action = mlx5_fs_create_dest_action_range(ctx, dst); + 
fs_actions[num_fs_actions++].action = dest_action; + break; + default: + err = -EOPNOTSUPP; + goto free_actions; + } + if (!dest_action) { + err = -ENOMEM; + goto free_actions; + } + dest_actions[num_dest_actions++].dest = dest_action; + } + } + + if (num_dest_actions == 1) { + if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) { + err = -EOPNOTSUPP; + goto free_actions; + } + (*ractions)[num_actions++].action = dest_actions->dest; + } else if (num_dest_actions > 1) { + u32 flow_source = fte->act_dests.flow_context.flow_source; + bool ignore_flow_level; + + if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX || + num_fs_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) { + err = -EOPNOTSUPP; + goto free_actions; + } + ignore_flow_level = + !!(fte_action->flags & FLOW_ACT_IGNORE_FLOW_LEVEL); + tmp_action = mlx5_fs_create_action_dest_array(ctx, dest_actions, + num_dest_actions, + ignore_flow_level, + flow_source); + if (!tmp_action) { + err = -EOPNOTSUPP; + goto free_actions; + } + fs_actions[num_fs_actions++].action = tmp_action; + (*ractions)[num_actions++].action = tmp_action; + } + + if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX || + num_fs_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) { + err = -EOPNOTSUPP; + goto free_actions; + } + + tmp_action = mlx5_fs_create_action_last(ctx); + if (!tmp_action) { + err = -ENOMEM; + goto free_actions; + } + fs_actions[num_fs_actions++].action = tmp_action; + (*ractions)[num_actions++].action = tmp_action; + + kfree(dest_actions); + + /* Actions created specifically for this rule will be destroyed + * once rule is deleted. + */ + fte->fs_hws_rule.num_fs_actions = num_fs_actions; + fte->fs_hws_rule.hws_fs_actions = fs_actions; + + return 0; + +free_actions: + mlx5_fs_destroy_fs_actions(&fs_actions, &num_fs_actions); +free_dest_actions_alloc: + kfree(dest_actions); +free_fs_actions_alloc: + kfree(fs_actions); +free_actions_alloc: + kfree(*ractions); + *ractions = NULL; +out_err: + return err; +} + +static int mlx5_cmd_hws_create_fte(struct mlx5_flow_root_namespace *ns, + struct mlx5_flow_table *ft, + struct mlx5_flow_group *group, + struct fs_fte *fte) +{ + struct mlx5hws_match_parameters params; + struct mlx5hws_rule_action *ractions; + struct mlx5hws_bwc_rule *rule; + int err = 0; + + if (mlx5_fs_cmd_is_fw_term_table(ft)) { + /* Packet reformat on terminamtion table not supported yet */ + if (fte->act_dests.action.action & + MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT) + return -EOPNOTSUPP; + return mlx5_fs_cmd_get_fw_cmds()->create_fte(ns, ft, group, fte); + } + + err = mlx5_fs_fte_get_hws_actions(ns, ft, group, fte, &ractions); + if (err) + goto out_err; + + params.match_sz = sizeof(fte->val); + params.match_buf = fte->val; + + rule = mlx5hws_bwc_rule_create(group->fs_hws_matcher.matcher, ¶ms, + fte->act_dests.flow_context.flow_source, + ractions); + kfree(ractions); + if (!rule) { + err = -EINVAL; + goto free_actions; + } + + fte->fs_hws_rule.bwc_rule = rule; + return 0; + +free_actions: + mlx5_fs_destroy_fs_actions(&fte->fs_hws_rule.hws_fs_actions, + &fte->fs_hws_rule.num_fs_actions); +out_err: + mlx5_core_err(ns->dev, "Failed to create hws rule err(%d)\n", err); + return err; +} + +static int mlx5_cmd_hws_delete_fte(struct mlx5_flow_root_namespace *ns, + struct mlx5_flow_table *ft, + struct fs_fte *fte) +{ + struct mlx5_fs_hws_rule *rule = &fte->fs_hws_rule; + int err; + + if (mlx5_fs_cmd_is_fw_term_table(ft)) + return mlx5_fs_cmd_get_fw_cmds()->delete_fte(ns, ft, fte); + + err = mlx5hws_bwc_rule_destroy(rule->bwc_rule); + rule->bwc_rule = NULL; + + 
mlx5_fs_destroy_fs_actions(&rule->hws_fs_actions, &rule->num_fs_actions); + + return err; +} + +static int mlx5_cmd_hws_update_fte(struct mlx5_flow_root_namespace *ns, + struct mlx5_flow_table *ft, + struct mlx5_flow_group *group, + int modify_mask, + struct fs_fte *fte) +{ + int allowed_mask = BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_ACTION) | + BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_DESTINATION_LIST) | + BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_FLOW_COUNTERS); + struct mlx5_fs_hws_rule_action *saved_hws_fs_actions; + struct mlx5hws_rule_action *ractions; + int saved_num_fs_actions; + int ret; + + if (mlx5_fs_cmd_is_fw_term_table(ft)) + return mlx5_fs_cmd_get_fw_cmds()->update_fte(ns, ft, group, + modify_mask, fte); + + if ((modify_mask & ~allowed_mask) != 0) + return -EINVAL; + + saved_hws_fs_actions = fte->fs_hws_rule.hws_fs_actions; + saved_num_fs_actions = fte->fs_hws_rule.num_fs_actions; + + ret = mlx5_fs_fte_get_hws_actions(ns, ft, group, fte, &ractions); + if (ret) + return ret; + + ret = mlx5hws_bwc_rule_action_update(fte->fs_hws_rule.bwc_rule, ractions); + kfree(ractions); + if (ret) + goto restore_actions; + + mlx5_fs_destroy_fs_actions(&saved_hws_fs_actions, &saved_num_fs_actions); + return ret; + +restore_actions: + mlx5_fs_destroy_fs_actions(&fte->fs_hws_rule.hws_fs_actions, + &fte->fs_hws_rule.num_fs_actions); + fte->fs_hws_rule.hws_fs_actions = saved_hws_fs_actions; + fte->fs_hws_rule.num_fs_actions = saved_num_fs_actions; + return ret; +} + static struct mlx5hws_action * mlx5_fs_create_action_remove_header_vlan(struct mlx5hws_context *ctx) { @@ -719,6 +1265,9 @@ static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = { .update_root_ft = mlx5_cmd_hws_update_root_ft, .create_flow_group = mlx5_cmd_hws_create_flow_group, .destroy_flow_group = mlx5_cmd_hws_destroy_flow_group, + .create_fte = mlx5_cmd_hws_create_fte, + .delete_fte = mlx5_cmd_hws_delete_fte, + .update_fte = mlx5_cmd_hws_update_fte, .packet_reformat_alloc = mlx5_cmd_hws_packet_reformat_alloc, .packet_reformat_dealloc = mlx5_cmd_hws_packet_reformat_dealloc, .modify_header_alloc = mlx5_cmd_hws_modify_header_alloc, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h index 205d8d71e7d9..d3f0c2f5026a 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h @@ -43,6 +43,19 @@ struct mlx5_fs_hws_matcher { struct mlx5hws_bwc_matcher *matcher; }; +struct mlx5_fs_hws_rule_action { + struct mlx5hws_action *action; + union { + struct mlx5_fc *counter; + }; +}; + +struct mlx5_fs_hws_rule { + struct mlx5hws_bwc_rule *bwc_rule; + struct mlx5_fs_hws_rule_action *hws_fs_actions; + int num_fs_actions; +}; + #ifdef CONFIG_MLX5_HW_STEERING const struct mlx5_flow_cmds *mlx5_fs_cmd_get_hws_cmds(void); From patchwork Thu Jan 9 16:05:41 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 13932963 X-Patchwork-Delegate: kuba@kernel.org Received: from NAM10-DM6-obe.outbound.protection.outlook.com (mail-dm6nam10on2048.outbound.protection.outlook.com [40.107.93.48]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AE7832206BB for ; Thu, 9 Jan 2025 16:07:57 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=fail smtp.client-ip=40.107.93.48 ARC-Seal: i=2; a=rsa-sha256; 
From patchwork Thu Jan 9 16:05:41 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13932963
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn"
CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , Mark Bloch , Moshe Shemesh , Yevgeny Kliteynik , Tariq Toukan
Subject: [PATCH net-next V2 10/15] net/mlx5: fs, add support for dest vport HWS action
Date: Thu, 9 Jan 2025 18:05:41 +0200
Message-ID: <20250109160546.1733647-11-tariqt@nvidia.com>
In-Reply-To: <20250109160546.1733647-1-tariqt@nvidia.com>
References: <20250109160546.1733647-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Moshe Shemesh

Add support for HW Steering action of vport destination. Add dest vport
actions cache. Hold action in cache per vport / vport and vhca_id. Add
action to cache on demand and remove on namespace closure to reduce
actions creation and destroy.

Signed-off-by: Moshe Shemesh
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
---
 .../mellanox/mlx5/core/steering/hws/fs_hws.c | 63 +++++++++++++++++++
 .../mellanox/mlx5/core/steering/hws/fs_hws.h |  2 +
 2 files changed, 65 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
index 6a552b3b6e16..58a9c03e6ef9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /* Copyright (c) 2025 NVIDIA Corporation & Affiliates */
 
+#include
 #include
 #include
 #include
@@ -63,6 +64,8 @@ static int mlx5_fs_init_hws_actions_pool(struct mlx5_core_dev *dev,
     xa_init(&hws_pool->el2tol2tnl_pools);
     xa_init(&hws_pool->mh_pools);
     xa_init(&hws_pool->table_dests);
+    xa_init(&hws_pool->vport_dests);
+    xa_init(&hws_pool->vport_vhca_dests);
     return 0;
 
 cleanup_insert_hdr:
@@ -85,9 +88,16 @@ static int mlx5_fs_init_hws_actions_pool(struct mlx5_core_dev *dev,
 static void mlx5_fs_cleanup_hws_actions_pool(struct mlx5_fs_hws_context *fs_ctx)
 {
     struct mlx5_fs_hws_actions_pool *hws_pool = &fs_ctx->hws_pool;
+    struct mlx5hws_action *action;
     struct mlx5_fs_pool *pool;
     unsigned long i;
 
+    xa_for_each(&hws_pool->vport_vhca_dests, i, action)
+        mlx5hws_action_destroy(action);
+    xa_destroy(&hws_pool->vport_vhca_dests);
+    xa_for_each(&hws_pool->vport_dests, i, action)
+        mlx5hws_action_destroy(action);
+    xa_destroy(&hws_pool->vport_dests);
     xa_destroy(&hws_pool->table_dests);
     xa_for_each(&hws_pool->mh_pools, i, pool)
         mlx5_fs_destroy_mh_pool(pool, &hws_pool->mh_pools, i);
@@ -387,6 +397,52 @@ mlx5_fs_create_dest_action_table_num(struct mlx5_fs_hws_context *fs_ctx,
     return mlx5hws_action_create_dest_table_num(ctx, table_num, flags);
 }
 
+static struct mlx5hws_action *
+mlx5_fs_get_dest_action_vport(struct mlx5_fs_hws_context *fs_ctx,
+                              struct mlx5_flow_rule *dst,
+                              bool is_dest_type_uplink)
+{
+    u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB | MLX5HWS_ACTION_FLAG_SHARED;
+    struct mlx5_flow_destination *dest_attr = &dst->dest_attr;
+    struct mlx5hws_context *ctx = fs_ctx->hws_ctx;
+    struct mlx5hws_action *dest;
+    struct xarray *dests_xa;
+    bool vhca_id_valid;
+    unsigned long idx;
+    u16 vport_num;
+    int err;
+
+    vhca_id_valid = is_dest_type_uplink ||
+                    (dest_attr->vport.flags & MLX5_FLOW_DEST_VPORT_VHCA_ID);
+    vport_num = is_dest_type_uplink ?
+                MLX5_VPORT_UPLINK : dest_attr->vport.num;
+    if (vhca_id_valid) {
+        dests_xa = &fs_ctx->hws_pool.vport_vhca_dests;
+        idx = dest_attr->vport.vhca_id << 16 | vport_num;
+    } else {
+        dests_xa = &fs_ctx->hws_pool.vport_dests;
+        idx = vport_num;
+    }
+dest_load:
+    dest = xa_load(dests_xa, idx);
+    if (dest)
+        return dest;
+
+    dest = mlx5hws_action_create_dest_vport(ctx, vport_num, vhca_id_valid,
+                                            dest_attr->vport.vhca_id, flags);
+
+    err = xa_insert(dests_xa, idx, dest, GFP_KERNEL);
+    if (err) {
+        mlx5hws_action_destroy(dest);
+        dest = NULL;
+
+        if (err == -EBUSY)
+            /* xarray entry was already stored by another thread */
+            goto dest_load;
+    }
+
+    return dest;
+}
+
 static struct mlx5hws_action *
 mlx5_fs_create_dest_action_range(struct mlx5hws_context *ctx,
                                  struct mlx5_flow_rule *dst)
@@ -695,6 +751,8 @@ static int mlx5_fs_fte_get_hws_actions(struct mlx5_flow_root_namespace *ns,
     if (fte_action->action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) {
         list_for_each_entry(dst, &fte->node.children, node.list) {
             struct mlx5_flow_destination *attr = &dst->dest_attr;
+            bool type_uplink =
+                attr->type == MLX5_FLOW_DESTINATION_TYPE_UPLINK;
 
             if (num_fs_actions == MLX5_FLOW_CONTEXT_ACTION_MAX ||
                 num_dest_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
@@ -721,6 +779,11 @@ static int mlx5_fs_fte_get_hws_actions(struct mlx5_flow_root_namespace *ns,
                 dest_action = mlx5_fs_create_dest_action_range(ctx, dst);
                 fs_actions[num_fs_actions++].action = dest_action;
                 break;
+            case MLX5_FLOW_DESTINATION_TYPE_UPLINK:
+            case MLX5_FLOW_DESTINATION_TYPE_VPORT:
+                dest_action = mlx5_fs_get_dest_action_vport(fs_ctx, dst,
+                                                            type_uplink);
+                break;
             default:
                 err = -EOPNOTSUPP;
                 goto free_actions;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
index d3f0c2f5026a..9e970ac75d2a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
@@ -20,6 +20,8 @@ struct mlx5_fs_hws_actions_pool {
     struct xarray el2tol2tnl_pools;
     struct xarray mh_pools;
    struct xarray table_dests;
+    struct xarray vport_vhca_dests;
+    struct xarray vport_dests;
 };
 
 struct mlx5_fs_hws_context {
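The cache above keys one xarray by plain vport number and a second one by (vhca_id << 16 | vport_num), creates actions lazily on first use, and relies on xa_insert() returning -EBUSY to resolve the race when two rule inserts miss the cache at the same time. A kernel-style sketch of that get-or-create pattern, with hypothetical make_action()/free_action() helpers standing in for the real HWS action constructor and destructor:

/* Hypothetical constructor/destructor, stand-ins for the HWS helpers. */
#include <linux/xarray.h>
#include <linux/errno.h>

extern void *make_action(unsigned long idx);
extern void free_action(void *obj);

static void *get_or_create_cached(struct xarray *xa, unsigned long idx)
{
    void *obj;
    int err;

retry:
    obj = xa_load(xa, idx);     /* fast path: already cached */
    if (obj)
        return obj;

    obj = make_action(idx);     /* hypothetical constructor */
    if (!obj)
        return NULL;

    err = xa_insert(xa, idx, obj, GFP_KERNEL);
    if (err) {
        free_action(obj);       /* hypothetical destructor */
        if (err == -EBUSY)      /* another thread cached it first */
            goto retry;
        return NULL;
    }

    return obj;
}

Because entries are only removed when the namespace is torn down, a successful lookup can be used without extra reference counting, which is what keeps the per-rule action assembly cheap.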
From patchwork Thu Jan 9 16:05:42 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13932960
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn"
CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , Mark Bloch , Moshe Shemesh , Yevgeny Kliteynik , Tariq Toukan
Subject: [PATCH net-next V2 11/15] net/mlx5: fs, set create match definer to not supported by HWS
Date: Thu, 9 Jan 2025 18:05:42 +0200
Message-ID: <20250109160546.1733647-12-tariqt@nvidia.com>
In-Reply-To: <20250109160546.1733647-1-tariqt@nvidia.com>
References: <20250109160546.1733647-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Moshe Shemesh

Currently HW Steering does not support the API functions of create and
destroy match definer. Return not supported error in case requested.

Signed-off-by: Moshe Shemesh
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
---
 .../mellanox/mlx5/core/steering/hws/fs_hws.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
index 58a9c03e6ef9..dd9afde60070 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
@@ -1321,6 +1321,18 @@ static void mlx5_cmd_hws_modify_header_dealloc(struct mlx5_flow_root_namespace *
     modify_hdr->fs_hws_action.mh_data = NULL;
 }
 
+static int mlx5_cmd_hws_create_match_definer(struct mlx5_flow_root_namespace *ns,
+                                             u16 format_id, u32 *match_mask)
+{
+    return -EOPNOTSUPP;
+}
+
+static int mlx5_cmd_hws_destroy_match_definer(struct mlx5_flow_root_namespace *ns,
+                                              int definer_id)
+{
+    return -EOPNOTSUPP;
+}
+
 static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = {
     .create_flow_table = mlx5_cmd_hws_create_flow_table,
     .destroy_flow_table = mlx5_cmd_hws_destroy_flow_table,
@@ -1335,6 +1347,8 @@ static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = {
     .packet_reformat_dealloc = mlx5_cmd_hws_packet_reformat_dealloc,
     .modify_header_alloc = mlx5_cmd_hws_modify_header_alloc,
     .modify_header_dealloc = mlx5_cmd_hws_modify_header_dealloc,
+    .create_match_definer = mlx5_cmd_hws_create_match_definer,
+    .destroy_match_definer = mlx5_cmd_hws_destroy_match_definer,
     .create_ns = mlx5_cmd_hws_create_ns,
     .destroy_ns = mlx5_cmd_hws_destroy_ns,
     .set_peer = mlx5_cmd_hws_set_peer,
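Stubbing the two definer callbacks keeps the HWS command table complete while telling callers that the feature is simply not offered by this backend. A small illustrative sketch of the same idea, with invented names rather than the driver's symbols:

/* Illustrative only: an ops table where an optional operation is stubbed
 * to return -EOPNOTSUPP so callers can treat it as "not offered" rather
 * than a hard failure. Names here are made up, not mlx5 symbols.
 */
#include <errno.h>
#include <stdio.h>

struct steering_ops {
    int (*create_definer)(int format_id);
};

static int create_definer_unsupported(int format_id)
{
    (void)format_id;
    return -EOPNOTSUPP;
}

static const struct steering_ops hws_like_ops = {
    .create_definer = create_definer_unsupported,
};

int main(void)
{
    if (hws_like_ops.create_definer(0) == -EOPNOTSUPP)
        printf("match definers not offered by this backend\n");
    return 0;
}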
From patchwork Thu Jan 9 16:05:43 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13932964
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn"
CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , Mark Bloch , Moshe Shemesh , Yevgeny Kliteynik , Tariq Toukan
Subject: [PATCH net-next V2 12/15] net/mlx5: fs, add HWS get capabilities
Date: Thu, 9 Jan 2025 18:05:43 +0200
Message-ID: <20250109160546.1733647-13-tariqt@nvidia.com>
In-Reply-To: <20250109160546.1733647-1-tariqt@nvidia.com>
References: <20250109160546.1733647-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Moshe Shemesh

Add API function get capabilities to HW Steering flow commands.

Signed-off-by: Moshe Shemesh
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
---
 .../mellanox/mlx5/core/steering/hws/fs_hws.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
index dd9afde60070..ccee230b3992 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
@@ -1333,6 +1333,17 @@ static int mlx5_cmd_hws_destroy_match_definer(struct mlx5_flow_root_namespace *n
     return -EOPNOTSUPP;
 }
 
+static u32 mlx5_cmd_hws_get_capabilities(struct mlx5_flow_root_namespace *ns,
+                                         enum fs_flow_table_type ft_type)
+{
+    if (ft_type != FS_FT_FDB)
+        return 0;
+
+    return MLX5_FLOW_STEERING_CAP_VLAN_PUSH_ON_RX |
+           MLX5_FLOW_STEERING_CAP_VLAN_POP_ON_TX |
+           MLX5_FLOW_STEERING_CAP_MATCH_RANGES;
+}
+
 static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = {
     .create_flow_table = mlx5_cmd_hws_create_flow_table,
     .destroy_flow_table = mlx5_cmd_hws_destroy_flow_table,
@@ -1352,6 +1363,7 @@ static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = {
     .create_ns = mlx5_cmd_hws_create_ns,
     .destroy_ns = mlx5_cmd_hws_destroy_ns,
     .set_peer = mlx5_cmd_hws_set_peer,
+    .get_capabilities = mlx5_cmd_hws_get_capabilities,
 };
 
 const struct mlx5_flow_cmds *mlx5_fs_cmd_get_hws_cmds(void)
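get_capabilities reports feature bits per flow table type, and only FDB tables advertise the VLAN push/pop and match-range capabilities here; the core flow steering code can then test individual bits before relying on a feature. A short sketch of how such a bitmask is typically produced and consumed, with invented flag values:

/* Illustration only: flag values and names are made up. */
#include <stdint.h>
#include <stdbool.h>

#define CAP_VLAN_PUSH_ON_RX  (1u << 0)
#define CAP_VLAN_POP_ON_TX   (1u << 1)
#define CAP_MATCH_RANGES     (1u << 2)

static uint32_t backend_get_caps(bool is_fdb_table)
{
    if (!is_fdb_table)
        return 0;   /* non-FDB tables expose no extra features */
    return CAP_VLAN_PUSH_ON_RX | CAP_VLAN_POP_ON_TX | CAP_MATCH_RANGES;
}

static bool supports_match_ranges(bool is_fdb_table)
{
    return backend_get_caps(is_fdb_table) & CAP_MATCH_RANGES;
}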
From patchwork Thu Jan 9 16:05:44 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13932966
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn"
CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , Mark Bloch , Moshe Shemesh , Yevgeny Kliteynik , Tariq Toukan
Subject: [PATCH net-next V2 13/15] net/mlx5: fs, add HWS to steering mode options
Date: Thu, 9 Jan 2025 18:05:44 +0200
Message-ID: <20250109160546.1733647-14-tariqt@nvidia.com>
In-Reply-To: <20250109160546.1733647-1-tariqt@nvidia.com>
References: <20250109160546.1733647-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Moshe Shemesh

Add HW Steering mode to mlx5 devlink param of steering mode options.

Signed-off-by: Moshe Shemesh
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
---
 Documentation/networking/devlink/mlx5.rst     |  3 ++
 .../net/ethernet/mellanox/mlx5/core/fs_core.c | 50 +++++++++++++------
 .../mellanox/mlx5/core/steering/hws/fs_hws.c  |  5 ++
 .../mellanox/mlx5/core/steering/hws/fs_hws.h  |  7 +++
 4 files changed, 49 insertions(+), 16 deletions(-)

diff --git a/Documentation/networking/devlink/mlx5.rst b/Documentation/networking/devlink/mlx5.rst
index 456985407475..41618538fc70 100644
--- a/Documentation/networking/devlink/mlx5.rst
+++ b/Documentation/networking/devlink/mlx5.rst
@@ -53,6 +53,9 @@ parameters.
     * ``smfs`` Software managed flow steering. In SMFS mode, the HW
       steering entities are created and manage through the driver without
       firmware intervention.
+    * ``hmfs`` Hardware managed flow steering. In HMFS mode, the driver
+      is configuring steering rules directly to the HW using Work Queues with
+      a special new type of WQE (Work Queue Element).
 
   SMFS mode is faster and provides better rule insertion rate compared to
   default DMFS mode.
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
index 41b5e98a0495..f43fd96a680d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
@@ -3535,35 +3535,42 @@ static int mlx5_fs_mode_validate(struct devlink *devlink, u32 id,
 {
     struct mlx5_core_dev *dev = devlink_priv(devlink);
     char *value = val.vstr;
-    int err = 0;
+    u8 eswitch_mode;
 
-    if (!strcmp(value, "dmfs")) {
+    if (!strcmp(value, "dmfs"))
         return 0;
-    } else if (!strcmp(value, "smfs")) {
-        u8 eswitch_mode;
-        bool smfs_cap;
 
-        eswitch_mode = mlx5_eswitch_mode(dev);
-        smfs_cap = mlx5_fs_dr_is_supported(dev);
+    if (!strcmp(value, "smfs")) {
+        bool smfs_cap = mlx5_fs_dr_is_supported(dev);
 
         if (!smfs_cap) {
-            err = -EOPNOTSUPP;
             NL_SET_ERR_MSG_MOD(extack,
                                "Software managed steering is not supported by current device");
+            return -EOPNOTSUPP;
         }
+    } else if (!strcmp(value, "hmfs")) {
+        bool hmfs_cap = mlx5_fs_hws_is_supported(dev);
 
-        else if (eswitch_mode == MLX5_ESWITCH_OFFLOADS) {
+        if (!hmfs_cap) {
             NL_SET_ERR_MSG_MOD(extack,
-                               "Software managed steering is not supported when eswitch offloads enabled.");
-            err = -EOPNOTSUPP;
+                               "Hardware steering is not supported by current device");
+            return -EOPNOTSUPP;
         }
     } else {
         NL_SET_ERR_MSG_MOD(extack,
-                           "Bad parameter: supported values are [\"dmfs\", \"smfs\"]");
-        err = -EINVAL;
+                           "Bad parameter: supported values are [\"dmfs\", \"smfs\", \"hmfs\"]");
+        return -EINVAL;
     }
 
-    return err;
+    eswitch_mode = mlx5_eswitch_mode(dev);
+    if (eswitch_mode == MLX5_ESWITCH_OFFLOADS) {
+        NL_SET_ERR_MSG_FMT_MOD(extack,
+                               "Moving to %s is not supported when eswitch offloads enabled.",
+                               value);
+        return -EOPNOTSUPP;
+    }
+
+    return 0;
 }
 
 static int mlx5_fs_mode_set(struct devlink *devlink, u32 id,
@@ -3575,6 +3582,8 @@ static int mlx5_fs_mode_set(struct devlink *devlink, u32 id,
     if (!strcmp(ctx->val.vstr, "smfs"))
         mode = MLX5_FLOW_STEERING_MODE_SMFS;
+    else if (!strcmp(ctx->val.vstr, "hmfs"))
+        mode = MLX5_FLOW_STEERING_MODE_HMFS;
     else
         mode = MLX5_FLOW_STEERING_MODE_DMFS;
     dev->priv.steering->mode = mode;
@@ -3587,10 +3596,17 @@ static int mlx5_fs_mode_get(struct devlink *devlink, u32 id,
 {
     struct mlx5_core_dev *dev = devlink_priv(devlink);
 
-    if (dev->priv.steering->mode == MLX5_FLOW_STEERING_MODE_SMFS)
+    switch (dev->priv.steering->mode) {
+    case MLX5_FLOW_STEERING_MODE_SMFS:
         strscpy(ctx->val.vstr, "smfs", sizeof(ctx->val.vstr));
-    else
+        break;
+    case MLX5_FLOW_STEERING_MODE_HMFS:
+        strscpy(ctx->val.vstr, "hmfs", sizeof(ctx->val.vstr));
+        break;
+    default:
         strscpy(ctx->val.vstr, "dmfs", sizeof(ctx->val.vstr));
+    }
+
     return 0;
 }
 
@@ -4009,6 +4025,8 @@ int mlx5_flow_namespace_set_mode(struct mlx5_flow_namespace *ns,
 
     if (mode == MLX5_FLOW_STEERING_MODE_SMFS)
         cmds = mlx5_fs_cmd_get_dr_cmds();
+    else if (mode == MLX5_FLOW_STEERING_MODE_HMFS)
+        cmds = mlx5_fs_cmd_get_hws_cmds();
     else
         cmds = mlx5_fs_cmd_get_fw_cmds();
     if (!cmds)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
index ccee230b3992..05329afeb9ea 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
@@ -1344,6 +1344,11 @@ static u32 mlx5_cmd_hws_get_capabilities(struct mlx5_flow_root_namespace *ns,
            MLX5_FLOW_STEERING_CAP_MATCH_RANGES;
 }
 
+bool mlx5_fs_hws_is_supported(struct mlx5_core_dev *dev)
+{
+    return mlx5hws_is_supported(dev);
+}
+
 static const struct mlx5_flow_cmds mlx5_flow_cmds_hws = {
     .create_flow_table = mlx5_cmd_hws_create_flow_table,
     .destroy_flow_table = mlx5_cmd_hws_destroy_flow_table,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
index 9e970ac75d2a..cbddb72d4362 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.h
@@ -60,10 +60,17 @@ struct mlx5_fs_hws_rule {
 
 #ifdef CONFIG_MLX5_HW_STEERING
 
+bool mlx5_fs_hws_is_supported(struct mlx5_core_dev *dev);
+
 const struct mlx5_flow_cmds *mlx5_fs_cmd_get_hws_cmds(void);
 
 #else
 
+static inline bool mlx5_fs_hws_is_supported(struct mlx5_core_dev *dev)
+{
+    return false;
+}
+
 static inline const struct mlx5_flow_cmds *mlx5_fs_cmd_get_hws_cmds(void)
 {
     return NULL;
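With hmfs accepted by the validate/set/get callbacks, the steering mode becomes selectable at runtime through the mlx5 devlink parameter, as long as the eswitch is not in offloads mode. A usage sketch, assuming the parameter keeps its documented flow_steering_mode name and using a hypothetical PCI address:

$ devlink dev param set pci/0000:08:00.0 name flow_steering_mode value hmfs cmode runtime
$ devlink dev param show pci/0000:08:00.0 name flow_steering_mode

Reading the parameter back exercises the new switch in mlx5_fs_mode_get, which now reports hmfs alongside dmfs and smfs.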
bh=vJ9YMUBmeJXkyQ3rt/CMsZRkqdX4QezPohho/OmKXhU=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=bDe+NY2XWyTIoiqB5fhYVXtgaO/C66c4RM4mtVNAZ8eUzoCB54O8ydUZ6ORVECBuu4NRPdM0UuyABWsUltv0oz6kQBYw3lfS1ZmcgIHR8DUCC/vkYhVvsGf0d9i7QpyTz4R+dxjfLp9m5LZz4qsJskXYVNd6QLxU4oU4zwinLTI= ARC-Authentication-Results: i=2; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=nvidia.com; spf=fail smtp.mailfrom=nvidia.com; dkim=pass (2048-bit key) header.d=Nvidia.com header.i=@Nvidia.com header.b=K8Ef8EpV; arc=fail smtp.client-ip=40.107.95.44 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=nvidia.com Authentication-Results: smtp.subspace.kernel.org; spf=fail smtp.mailfrom=nvidia.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=Nvidia.com header.i=@Nvidia.com header.b="K8Ef8EpV" ARC-Seal: i=1; a=rsa-sha256; s=arcselector10001; d=microsoft.com; cv=none; b=gknp3gqY5uG9qT3oRA1dzjeZ2cqtnEjdPmiv2W/fKwjByLp6CRSvLkupaYUSVdZm++vHe8TXiYthjbC90HAXagBm+fbRY0xjjWnFuxmY9pIN0UWqFVuVFl708Nye+p/S7dAK/9t/tFm2JuczUL09/22vqXus73Ls1gglNVm/Kf7YHIm+9X0UAS28/uAac9VSvzM+NBQdjOojFg/mJ3zsKy5M+2unJFNgJELg+ECqIX16hxlouHp0ZSlcSCKXKtc89XjSmglp2Qzd81+ejfY6giYFGpAP1yaXORNVlugTIHDGRMWmtujPWz5qAnz7qbTBbzPut+PQ+cOfAR5vMsbLyg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector10001; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=BxlhVVwzUzU6H3AU1wo9Nu+ctHyeEcobJRCG4DR18Xw=; b=VhW9K0KS6EkRacNgsz3bHf8C6e3GnrWkqzPcgFYhLIlzm0eeX0vQjjFHItPQggaFGNpM3sAyYcc5Vp4FSPBmMRJPJu8BhC7wU7iV2M+FyV6vDDOtl/AJIUR5Z81NCHVz9Fbm7X1TAR30FSJQvlTDb4ub7E0ANseAjD2M2qm0NMfasLm2jcyS5Qy85J793e5ow013QSgkA0z9EV9E41vXeemKHq2+M3Fo/0MYSfmg+3ggRfFIUM/lul1zmLXtmWJ0QKHW2MFvs7RzuiY3rvyii4xypo7OmkC4DPREN9kGC1A96d5YO+2p/Dy9+eXEDfG0aOj14ypFbFIPp7qd74zz+A== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.117.160) smtp.rcpttodomain=davemloft.net smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none (0) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=BxlhVVwzUzU6H3AU1wo9Nu+ctHyeEcobJRCG4DR18Xw=; b=K8Ef8EpVccEifIsGZPZMjs8QGc1Y+qGud0a9pjTI8ZUEt0XEtqyu9KO6/Luhl1HJx7szn+zfgklmO/76JwNZGKNMTBPnimBwJs1gKIjP4kpjlsELUXrKoFehJ25A8mEFqYxknFp2/lkWVp7w4wK3pWernQduzafYtDMSWYFGnvLrk8U4ySAMFeVb0Gk37S+c136MPD29vAKfk070Ty699QwDxZPJB4b5CS2cZiMdkRu/r0BFLoskZlf6Vt34yWXeSIRIZ6NG5MWuHMs4y58zU72J9mX/YowW30j9/RNnFfq8uJKtoc1zOO8MPJ04SPjTkswmAqY3Z0q7YYBG3UpLog== Received: from DM6PR12CA0009.namprd12.prod.outlook.com (2603:10b6:5:1c0::22) by PH7PR12MB7114.namprd12.prod.outlook.com (2603:10b6:510:1ed::11) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.8293.15; Thu, 9 Jan 2025 16:07:56 +0000 Received: from DS1PEPF00017092.namprd03.prod.outlook.com (2603:10b6:5:1c0:cafe::84) by DM6PR12CA0009.outlook.office365.com (2603:10b6:5:1c0::22) with Microsoft SMTP Server (version=TLS1_3, cipher=TLS_AES_256_GCM_SHA384) id 15.20.8335.12 via Frontend Transport; Thu, 9 Jan 2025 16:07:56 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 216.228.117.160) smtp.mailfrom=nvidia.com; dkim=none (message not signed) 
header.d=none;dmarc=pass action=none header.from=nvidia.com; Received-SPF: Pass (protection.outlook.com: domain of nvidia.com designates 216.228.117.160 as permitted sender) receiver=protection.outlook.com; client-ip=216.228.117.160; helo=mail.nvidia.com; pr=C Received: from mail.nvidia.com (216.228.117.160) by DS1PEPF00017092.mail.protection.outlook.com (10.167.17.135) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.8335.7 via Frontend Transport; Thu, 9 Jan 2025 16:07:55 +0000 Received: from rnnvmail201.nvidia.com (10.129.68.8) by mail.nvidia.com (10.129.200.66) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Thu, 9 Jan 2025 08:07:44 -0800 Received: from rnnvmail201.nvidia.com (10.129.68.8) by rnnvmail201.nvidia.com (10.129.68.8) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Thu, 9 Jan 2025 08:07:43 -0800 Received: from vdi.nvidia.com (10.127.8.10) by mail.nvidia.com (10.129.68.8) with Microsoft SMTP Server id 15.2.1544.4 via Frontend Transport; Thu, 9 Jan 2025 08:07:40 -0800 From: Tariq Toukan To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn" CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , Mark Bloch , Moshe Shemesh , Yevgeny Kliteynik , Vlad Dogaru , Tariq Toukan Subject: [PATCH net-next V2 14/15] net/mlx5: HWS, update flow - remove the use of dual RTCs Date: Thu, 9 Jan 2025 18:05:45 +0200 Message-ID: <20250109160546.1733647-15-tariqt@nvidia.com> X-Mailer: git-send-email 2.45.0 In-Reply-To: <20250109160546.1733647-1-tariqt@nvidia.com> References: <20250109160546.1733647-1-tariqt@nvidia.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-NV-OnPremToCloud: AnonymousSubmission X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: DS1PEPF00017092:EE_|PH7PR12MB7114:EE_ X-MS-Office365-Filtering-Correlation-Id: 7a170f38-1f8b-4819-9c77-08dd30c7c77a X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0;ARA:13230040|36860700013|376014|82310400026|1800799024; X-Microsoft-Antispam-Message-Info: 
X-Patchwork-Delegate: kuba@kernel.org From: Yevgeny Kliteynik This patch is the first part of the update flow implementation. Update flow should support rules with a single STE (match STE only), as well as rules with multiple STEs (match STE plus action STEs). Supporting rules with a single STE is straightforward: we just overwrite the STE, which is an atomic operation. Supporting rules with action STEs is a more complicated case. The existing implementation uses two action RTCs per matcher and alternates between them on each update request. This was unnecessarily complex and led to unhandled edge cases, so rule update with multiple STEs was never really functional. This patch removes that code; the next patch adds an implementation of a different approach. Note that after applying this patch and before applying the next one, updating a rule with only a match STE (no action STEs) still works, but update will fail for rules with action STEs.
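To illustrate the consolidation described above (and shown in the diff below) outside of the diff context, here is a minimal, self-contained sketch. The toy_* types and names are invented for illustration and are not the driver's structs; the real logic lives in hws_matcher_bind_at() in matcher.c. The point is that the matcher now keeps a single action-STE set, sized by the maximum number of action STEs required by any of its action templates, instead of two alternating sets.

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-ins -- illustrative only, not the mlx5hws driver types. */
struct toy_action_template {
    unsigned int num_of_action_stes;
    bool only_term;
};

struct toy_action_ste_set {         /* single set per matcher after this patch */
    unsigned int max_stes;
};

struct toy_matcher {
    struct toy_action_template at[4];
    unsigned int num_of_at;
    bool is_jumbo;
    struct toy_action_ste_set action_ste;   /* was action_ste[2] before */
};

/* Mirrors the max-STE computation folded into hws_matcher_bind_at():
 * one STE budget is computed across all action templates and a single
 * action-STE set (pool + RTC pair) is sized from it.
 */
static void toy_bind_action_stes(struct toy_matcher *m)
{
    unsigned int max_stes = 0, required, i;

    for (i = 0; i < m->num_of_at; i++) {
        const struct toy_action_template *at = &m->at[i];

        required = at->num_of_action_stes - (!m->is_jumbo || at->only_term);
        if (required > max_stes)
            max_stes = required;
    }

    m->action_ste.max_stes = max_stes;  /* single set, no [0]/[1] selector */
}

int main(void)
{
    struct toy_matcher m = {
        .at = { { .num_of_action_stes = 3 }, { .num_of_action_stes = 5 } },
        .num_of_at = 2,
        .is_jumbo = false,
    };

    toy_bind_action_stes(&m);
    printf("max action STEs per rule: %u\n", m.action_ste.max_stes);
    return 0;
}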
Signed-off-by: Yevgeny Kliteynik Signed-off-by: Vlad Dogaru Reviewed-by: Mark Bloch Signed-off-by: Tariq Toukan --- .../mellanox/mlx5/core/steering/hws/debug.c | 10 +- .../mlx5/core/steering/hws/internal.h | 1 - .../mellanox/mlx5/core/steering/hws/matcher.c | 170 +++++++----------- .../mellanox/mlx5/core/steering/hws/matcher.h | 8 +- .../mellanox/mlx5/core/steering/hws/rule.c | 73 ++------ .../mellanox/mlx5/core/steering/hws/rule.h | 3 +- 6 files changed, 81 insertions(+), 184 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/debug.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/debug.c index 60ada3143d60..696275fd0ce2 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/debug.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/debug.c @@ -148,8 +148,8 @@ static int hws_debug_dump_matcher(struct seq_file *f, struct mlx5hws_matcher *ma matcher->match_ste.rtc_1_id, (int)ste_1_id); - ste = &matcher->action_ste[0].ste; - ste_pool = matcher->action_ste[0].pool; + ste = &matcher->action_ste.ste; + ste_pool = matcher->action_ste.pool; if (ste_pool) { ste_0_id = mlx5hws_pool_chunk_get_base_id(ste_pool, ste); if (tbl_type == MLX5HWS_TABLE_TYPE_FDB) @@ -171,10 +171,8 @@ static int hws_debug_dump_matcher(struct seq_file *f, struct mlx5hws_matcher *ma return ret; seq_printf(f, ",%d,%d,%d,%d,%d,0x%llx,0x%llx\n", - matcher->action_ste[0].rtc_0_id, - (int)ste_0_id, - matcher->action_ste[0].rtc_1_id, - (int)ste_1_id, + matcher->action_ste.rtc_0_id, (int)ste_0_id, + matcher->action_ste.rtc_1_id, (int)ste_1_id, 0, mlx5hws_debug_icm_to_idx(icm_addr_0), mlx5hws_debug_icm_to_idx(icm_addr_1)); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/internal.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/internal.h index 3c8635f286ce..30ccd635b505 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/internal.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/internal.h @@ -39,7 +39,6 @@ #define mlx5hws_dbg(ctx, arg...) 
mlx5_core_dbg((ctx)->mdev, ##arg) #define MLX5HWS_TABLE_TYPE_BASE 2 -#define MLX5HWS_ACTION_STE_IDX_ANY 0 static inline bool is_mem_zero(const u8 *mem, size_t size) { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c index 4419c72ad314..74a03fbabcf7 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c @@ -200,7 +200,7 @@ static void hws_matcher_set_rtc_attr_sz(struct mlx5hws_matcher *matcher, enum mlx5hws_matcher_rtc_type rtc_type, bool is_mirror) { - struct mlx5hws_pool_chunk *ste = &matcher->action_ste[MLX5HWS_ACTION_STE_IDX_ANY].ste; + struct mlx5hws_pool_chunk *ste = &matcher->action_ste.ste; enum mlx5hws_matcher_flow_src flow_src = matcher->attr.optimize_flow_src; bool is_match_rtc = rtc_type == HWS_MATCHER_RTC_TYPE_MATCH; @@ -217,8 +217,7 @@ static void hws_matcher_set_rtc_attr_sz(struct mlx5hws_matcher *matcher, } static int hws_matcher_create_rtc(struct mlx5hws_matcher *matcher, - enum mlx5hws_matcher_rtc_type rtc_type, - u8 action_ste_selector) + enum mlx5hws_matcher_rtc_type rtc_type) { struct mlx5hws_matcher_attr *attr = &matcher->attr; struct mlx5hws_cmd_rtc_create_attr rtc_attr = {0}; @@ -278,7 +277,7 @@ static int hws_matcher_create_rtc(struct mlx5hws_matcher *matcher, break; case HWS_MATCHER_RTC_TYPE_STE_ARRAY: - action_ste = &matcher->action_ste[action_ste_selector]; + action_ste = &matcher->action_ste; rtc_0_id = &action_ste->rtc_0_id; rtc_1_id = &action_ste->rtc_1_id; @@ -350,8 +349,7 @@ static int hws_matcher_create_rtc(struct mlx5hws_matcher *matcher, } static void hws_matcher_destroy_rtc(struct mlx5hws_matcher *matcher, - enum mlx5hws_matcher_rtc_type rtc_type, - u8 action_ste_selector) + enum mlx5hws_matcher_rtc_type rtc_type) { struct mlx5hws_matcher_action_ste *action_ste; struct mlx5hws_table *tbl = matcher->tbl; @@ -367,7 +365,7 @@ static void hws_matcher_destroy_rtc(struct mlx5hws_matcher *matcher, ste = &matcher->match_ste.ste; break; case HWS_MATCHER_RTC_TYPE_STE_ARRAY: - action_ste = &matcher->action_ste[action_ste_selector]; + action_ste = &matcher->action_ste; rtc_0_id = action_ste->rtc_0_id; rtc_1_id = action_ste->rtc_1_id; ste_pool = action_ste->pool; @@ -458,20 +456,13 @@ static int hws_matcher_resize_init(struct mlx5hws_matcher *src_matcher) if (!resize_data) return -ENOMEM; - resize_data->max_stes = src_matcher->action_ste[MLX5HWS_ACTION_STE_IDX_ANY].max_stes; - - resize_data->action_ste[0].stc = src_matcher->action_ste[0].stc; - resize_data->action_ste[0].rtc_0_id = src_matcher->action_ste[0].rtc_0_id; - resize_data->action_ste[0].rtc_1_id = src_matcher->action_ste[0].rtc_1_id; - resize_data->action_ste[0].pool = src_matcher->action_ste[0].max_stes ? - src_matcher->action_ste[0].pool : - NULL; - resize_data->action_ste[1].stc = src_matcher->action_ste[1].stc; - resize_data->action_ste[1].rtc_0_id = src_matcher->action_ste[1].rtc_0_id; - resize_data->action_ste[1].rtc_1_id = src_matcher->action_ste[1].rtc_1_id; - resize_data->action_ste[1].pool = src_matcher->action_ste[1].max_stes ? - src_matcher->action_ste[1].pool : - NULL; + resize_data->max_stes = src_matcher->action_ste.max_stes; + + resize_data->stc = src_matcher->action_ste.stc; + resize_data->rtc_0_id = src_matcher->action_ste.rtc_0_id; + resize_data->rtc_1_id = src_matcher->action_ste.rtc_1_id; + resize_data->pool = src_matcher->action_ste.max_stes ? 
+ src_matcher->action_ste.pool : NULL; /* Place the new resized matcher on the dst matcher's list */ list_add(&resize_data->list_node, &src_matcher->resize_dst->resize_data); @@ -504,42 +495,60 @@ static void hws_matcher_resize_uninit(struct mlx5hws_matcher *matcher) if (resize_data->max_stes) { mlx5hws_action_free_single_stc(matcher->tbl->ctx, matcher->tbl->type, - &resize_data->action_ste[1].stc); - mlx5hws_action_free_single_stc(matcher->tbl->ctx, - matcher->tbl->type, - &resize_data->action_ste[0].stc); + &resize_data->stc); - if (matcher->tbl->type == MLX5HWS_TABLE_TYPE_FDB) { + if (matcher->tbl->type == MLX5HWS_TABLE_TYPE_FDB) mlx5hws_cmd_rtc_destroy(matcher->tbl->ctx->mdev, - resize_data->action_ste[1].rtc_1_id); - mlx5hws_cmd_rtc_destroy(matcher->tbl->ctx->mdev, - resize_data->action_ste[0].rtc_1_id); - } - mlx5hws_cmd_rtc_destroy(matcher->tbl->ctx->mdev, - resize_data->action_ste[1].rtc_0_id); + resize_data->rtc_1_id); + mlx5hws_cmd_rtc_destroy(matcher->tbl->ctx->mdev, - resize_data->action_ste[0].rtc_0_id); - if (resize_data->action_ste[MLX5HWS_ACTION_STE_IDX_ANY].pool) { - mlx5hws_pool_destroy(resize_data->action_ste[1].pool); - mlx5hws_pool_destroy(resize_data->action_ste[0].pool); - } + resize_data->rtc_0_id); + + if (resize_data->pool) + mlx5hws_pool_destroy(resize_data->pool); } kfree(resize_data); } } -static int -hws_matcher_bind_at_idx(struct mlx5hws_matcher *matcher, u8 action_ste_selector) +static int hws_matcher_bind_at(struct mlx5hws_matcher *matcher) { + bool is_jumbo = mlx5hws_matcher_mt_is_jumbo(matcher->mt); struct mlx5hws_cmd_stc_modify_attr stc_attr = {0}; struct mlx5hws_matcher_action_ste *action_ste; struct mlx5hws_table *tbl = matcher->tbl; struct mlx5hws_pool_attr pool_attr = {0}; struct mlx5hws_context *ctx = tbl->ctx; - int ret; + u32 required_stes; + u8 max_stes = 0; + int i, ret; - action_ste = &matcher->action_ste[action_ste_selector]; + if (matcher->flags & MLX5HWS_MATCHER_FLAGS_COLLISION) + return 0; + + for (i = 0; i < matcher->num_of_at; i++) { + struct mlx5hws_action_template *at = &matcher->at[i]; + + ret = hws_matcher_check_and_process_at(matcher, at); + if (ret) { + mlx5hws_err(ctx, "Invalid at %d", i); + return ret; + } + + required_stes = at->num_of_action_stes - (!is_jumbo || at->only_term); + max_stes = max(max_stes, required_stes); + + /* Future: Optimize reparse */ + } + + /* There are no additional STEs required for matcher */ + if (!max_stes) + return 0; + + matcher->action_ste.max_stes = max_stes; + + action_ste = &matcher->action_ste; /* Allocate action STE mempool */ pool_attr.table_type = tbl->type; @@ -555,7 +564,7 @@ hws_matcher_bind_at_idx(struct mlx5hws_matcher *matcher, u8 action_ste_selector) } /* Allocate action RTC */ - ret = hws_matcher_create_rtc(matcher, HWS_MATCHER_RTC_TYPE_STE_ARRAY, action_ste_selector); + ret = hws_matcher_create_rtc(matcher, HWS_MATCHER_RTC_TYPE_STE_ARRAY); if (ret) { mlx5hws_err(ctx, "Failed to create action RTC\n"); goto free_ste_pool; @@ -579,18 +588,18 @@ hws_matcher_bind_at_idx(struct mlx5hws_matcher *matcher, u8 action_ste_selector) return 0; free_rtc: - hws_matcher_destroy_rtc(matcher, HWS_MATCHER_RTC_TYPE_STE_ARRAY, action_ste_selector); + hws_matcher_destroy_rtc(matcher, HWS_MATCHER_RTC_TYPE_STE_ARRAY); free_ste_pool: mlx5hws_pool_destroy(action_ste->pool); return ret; } -static void hws_matcher_unbind_at_idx(struct mlx5hws_matcher *matcher, u8 action_ste_selector) +static void hws_matcher_unbind_at(struct mlx5hws_matcher *matcher) { struct mlx5hws_matcher_action_ste *action_ste; struct 
mlx5hws_table *tbl = matcher->tbl; - action_ste = &matcher->action_ste[action_ste_selector]; + action_ste = &matcher->action_ste; if (!action_ste->max_stes || matcher->flags & MLX5HWS_MATCHER_FLAGS_COLLISION || @@ -598,65 +607,10 @@ static void hws_matcher_unbind_at_idx(struct mlx5hws_matcher *matcher, u8 action return; mlx5hws_action_free_single_stc(tbl->ctx, tbl->type, &action_ste->stc); - hws_matcher_destroy_rtc(matcher, HWS_MATCHER_RTC_TYPE_STE_ARRAY, action_ste_selector); + hws_matcher_destroy_rtc(matcher, HWS_MATCHER_RTC_TYPE_STE_ARRAY); mlx5hws_pool_destroy(action_ste->pool); } -static int hws_matcher_bind_at(struct mlx5hws_matcher *matcher) -{ - bool is_jumbo = mlx5hws_matcher_mt_is_jumbo(matcher->mt); - struct mlx5hws_table *tbl = matcher->tbl; - struct mlx5hws_context *ctx = tbl->ctx; - u32 required_stes; - u8 max_stes = 0; - int i, ret; - - if (matcher->flags & MLX5HWS_MATCHER_FLAGS_COLLISION) - return 0; - - for (i = 0; i < matcher->num_of_at; i++) { - struct mlx5hws_action_template *at = &matcher->at[i]; - - ret = hws_matcher_check_and_process_at(matcher, at); - if (ret) { - mlx5hws_err(ctx, "Invalid at %d", i); - return ret; - } - - required_stes = at->num_of_action_stes - (!is_jumbo || at->only_term); - max_stes = max(max_stes, required_stes); - - /* Future: Optimize reparse */ - } - - /* There are no additional STEs required for matcher */ - if (!max_stes) - return 0; - - matcher->action_ste[0].max_stes = max_stes; - matcher->action_ste[1].max_stes = max_stes; - - ret = hws_matcher_bind_at_idx(matcher, 0); - if (ret) - return ret; - - ret = hws_matcher_bind_at_idx(matcher, 1); - if (ret) - goto free_at_0; - - return 0; - -free_at_0: - hws_matcher_unbind_at_idx(matcher, 0); - return ret; -} - -static void hws_matcher_unbind_at(struct mlx5hws_matcher *matcher) -{ - hws_matcher_unbind_at_idx(matcher, 1); - hws_matcher_unbind_at_idx(matcher, 0); -} - static int hws_matcher_bind_mt(struct mlx5hws_matcher *matcher) { struct mlx5hws_context *ctx = matcher->tbl->ctx; @@ -802,7 +756,7 @@ static int hws_matcher_create_and_connect(struct mlx5hws_matcher *matcher) goto unbind_at; /* Allocate the RTC for the new matcher */ - ret = hws_matcher_create_rtc(matcher, HWS_MATCHER_RTC_TYPE_MATCH, 0); + ret = hws_matcher_create_rtc(matcher, HWS_MATCHER_RTC_TYPE_MATCH); if (ret) goto destroy_end_ft; @@ -814,7 +768,7 @@ static int hws_matcher_create_and_connect(struct mlx5hws_matcher *matcher) return 0; destroy_rtc: - hws_matcher_destroy_rtc(matcher, HWS_MATCHER_RTC_TYPE_MATCH, 0); + hws_matcher_destroy_rtc(matcher, HWS_MATCHER_RTC_TYPE_MATCH); destroy_end_ft: hws_matcher_destroy_end_ft(matcher); unbind_at: @@ -828,7 +782,7 @@ static void hws_matcher_destroy_and_disconnect(struct mlx5hws_matcher *matcher) { hws_matcher_resize_uninit(matcher); hws_matcher_disconnect(matcher); - hws_matcher_destroy_rtc(matcher, HWS_MATCHER_RTC_TYPE_MATCH, 0); + hws_matcher_destroy_rtc(matcher, HWS_MATCHER_RTC_TYPE_MATCH); hws_matcher_destroy_end_ft(matcher); hws_matcher_unbind_at(matcher); hws_matcher_unbind_mt(matcher); @@ -962,10 +916,9 @@ int mlx5hws_matcher_attach_at(struct mlx5hws_matcher *matcher, return ret; required_stes = at->num_of_action_stes - (!is_jumbo || at->only_term); - if (matcher->action_ste[MLX5HWS_ACTION_STE_IDX_ANY].max_stes < required_stes) { + if (matcher->action_ste.max_stes < required_stes) { mlx5hws_dbg(ctx, "Required STEs [%d] exceeds initial action template STE [%d]\n", - required_stes, - matcher->action_ste[MLX5HWS_ACTION_STE_IDX_ANY].max_stes); + required_stes, 
matcher->action_ste.max_stes); return -ENOMEM; } @@ -1149,8 +1102,7 @@ static int hws_matcher_resize_precheck(struct mlx5hws_matcher *src_matcher, return -EINVAL; } - if (src_matcher->action_ste[MLX5HWS_ACTION_STE_IDX_ANY].max_stes > - dst_matcher->action_ste[0].max_stes) { + if (src_matcher->action_ste.max_stes > dst_matcher->action_ste.max_stes) { mlx5hws_err(ctx, "Src/dst matcher max STEs mismatch\n"); return -EINVAL; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h index 81ff487f57be..cff4ae854a79 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h @@ -52,15 +52,11 @@ struct mlx5hws_matcher_action_ste { u8 max_stes; }; -struct mlx5hws_matcher_resize_data_node { +struct mlx5hws_matcher_resize_data { struct mlx5hws_pool_chunk stc; u32 rtc_0_id; u32 rtc_1_id; struct mlx5hws_pool *pool; -}; - -struct mlx5hws_matcher_resize_data { - struct mlx5hws_matcher_resize_data_node action_ste[2]; u8 max_stes; struct list_head list_node; }; @@ -78,7 +74,7 @@ struct mlx5hws_matcher { struct mlx5hws_matcher *col_matcher; struct mlx5hws_matcher *resize_dst; struct mlx5hws_matcher_match_ste match_ste; - struct mlx5hws_matcher_action_ste action_ste[2]; + struct mlx5hws_matcher_action_ste action_ste; struct list_head list_node; struct list_head resize_data; }; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.c index 14f6307a1772..699a73ed2fd7 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.c @@ -142,14 +142,9 @@ hws_rule_save_resize_info(struct mlx5hws_rule *rule, return; } - rule->resize_info->max_stes = - rule->matcher->action_ste[MLX5HWS_ACTION_STE_IDX_ANY].max_stes; - rule->resize_info->action_ste_pool[0] = rule->matcher->action_ste[0].max_stes ? - rule->matcher->action_ste[0].pool : - NULL; - rule->resize_info->action_ste_pool[1] = rule->matcher->action_ste[1].max_stes ? - rule->matcher->action_ste[1].pool : - NULL; + rule->resize_info->max_stes = rule->matcher->action_ste.max_stes; + rule->resize_info->action_ste_pool = rule->matcher->action_ste.max_stes ? 
+ rule->matcher->action_ste.pool : NULL; } memcpy(rule->resize_info->ctrl_seg, ste_attr->wqe_ctrl, @@ -204,15 +199,15 @@ hws_rule_load_delete_info(struct mlx5hws_rule *rule, } } -static int hws_rule_alloc_action_ste_idx(struct mlx5hws_rule *rule, - u8 action_ste_selector) +static int hws_rule_alloc_action_ste(struct mlx5hws_rule *rule, + struct mlx5hws_rule_attr *attr) { struct mlx5hws_matcher *matcher = rule->matcher; struct mlx5hws_matcher_action_ste *action_ste; struct mlx5hws_pool_chunk ste = {0}; int ret; - action_ste = &matcher->action_ste[action_ste_selector]; + action_ste = &matcher->action_ste; ste.order = ilog2(roundup_pow_of_two(action_ste->max_stes)); ret = mlx5hws_pool_chunk_alloc(action_ste->pool, &ste); if (unlikely(ret)) { @@ -225,8 +220,7 @@ static int hws_rule_alloc_action_ste_idx(struct mlx5hws_rule *rule, return 0; } -static void hws_rule_free_action_ste_idx(struct mlx5hws_rule *rule, - u8 action_ste_selector) +void mlx5hws_rule_free_action_ste(struct mlx5hws_rule *rule) { struct mlx5hws_matcher *matcher = rule->matcher; struct mlx5hws_pool_chunk ste = {0}; @@ -236,10 +230,10 @@ static void hws_rule_free_action_ste_idx(struct mlx5hws_rule *rule, if (mlx5hws_matcher_is_resizable(matcher)) { /* Free the original action pool if rule was resized */ max_stes = rule->resize_info->max_stes; - pool = rule->resize_info->action_ste_pool[action_ste_selector]; + pool = rule->resize_info->action_ste_pool; } else { - max_stes = matcher->action_ste[action_ste_selector].max_stes; - pool = matcher->action_ste[action_ste_selector].pool; + max_stes = matcher->action_ste.max_stes; + pool = matcher->action_ste.pool; } /* This release is safe only when the rule match part was deleted */ @@ -249,41 +243,6 @@ static void hws_rule_free_action_ste_idx(struct mlx5hws_rule *rule, mlx5hws_pool_chunk_free(pool, &ste); } -static int hws_rule_alloc_action_ste(struct mlx5hws_rule *rule, - struct mlx5hws_rule_attr *attr) -{ - int action_ste_idx; - int ret; - - ret = hws_rule_alloc_action_ste_idx(rule, 0); - if (unlikely(ret)) - return ret; - - action_ste_idx = rule->action_ste_idx; - - ret = hws_rule_alloc_action_ste_idx(rule, 1); - if (unlikely(ret)) { - hws_rule_free_action_ste_idx(rule, 0); - return ret; - } - - /* Both pools have to return the same index */ - if (unlikely(rule->action_ste_idx != action_ste_idx)) { - pr_warn("HWS: allocation of action STE failed - pool indexes mismatch\n"); - return -EINVAL; - } - - return 0; -} - -void mlx5hws_rule_free_action_ste(struct mlx5hws_rule *rule) -{ - if (rule->action_ste_idx > -1) { - hws_rule_free_action_ste_idx(rule, 1); - hws_rule_free_action_ste_idx(rule, 0); - } -} - static void hws_rule_create_init(struct mlx5hws_rule *rule, struct mlx5hws_send_ste_attr *ste_attr, struct mlx5hws_actions_apply_data *apply, @@ -298,9 +257,6 @@ static void hws_rule_create_init(struct mlx5hws_rule *rule, /* In update we use these rtc's */ rule->rtc_0 = 0; rule->rtc_1 = 0; - rule->action_ste_selector = 0; - } else { - rule->action_ste_selector = !rule->action_ste_selector; } rule->pending_wqes = 0; @@ -316,7 +272,7 @@ static void hws_rule_create_init(struct mlx5hws_rule *rule, /* Init default action apply */ apply->tbl_type = tbl->type; apply->common_res = &ctx->common_res; - apply->jump_to_action_stc = matcher->action_ste[0].stc.offset; + apply->jump_to_action_stc = matcher->action_ste.stc.offset; apply->require_dep = 0; } @@ -333,7 +289,6 @@ static void hws_rule_move_init(struct mlx5hws_rule *rule, rule->pending_wqes = 0; rule->action_ste_idx = -1; - 
rule->action_ste_selector = 0; rule->status = MLX5HWS_RULE_STATUS_CREATING; rule->resize_info->state = MLX5HWS_RULE_RESIZE_STATE_WRITING; } @@ -403,10 +358,8 @@ static int hws_rule_create_hws(struct mlx5hws_rule *rule, } } /* Skip RX/TX based on the dep_wqe init */ - ste_attr.rtc_0 = dep_wqe->rtc_0 ? - matcher->action_ste[rule->action_ste_selector].rtc_0_id : 0; - ste_attr.rtc_1 = dep_wqe->rtc_1 ? - matcher->action_ste[rule->action_ste_selector].rtc_1_id : 0; + ste_attr.rtc_0 = dep_wqe->rtc_0 ? matcher->action_ste.rtc_0_id : 0; + ste_attr.rtc_1 = dep_wqe->rtc_1 ? matcher->action_ste.rtc_1_id : 0; /* Action STEs are written to a specific index last to first */ ste_attr.direct_index = rule->action_ste_idx + action_stes; apply.next_direct_idx = ste_attr.direct_index; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.h index 495cdd17e9f3..fd2bef87116b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.h @@ -42,7 +42,7 @@ struct mlx5hws_rule_match_tag { }; struct mlx5hws_rule_resize_info { - struct mlx5hws_pool *action_ste_pool[2]; + struct mlx5hws_pool *action_ste_pool; u32 rtc_0; u32 rtc_1; u32 rule_idx; @@ -62,7 +62,6 @@ struct mlx5hws_rule { u32 rtc_1; /* The RTC into which the STE was inserted */ int action_ste_idx; /* STE array index */ u8 status; /* enum mlx5hws_rule_status */ - u8 action_ste_selector; /* For rule update - which action STE is in use */ u8 pending_wqes; bool skip_delete; /* For complex rules - another rule with same tag * still exists, so don't actually delete this rule. From patchwork Thu Jan 9 16:05:46 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 13932965 X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn" CC: , Saeed Mahameed , Gal Pressman , Leon Romanovsky , Mark Bloch , Moshe Shemesh , Yevgeny Kliteynik , Vlad Dogaru , Tariq Toukan Subject: [PATCH net-next V2 15/15] net/mlx5: HWS, update flow - support through bigger action RTC Date: Thu, 9 Jan 2025 18:05:46 +0200 Message-ID: <20250109160546.1733647-16-tariqt@nvidia.com> X-Mailer: git-send-email 2.45.0 In-Reply-To: <20250109160546.1733647-1-tariqt@nvidia.com> References: <20250109160546.1733647-1-tariqt@nvidia.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org MIME-Version: 1.0
X-Patchwork-Delegate: kuba@kernel.org From: Yevgeny Kliteynik This patch is the second part of the update flow implementation. Instead of using two action RTCs, we use a single RTC that is twice the size of what was required before the update flow support. This way we always allocate STEs from the same RTC (same pool), which means that update is done similarly to how create is done. The bigger size allows us to allocate and write new STEs, and later free the old (pre-update) STEs. Similar to rule creation, STEs are written in reverse order: - write action STEs, while the match STE still points to the old action STEs - overwrite the match STE with the new one, which now points to the new action STEs Old action STEs can be freed only once we get a completion for the write of the new match STE. To implement this, we add new rule states: UPDATING/UPDATED. The rule is moved to the UPDATING state at the beginning of the update flow. Once all completions are received, the rule is moved to the UPDATED state. At that point the old action STEs are freed and the rule goes back to the CREATED state. Signed-off-by: Yevgeny Kliteynik Signed-off-by: Vlad Dogaru Reviewed-by: Mark Bloch Signed-off-by: Tariq Toukan --- .../mellanox/mlx5/core/steering/hws/matcher.c | 10 ++- .../mellanox/mlx5/core/steering/hws/rule.c | 88 ++++++++++--------- .../mellanox/mlx5/core/steering/hws/rule.h | 15 +++- .../mellanox/mlx5/core/steering/hws/send.c | 20 +++-- 4 files changed, 80 insertions(+), 53 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c index 74a03fbabcf7..80157a29a076 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c @@ -283,8 +283,13 @@ static int hws_matcher_create_rtc(struct mlx5hws_matcher *matcher, rtc_1_id = &action_ste->rtc_1_id; ste_pool = action_ste->pool; ste = &action_ste->ste; + /* Action RTC size calculation: + * log((max number of rules in matcher) * + * (max number of action STEs per rule) * + * (2 to support writing new STEs for update rule)) + */ ste->order = ilog2(roundup_pow_of_two(action_ste->max_stes)) + - attr->table.sz_row_log; + attr->table.sz_row_log + 1; rtc_attr.log_size = ste->order; rtc_attr.log_depth = 0; rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET; @@ -554,8 +559,9 @@ static int hws_matcher_bind_at(struct mlx5hws_matcher *matcher) pool_attr.table_type = tbl->type; pool_attr.pool_type = MLX5HWS_POOL_TYPE_STE; pool_attr.flags = MLX5HWS_POOL_FLAGS_FOR_STE_ACTION_POOL; + /* Pool size is similar to action RTC size */ pool_attr.alloc_log_sz = ilog2(roundup_pow_of_two(action_ste->max_stes)) + - matcher->attr.table.sz_row_log; + matcher->attr.table.sz_row_log + 1; hws_matcher_set_pool_attr(&pool_attr, matcher); action_ste->pool = mlx5hws_pool_create(ctx, &pool_attr); if (!action_ste->pool) { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.c index 699a73ed2fd7..a27a2d5ffc7b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.c +++
b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.c @@ -129,22 +129,18 @@ static void hws_rule_gen_comp(struct mlx5hws_send_engine *queue, static void hws_rule_save_resize_info(struct mlx5hws_rule *rule, - struct mlx5hws_send_ste_attr *ste_attr, - bool is_update) + struct mlx5hws_send_ste_attr *ste_attr) { if (!mlx5hws_matcher_is_resizable(rule->matcher)) return; - if (likely(!is_update)) { + /* resize_info might already exist (if we're in update flow) */ + if (likely(!rule->resize_info)) { rule->resize_info = kzalloc(sizeof(*rule->resize_info), GFP_KERNEL); if (unlikely(!rule->resize_info)) { pr_warn("HWS: resize info isn't allocated for rule\n"); return; } - - rule->resize_info->max_stes = rule->matcher->action_ste.max_stes; - rule->resize_info->action_ste_pool = rule->matcher->action_ste.max_stes ? - rule->matcher->action_ste.pool : NULL; } memcpy(rule->resize_info->ctrl_seg, ste_attr->wqe_ctrl, @@ -199,8 +195,7 @@ hws_rule_load_delete_info(struct mlx5hws_rule *rule, } } -static int hws_rule_alloc_action_ste(struct mlx5hws_rule *rule, - struct mlx5hws_rule_attr *attr) +static int hws_rule_alloc_action_ste(struct mlx5hws_rule *rule) { struct mlx5hws_matcher *matcher = rule->matcher; struct mlx5hws_matcher_action_ste *action_ste; @@ -215,32 +210,29 @@ static int hws_rule_alloc_action_ste(struct mlx5hws_rule *rule, "Failed to allocate STE for rule actions"); return ret; } - rule->action_ste_idx = ste.offset; + + rule->action_ste.pool = matcher->action_ste.pool; + rule->action_ste.num_stes = matcher->action_ste.max_stes; + rule->action_ste.index = ste.offset; return 0; } -void mlx5hws_rule_free_action_ste(struct mlx5hws_rule *rule) +void mlx5hws_rule_free_action_ste(struct mlx5hws_rule_action_ste_info *action_ste) { - struct mlx5hws_matcher *matcher = rule->matcher; struct mlx5hws_pool_chunk ste = {0}; - struct mlx5hws_pool *pool; - u8 max_stes; - if (mlx5hws_matcher_is_resizable(matcher)) { - /* Free the original action pool if rule was resized */ - max_stes = rule->resize_info->max_stes; - pool = rule->resize_info->action_ste_pool; - } else { - max_stes = matcher->action_ste.max_stes; - pool = matcher->action_ste.pool; - } + if (!action_ste->num_stes) + return; - /* This release is safe only when the rule match part was deleted */ - ste.order = ilog2(roundup_pow_of_two(max_stes)); - ste.offset = rule->action_ste_idx; + ste.order = ilog2(roundup_pow_of_two(action_ste->num_stes)); + ste.offset = action_ste->index; - mlx5hws_pool_chunk_free(pool, &ste); + /* This release is safe only when the rule match STE was deleted + * (when the rule is being deleted) or replaced with the new STE that + * isn't pointing to old action STEs (when the rule is being updated). + */ + mlx5hws_pool_chunk_free(action_ste->pool, &ste); } static void hws_rule_create_init(struct mlx5hws_rule *rule, @@ -257,11 +249,24 @@ static void hws_rule_create_init(struct mlx5hws_rule *rule, /* In update we use these rtc's */ rule->rtc_0 = 0; rule->rtc_1 = 0; + + rule->action_ste.pool = NULL; + rule->action_ste.num_stes = 0; + rule->action_ste.index = -1; + + rule->status = MLX5HWS_RULE_STATUS_CREATING; + } else { + rule->status = MLX5HWS_RULE_STATUS_UPDATING; } + /* Initialize the old action STE info - shallow-copy action_ste. + * In create flow this will set old_action_ste fields to initial values. + * In update flow this will save the existing action STE info, + * so that we will later use it to free old STEs. 
+ */ + rule->old_action_ste = rule->action_ste; + rule->pending_wqes = 0; - rule->action_ste_idx = -1; - rule->status = MLX5HWS_RULE_STATUS_CREATING; /* Init default send STE attributes */ ste_attr->gta_opcode = MLX5HWS_WQE_GTA_OP_ACTIVATE; @@ -288,7 +293,6 @@ static void hws_rule_move_init(struct mlx5hws_rule *rule, rule->rtc_1 = 0; rule->pending_wqes = 0; - rule->action_ste_idx = -1; rule->status = MLX5HWS_RULE_STATUS_CREATING; rule->resize_info->state = MLX5HWS_RULE_RESIZE_STATE_WRITING; } @@ -349,19 +353,17 @@ static int hws_rule_create_hws(struct mlx5hws_rule *rule, if (action_stes) { /* Allocate action STEs for rules that need more than match STE */ - if (!is_update) { - ret = hws_rule_alloc_action_ste(rule, attr); - if (ret) { - mlx5hws_err(ctx, "Failed to allocate action memory %d", ret); - mlx5hws_send_abort_new_dep_wqe(queue); - return ret; - } + ret = hws_rule_alloc_action_ste(rule); + if (ret) { + mlx5hws_err(ctx, "Failed to allocate action memory %d", ret); + mlx5hws_send_abort_new_dep_wqe(queue); + return ret; } /* Skip RX/TX based on the dep_wqe init */ ste_attr.rtc_0 = dep_wqe->rtc_0 ? matcher->action_ste.rtc_0_id : 0; ste_attr.rtc_1 = dep_wqe->rtc_1 ? matcher->action_ste.rtc_1_id : 0; /* Action STEs are written to a specific index last to first */ - ste_attr.direct_index = rule->action_ste_idx + action_stes; + ste_attr.direct_index = rule->action_ste.index + action_stes; apply.next_direct_idx = ste_attr.direct_index; } else { apply.next_direct_idx = 0; @@ -412,7 +414,7 @@ static int hws_rule_create_hws(struct mlx5hws_rule *rule, if (!is_update) hws_rule_save_delete_info(rule, &ste_attr); - hws_rule_save_resize_info(rule, &ste_attr, is_update); + hws_rule_save_resize_info(rule, &ste_attr); mlx5hws_send_engine_inc_rule(queue); if (!attr->burst) @@ -433,7 +435,10 @@ static void hws_rule_destroy_failed_hws(struct mlx5hws_rule *rule, attr->user_data, MLX5HWS_RULE_STATUS_DELETED); /* Rule failed now we can safely release action STEs */ - mlx5hws_rule_free_action_ste(rule); + mlx5hws_rule_free_action_ste(&rule->action_ste); + + /* Perhaps the rule failed updating - release old action STEs as well */ + mlx5hws_rule_free_action_ste(&rule->old_action_ste); /* Clear complex tag */ hws_rule_clear_delete_info(rule); @@ -470,7 +475,8 @@ static int hws_rule_destroy_hws(struct mlx5hws_rule *rule, } /* Rule is not completed yet */ - if (rule->status == MLX5HWS_RULE_STATUS_CREATING) + if (rule->status == MLX5HWS_RULE_STATUS_CREATING || + rule->status == MLX5HWS_RULE_STATUS_UPDATING) return -EBUSY; /* Rule failed and doesn't require cleanup */ @@ -487,7 +493,7 @@ static int hws_rule_destroy_hws(struct mlx5hws_rule *rule, hws_rule_gen_comp(queue, rule, false, attr->user_data, MLX5HWS_RULE_STATUS_DELETED); - mlx5hws_rule_free_action_ste(rule); + mlx5hws_rule_free_action_ste(&rule->action_ste); mlx5hws_rule_clear_resize_info(rule); return 0; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.h index fd2bef87116b..b5ee94ac449b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.h @@ -15,6 +15,8 @@ enum mlx5hws_rule_status { MLX5HWS_RULE_STATUS_UNKNOWN, MLX5HWS_RULE_STATUS_CREATING, MLX5HWS_RULE_STATUS_CREATED, + MLX5HWS_RULE_STATUS_UPDATING, + MLX5HWS_RULE_STATUS_UPDATED, MLX5HWS_RULE_STATUS_DELETING, MLX5HWS_RULE_STATUS_DELETED, MLX5HWS_RULE_STATUS_FAILING, @@ -41,13 +43,17 @@ struct mlx5hws_rule_match_tag { }; }; +struct 
mlx5hws_rule_action_ste_info { + struct mlx5hws_pool *pool; + int index; /* STE array index */ + u8 num_stes; +}; + struct mlx5hws_rule_resize_info { - struct mlx5hws_pool *action_ste_pool; u32 rtc_0; u32 rtc_1; u32 rule_idx; u8 state; - u8 max_stes; u8 ctrl_seg[MLX5HWS_WQE_SZ_GTA_CTRL]; /* Ctrl segment of STE: 48 bytes */ u8 data_seg[MLX5HWS_WQE_SZ_GTA_DATA]; /* Data segment of STE: 64 bytes */ }; @@ -58,9 +64,10 @@ struct mlx5hws_rule { struct mlx5hws_rule_match_tag tag; struct mlx5hws_rule_resize_info *resize_info; }; + struct mlx5hws_rule_action_ste_info action_ste; + struct mlx5hws_rule_action_ste_info old_action_ste; u32 rtc_0; /* The RTC into which the STE was inserted */ u32 rtc_1; /* The RTC into which the STE was inserted */ - int action_ste_idx; /* STE array index */ u8 status; /* enum mlx5hws_rule_status */ u8 pending_wqes; bool skip_delete; /* For complex rules - another rule with same tag @@ -68,7 +75,7 @@ struct mlx5hws_rule { */ }; -void mlx5hws_rule_free_action_ste(struct mlx5hws_rule *rule); +void mlx5hws_rule_free_action_ste(struct mlx5hws_rule_action_ste_info *action_ste); int mlx5hws_rule_move_hws_remove(struct mlx5hws_rule *rule, void *queue, void *user_data); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/send.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/send.c index c680b7f984e1..cb6abc4ab7df 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/send.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/send.c @@ -377,17 +377,25 @@ static void hws_send_engine_update_rule(struct mlx5hws_send_engine *queue, *status = MLX5HWS_FLOW_OP_ERROR; } else { - /* Increase the status, this only works on good flow as the enum - * is arrange it away creating -> created -> deleting -> deleted + /* Increase the status, this only works on good flow as + * the enum is arranged this way: + * - creating -> created + * - updating -> updated + * - deleting -> deleted */ priv->rule->status++; *status = MLX5HWS_FLOW_OP_SUCCESS; - /* Rule was deleted now we can safely release action STEs - * and clear resize info - */ if (priv->rule->status == MLX5HWS_RULE_STATUS_DELETED) { - mlx5hws_rule_free_action_ste(priv->rule); + /* Rule was deleted, now we can safely release + * action STEs and clear resize info + */ + mlx5hws_rule_free_action_ste(&priv->rule->action_ste); mlx5hws_rule_clear_resize_info(priv->rule); + } else if (priv->rule->status == MLX5HWS_RULE_STATUS_UPDATED) { + /* Rule was updated, free the old action STEs */ + mlx5hws_rule_free_action_ste(&priv->rule->old_action_ste); + /* Update completed - move the rule back to "created" */ + priv->rule->status = MLX5HWS_RULE_STATUS_CREATED; } } }
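To make the update flow of this patch easier to follow outside the diff, here is a minimal, self-contained sketch under stated assumptions: the toy_* names and the example numbers are invented for illustration and do not exist in the driver; the real code lives in hws_matcher_create_rtc(), hws_rule_alloc_action_ste() and hws_send_engine_update_rule(), and the real kernel helpers are ilog2() and roundup_pow_of_two(). The sketch models the two ideas above: the action RTC and its pool get one extra bit of log-size (e.g. max_stes = 5 and sz_row_log = 10 give 3 + 10 + 1 = 14, i.e. 16384 slots, twice the 8192 needed for one copy per rule), so an update can stage a second copy of a rule's action STEs; and completions drive CREATING -> CREATED and UPDATING -> UPDATED -> CREATED, with the old action STEs freed only after the new match STE write has completed.

#include <stdio.h>

/* Toy stand-ins -- illustrative only, not the mlx5hws driver types. */
enum toy_rule_status {
    TOY_RULE_CREATING,
    TOY_RULE_CREATED,
    TOY_RULE_UPDATING,
    TOY_RULE_UPDATED,
};

struct toy_rule {
    enum toy_rule_status status;
    int action_ste_index;       /* STEs the match STE points to now */
    int old_action_ste_index;   /* pre-update STEs, freed late (-1 = none) */
};

static unsigned int toy_ilog2(unsigned int v)
{
    unsigned int r = 0;

    while (v >>= 1)
        r++;
    return r;
}

static unsigned int toy_roundup_pow2(unsigned int v)
{
    unsigned int p = 1;

    while (p < v)
        p <<= 1;
    return p;
}

/* Doubled action RTC/pool sizing: log2(rules * STEs-per-rule) + 1. */
static unsigned int toy_action_rtc_log_size(unsigned int max_stes,
                                            unsigned int sz_row_log)
{
    return toy_ilog2(toy_roundup_pow2(max_stes)) + sz_row_log + 1;
}

static void toy_free_old_action_stes(struct toy_rule *rule)
{
    if (rule->old_action_ste_index >= 0)
        printf("freeing old action STE chunk %d\n",
               rule->old_action_ste_index);
    rule->old_action_ste_index = -1;
}

/* Called once all WQE completions for the current operation arrived. */
static void toy_on_completion(struct toy_rule *rule)
{
    rule->status++;     /* CREATING -> CREATED, UPDATING -> UPDATED */

    if (rule->status == TOY_RULE_UPDATED) {
        /* The new match STE is in place, so the old action STEs are
         * unreachable and can be released; the rule is usable again.
         */
        toy_free_old_action_stes(rule);
        rule->status = TOY_RULE_CREATED;
    }
}

int main(void)
{
    struct toy_rule rule = { .status = TOY_RULE_CREATING,
                             .action_ste_index = 42,    /* written at create */
                             .old_action_ste_index = -1 };

    printf("action RTC log_size = %u\n", toy_action_rtc_log_size(5, 10));

    toy_on_completion(&rule);           /* create done: CREATED */

    /* Update: stage new action STEs, then overwrite the match STE. */
    rule.old_action_ste_index = rule.action_ste_index;
    rule.action_ste_index = 1066;       /* fresh chunk from the doubled pool */
    rule.status = TOY_RULE_UPDATING;

    toy_on_completion(&rule);           /* update done: old chunk freed */
    printf("rule back in CREATED state: %d\n",
           rule.status == TOY_RULE_CREATED);
    return 0;
}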