From patchwork Thu Apr 10 19:17:31 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 14047162
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, "Andrew Lunn"
CC: Gal Pressman, Leon Romanovsky, "Saeed Mahameed", Leon Romanovsky, Tariq Toukan, Moshe Shemesh, Mark Bloch, Vlad Dogaru, Yevgeny Kliteynik, Michal Kubiak
Subject: [PATCH net-next V2 01/12] net/mlx5: HWS, Fix matcher action template attach
Date: Thu, 10 Apr 2025 22:17:31 +0300
Message-ID: <1744312662-356571-2-git-send-email-tariqt@nvidia.com>
X-Mailer: git-send-email 2.8.0
In-Reply-To: <1744312662-356571-1-git-send-email-tariqt@nvidia.com>
References: <1744312662-356571-1-git-send-email-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH8PR12MB7376 X-Patchwork-Delegate: kuba@kernel.org From: Vlad Dogaru The procedure of attaching an action template to an existing matcher had a few issues: 1. Attaching accidentally overran the `at` array in bwc_matcher, which would result in memory corruption. This bug wasn't triggered, but it is possible to trigger it by attaching action templates beyond the initial buffer size of 8. Fix this by converting to a dynamically sized buffer and reallocating if needed. 2. Similarly, the `at` array inside the native matcher was never reallocated. Fix this the same as above. 3. The bwc layer treated any error in action template attach as a signal that the matcher should be rehashed to account for a larger number of action STEs. In reality, there are other unrelated errors that can arise and they should be propagated upstack. Fix this by adding a `need_rehash` output parameter that's orthogonal to error codes. Fixes: 2111bb970c78 ("net/mlx5: HWS, added backward-compatible API handling") Signed-off-by: Vlad Dogaru Reviewed-by: Yevgeny Kliteynik Reviewed-by: Mark Bloch Signed-off-by: Tariq Toukan Reviewed-by: Michal Kubiak --- .../mellanox/mlx5/core/steering/hws/bwc.c | 55 ++++++++++++++++--- .../mellanox/mlx5/core/steering/hws/bwc.h | 9 ++- .../mellanox/mlx5/core/steering/hws/matcher.c | 48 +++++++++++++--- .../mellanox/mlx5/core/steering/hws/matcher.h | 4 ++ .../mellanox/mlx5/core/steering/hws/mlx5hws.h | 5 +- 5 files changed, 97 insertions(+), 24 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c index 19dce1ba512d..32de8bfc7644 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c @@ -90,13 +90,19 @@ int mlx5hws_bwc_matcher_create_simple(struct mlx5hws_bwc_matcher *bwc_matcher, bwc_matcher->priority = priority; bwc_matcher->size_log = MLX5HWS_BWC_MATCHER_INIT_SIZE_LOG; + bwc_matcher->size_of_at_array = MLX5HWS_BWC_MATCHER_ATTACH_AT_NUM; + bwc_matcher->at = kcalloc(bwc_matcher->size_of_at_array, + sizeof(*bwc_matcher->at), GFP_KERNEL); + if (!bwc_matcher->at) + goto free_bwc_matcher_rules; + /* create dummy action template */ bwc_matcher->at[0] = mlx5hws_action_template_create(action_types ? 
action_types : init_action_types); if (!bwc_matcher->at[0]) { mlx5hws_err(table->ctx, "BWC matcher: failed creating action template\n"); - goto free_bwc_matcher_rules; + goto free_bwc_matcher_at_array; } bwc_matcher->num_of_at = 1; @@ -126,6 +132,8 @@ int mlx5hws_bwc_matcher_create_simple(struct mlx5hws_bwc_matcher *bwc_matcher, mlx5hws_match_template_destroy(bwc_matcher->mt); free_at: mlx5hws_action_template_destroy(bwc_matcher->at[0]); +free_bwc_matcher_at_array: + kfree(bwc_matcher->at); free_bwc_matcher_rules: kfree(bwc_matcher->rules); err: @@ -192,6 +200,7 @@ int mlx5hws_bwc_matcher_destroy_simple(struct mlx5hws_bwc_matcher *bwc_matcher) for (i = 0; i < bwc_matcher->num_of_at; i++) mlx5hws_action_template_destroy(bwc_matcher->at[i]); + kfree(bwc_matcher->at); mlx5hws_match_template_destroy(bwc_matcher->mt); kfree(bwc_matcher->rules); @@ -520,6 +529,23 @@ hws_bwc_matcher_extend_at(struct mlx5hws_bwc_matcher *bwc_matcher, struct mlx5hws_rule_action rule_actions[]) { enum mlx5hws_action_type action_types[MLX5HWS_BWC_MAX_ACTS]; + void *p; + + if (unlikely(bwc_matcher->num_of_at >= bwc_matcher->size_of_at_array)) { + if (bwc_matcher->size_of_at_array >= MLX5HWS_MATCHER_MAX_AT) + return -ENOMEM; + bwc_matcher->size_of_at_array *= 2; + p = krealloc(bwc_matcher->at, + bwc_matcher->size_of_at_array * + sizeof(*bwc_matcher->at), + __GFP_ZERO | GFP_KERNEL); + if (!p) { + bwc_matcher->size_of_at_array /= 2; + return -ENOMEM; + } + + bwc_matcher->at = p; + } hws_bwc_rule_actions_to_action_types(rule_actions, action_types); @@ -777,6 +803,7 @@ int mlx5hws_bwc_rule_create_simple(struct mlx5hws_bwc_rule *bwc_rule, struct mlx5hws_rule_attr rule_attr; struct mutex *queue_lock; /* Protect the queue */ u32 num_of_rules; + bool need_rehash; int ret = 0; int at_idx; @@ -803,10 +830,14 @@ int mlx5hws_bwc_rule_create_simple(struct mlx5hws_bwc_rule *bwc_rule, at_idx = bwc_matcher->num_of_at - 1; ret = mlx5hws_matcher_attach_at(bwc_matcher->matcher, - bwc_matcher->at[at_idx]); + bwc_matcher->at[at_idx], + &need_rehash); if (unlikely(ret)) { - /* Action template attach failed, possibly due to - * requiring more action STEs. + hws_bwc_unlock_all_queues(ctx); + return ret; + } + if (unlikely(need_rehash)) { + /* The new action template requires more action STEs. * Need to attempt creating new matcher with all * the action templates, including the new one. */ @@ -942,6 +973,7 @@ hws_bwc_rule_action_update(struct mlx5hws_bwc_rule *bwc_rule, struct mlx5hws_context *ctx = bwc_matcher->matcher->tbl->ctx; struct mlx5hws_rule_attr rule_attr; struct mutex *queue_lock; /* Protect the queue */ + bool need_rehash; int at_idx, ret; u16 idx; @@ -973,12 +1005,17 @@ hws_bwc_rule_action_update(struct mlx5hws_bwc_rule *bwc_rule, at_idx = bwc_matcher->num_of_at - 1; ret = mlx5hws_matcher_attach_at(bwc_matcher->matcher, - bwc_matcher->at[at_idx]); + bwc_matcher->at[at_idx], + &need_rehash); if (unlikely(ret)) { - /* Action template attach failed, possibly due to - * requiring more action STEs. - * Need to attempt creating new matcher with all - * the action templates, including the new one. + hws_bwc_unlock_all_queues(ctx); + return ret; + } + if (unlikely(need_rehash)) { + /* The new action template requires more action + * STEs. Need to attempt creating new matcher + * with all the action templates, including the + * new one. 
*/ ret = hws_bwc_matcher_rehash_at(bwc_matcher); if (unlikely(ret)) { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.h index 47f7ed141553..bb0cf4b922ce 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.h @@ -10,9 +10,7 @@ #define MLX5HWS_BWC_MATCHER_REHASH_BURST_TH 32 /* Max number of AT attach operations for the same matcher. - * When the limit is reached, next attempt to attach new AT - * will result in creation of a new matcher and moving all - * the rules to this matcher. + * When the limit is reached, a larger buffer is allocated for the ATs. */ #define MLX5HWS_BWC_MATCHER_ATTACH_AT_NUM 8 @@ -23,10 +21,11 @@ struct mlx5hws_bwc_matcher { struct mlx5hws_matcher *matcher; struct mlx5hws_match_template *mt; - struct mlx5hws_action_template *at[MLX5HWS_BWC_MATCHER_ATTACH_AT_NUM]; - u32 priority; + struct mlx5hws_action_template **at; u8 num_of_at; + u8 size_of_at_array; u8 size_log; + u32 priority; atomic_t num_of_rules; struct list_head *rules; }; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c index b61864b32053..37a4497048a6 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c @@ -905,18 +905,48 @@ static int hws_matcher_uninit(struct mlx5hws_matcher *matcher) return 0; } +static int hws_matcher_grow_at_array(struct mlx5hws_matcher *matcher) +{ + void *p; + + if (matcher->size_of_at_array >= MLX5HWS_MATCHER_MAX_AT) + return -ENOMEM; + + matcher->size_of_at_array *= 2; + p = krealloc(matcher->at, + matcher->size_of_at_array * sizeof(*matcher->at), + __GFP_ZERO | GFP_KERNEL); + if (!p) { + matcher->size_of_at_array /= 2; + return -ENOMEM; + } + + matcher->at = p; + + return 0; +} + int mlx5hws_matcher_attach_at(struct mlx5hws_matcher *matcher, - struct mlx5hws_action_template *at) + struct mlx5hws_action_template *at, + bool *need_rehash) { bool is_jumbo = mlx5hws_matcher_mt_is_jumbo(matcher->mt); struct mlx5hws_context *ctx = matcher->tbl->ctx; u32 required_stes; int ret; - if (!matcher->attr.max_num_of_at_attach) { - mlx5hws_dbg(ctx, "Num of current at (%d) exceed allowed value\n", - matcher->num_of_at); - return -EOPNOTSUPP; + *need_rehash = false; + + if (unlikely(matcher->num_of_at >= matcher->size_of_at_array)) { + ret = hws_matcher_grow_at_array(matcher); + if (ret) + return ret; + + if (matcher->col_matcher) { + ret = hws_matcher_grow_at_array(matcher->col_matcher); + if (ret) + return ret; + } } ret = hws_matcher_check_and_process_at(matcher, at); @@ -927,12 +957,11 @@ int mlx5hws_matcher_attach_at(struct mlx5hws_matcher *matcher, if (matcher->action_ste.max_stes < required_stes) { mlx5hws_dbg(ctx, "Required STEs [%d] exceeds initial action template STE [%d]\n", required_stes, matcher->action_ste.max_stes); - return -ENOMEM; + *need_rehash = true; } matcher->at[matcher->num_of_at] = *at; matcher->num_of_at += 1; - matcher->attr.max_num_of_at_attach -= 1; if (matcher->col_matcher) matcher->col_matcher->num_of_at = matcher->num_of_at; @@ -960,8 +989,9 @@ hws_matcher_set_templates(struct mlx5hws_matcher *matcher, if (!matcher->mt) return -ENOMEM; - matcher->at = kvcalloc(num_of_at + matcher->attr.max_num_of_at_attach, - sizeof(*matcher->at), + matcher->size_of_at_array = + num_of_at + matcher->attr.max_num_of_at_attach; + matcher->at = 
kvcalloc(matcher->size_of_at_array, sizeof(*matcher->at), GFP_KERNEL); if (!matcher->at) { mlx5hws_err(ctx, "Failed to allocate action template array\n"); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h index 020de70270c5..20b32012c418 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h @@ -23,6 +23,9 @@ */ #define MLX5HWS_MATCHER_ACTION_RTC_UPDATE_MULT 1 +/* Maximum number of action templates that can be attached to a matcher. */ +#define MLX5HWS_MATCHER_MAX_AT 128 + enum mlx5hws_matcher_offset { MLX5HWS_MATCHER_OFFSET_TAG_DW1 = 12, MLX5HWS_MATCHER_OFFSET_TAG_DW0 = 13, @@ -72,6 +75,7 @@ struct mlx5hws_matcher { struct mlx5hws_match_template *mt; struct mlx5hws_action_template *at; u8 num_of_at; + u8 size_of_at_array; u8 num_of_mt; /* enum mlx5hws_matcher_flags */ u8 flags; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h index 5121951f2778..8ed8a715a2eb 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h @@ -399,11 +399,14 @@ int mlx5hws_matcher_destroy(struct mlx5hws_matcher *matcher); * * @matcher: Matcher to attach the action template to. * @at: Action template to be attached to the matcher. + * @need_rehash: Output parameter that tells callers if the matcher needs to be + * rehashed. * * Return: Zero on success, non-zero otherwise. */ int mlx5hws_matcher_attach_at(struct mlx5hws_matcher *matcher, - struct mlx5hws_action_template *at); + struct mlx5hws_action_template *at, + bool *need_rehash); /** * mlx5hws_matcher_resize_set_target - Link two matchers and enable moving rules. 
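
A minimal, self-contained sketch of the grow-on-demand pattern that the patch above applies to the `at` arrays (double the capacity with krealloc() up to a hard cap, and leave the old buffer intact on failure). The helper name grow_ptr_array and its parameters are illustrative only and are not part of the mlx5 driver:

#include <linux/slab.h>	/* krealloc() */

static int grow_ptr_array(void ***arr, u8 *cap, u8 max_cap)
{
	void *p;

	/* Refuse to grow past the hard limit, as MLX5HWS_MATCHER_MAX_AT does above. */
	if (*cap >= max_cap)
		return -ENOMEM;

	/* Double the capacity; zeroed memory is requested, as in the patch. */
	p = krealloc(*arr, 2 * *cap * sizeof(**arr), __GFP_ZERO | GFP_KERNEL);
	if (!p)
		return -ENOMEM;	/* *arr and *cap remain valid for the caller */

	*arr = p;
	*cap *= 2;
	return 0;
}

The same patch also separates "attach failed" from "matcher must be rehashed": mlx5hws_matcher_attach_at() reports the need for more action STEs through the need_rehash output parameter, while its return value is reserved for genuine errors, which the bwc layer now propagates unchanged.
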
From patchwork Thu Apr 10 19:17:32 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 14047151
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, "Andrew Lunn"
CC: Gal Pressman, Leon Romanovsky, "Saeed Mahameed", Leon Romanovsky, Tariq Toukan, Moshe Shemesh, Mark Bloch, Vlad Dogaru, Yevgeny Kliteynik, Michal Kubiak
Subject: [PATCH net-next V2 02/12] net/mlx5: HWS, Remove unused element array
Date: Thu, 10 Apr 2025 22:17:32 +0300
Message-ID: <1744312662-356571-3-git-send-email-tariqt@nvidia.com>
X-Mailer: git-send-email 2.8.0
In-Reply-To: <1744312662-356571-1-git-send-email-tariqt@nvidia.com>
References: <1744312662-356571-1-git-send-email-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
PH7PR12MB7211 X-Patchwork-Delegate: kuba@kernel.org From: Vlad Dogaru Remove the array of elements wrapped in a struct because in reality only the first element was ever used. Signed-off-by: Vlad Dogaru Reviewed-by: Yevgeny Kliteynik Reviewed-by: Mark Bloch Signed-off-by: Tariq Toukan Reviewed-by: Michal Kubiak --- .../mellanox/mlx5/core/steering/hws/pool.c | 55 ++++++++----------- .../mellanox/mlx5/core/steering/hws/pool.h | 6 +- 2 files changed, 23 insertions(+), 38 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c index 50a81d360bb2..35ed9bee06a6 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c @@ -293,7 +293,7 @@ static int hws_pool_create_resource_on_index(struct mlx5hws_pool *pool, } static struct mlx5hws_pool_elements * -hws_pool_element_create_new_elem(struct mlx5hws_pool *pool, u32 order, int idx) +hws_pool_element_create_new_elem(struct mlx5hws_pool *pool, u32 order) { struct mlx5hws_pool_elements *elem; u32 alloc_size; @@ -311,21 +311,21 @@ hws_pool_element_create_new_elem(struct mlx5hws_pool *pool, u32 order, int idx) elem->bitmap = hws_pool_create_and_init_bitmap(alloc_size - order); if (!elem->bitmap) { mlx5hws_err(pool->ctx, - "Failed to create bitmap type: %d: size %d index: %d\n", - pool->type, alloc_size, idx); + "Failed to create bitmap type: %d: size %d\n", + pool->type, alloc_size); goto free_elem; } elem->log_size = alloc_size - order; } - if (hws_pool_create_resource_on_index(pool, alloc_size, idx)) { - mlx5hws_err(pool->ctx, "Failed to create resource type: %d: size %d index: %d\n", - pool->type, alloc_size, idx); + if (hws_pool_create_resource_on_index(pool, alloc_size, 0)) { + mlx5hws_err(pool->ctx, "Failed to create resource type: %d: size %d\n", + pool->type, alloc_size); goto free_db; } - pool->db.element_manager->elements[idx] = elem; + pool->db.element = elem; return elem; @@ -359,9 +359,9 @@ hws_pool_onesize_element_get_mem_chunk(struct mlx5hws_pool *pool, u32 order, { struct mlx5hws_pool_elements *elem; - elem = pool->db.element_manager->elements[0]; + elem = pool->db.element; if (!elem) - elem = hws_pool_element_create_new_elem(pool, order, 0); + elem = hws_pool_element_create_new_elem(pool, order); if (!elem) goto err_no_elem; @@ -451,16 +451,14 @@ static int hws_pool_general_element_db_init(struct mlx5hws_pool *pool) return 0; } -static void hws_onesize_element_db_destroy_element(struct mlx5hws_pool *pool, - struct mlx5hws_pool_elements *elem, - struct mlx5hws_pool_chunk *chunk) +static void +hws_onesize_element_db_destroy_element(struct mlx5hws_pool *pool, + struct mlx5hws_pool_elements *elem) { - if (unlikely(!pool->resource[chunk->resource_idx])) - pr_warn("HWS: invalid resource with index %d\n", chunk->resource_idx); - - hws_pool_resource_free(pool, chunk->resource_idx); + hws_pool_resource_free(pool, 0); + bitmap_free(elem->bitmap); kfree(elem); - pool->db.element_manager->elements[chunk->resource_idx] = NULL; + pool->db.element = NULL; } static void hws_onesize_element_db_put_chunk(struct mlx5hws_pool *pool, @@ -471,7 +469,7 @@ static void hws_onesize_element_db_put_chunk(struct mlx5hws_pool *pool, if (unlikely(chunk->resource_idx)) pr_warn("HWS: invalid resource with index %d\n", chunk->resource_idx); - elem = pool->db.element_manager->elements[chunk->resource_idx]; + elem = pool->db.element; if (!elem) { mlx5hws_err(pool->ctx, "No such element (%d)\n", 
chunk->resource_idx); return; @@ -483,7 +481,7 @@ static void hws_onesize_element_db_put_chunk(struct mlx5hws_pool *pool, if (pool->flags & MLX5HWS_POOL_FLAGS_RELEASE_FREE_RESOURCE && !elem->num_of_elements) - hws_onesize_element_db_destroy_element(pool, elem, chunk); + hws_onesize_element_db_destroy_element(pool, elem); } static int hws_onesize_element_db_get_chunk(struct mlx5hws_pool *pool, @@ -504,18 +502,13 @@ static void hws_onesize_element_db_uninit(struct mlx5hws_pool *pool) { - struct mlx5hws_pool_elements *elem; - int i; + struct mlx5hws_pool_elements *elem = pool->db.element; - for (i = 0; i < MLX5HWS_POOL_RESOURCE_ARR_SZ; i++) { - elem = pool->db.element_manager->elements[i]; - if (elem) { - bitmap_free(elem->bitmap); - kfree(elem); - pool->db.element_manager->elements[i] = NULL; - } + if (elem) { + bitmap_free(elem->bitmap); + kfree(elem); + pool->db.element = NULL; } - kfree(pool->db.element_manager); } /* This memory management works as the following: @@ -526,10 +519,6 @@ static void hws_onesize_element_db_uninit(struct mlx5hws_pool *pool) */ static int hws_pool_onesize_element_db_init(struct mlx5hws_pool *pool) { - pool->db.element_manager = kzalloc(sizeof(*pool->db.element_manager), GFP_KERNEL); - if (!pool->db.element_manager) - return -ENOMEM; - pool->p_db_uninit = &hws_onesize_element_db_uninit; pool->p_get_chunk = &hws_onesize_element_db_get_chunk; pool->p_put_chunk = &hws_onesize_element_db_put_chunk; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.h index 621298b352b2..f4258f83fdbf 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.h @@ -87,14 +87,10 @@ struct mlx5hws_pool_elements { bool is_full; }; -struct mlx5hws_element_manager { - struct mlx5hws_pool_elements *elements[MLX5HWS_POOL_RESOURCE_ARR_SZ]; -}; - struct mlx5hws_pool_db { enum mlx5hws_db_type type; union { - struct mlx5hws_element_manager *element_manager; + struct mlx5hws_pool_elements *element; struct mlx5hws_buddy_manager *buddy_manager; }; };
From patchwork Thu Apr 10 19:17:33 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 14047152
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, "Andrew Lunn"
CC: Gal Pressman, Leon Romanovsky, "Saeed Mahameed", Leon Romanovsky, Tariq Toukan, Moshe Shemesh, Mark Bloch, Vlad Dogaru, Yevgeny Kliteynik, Michal Kubiak
Subject: [PATCH net-next V2 03/12] net/mlx5: HWS, Make pool single resource
Date: Thu, 10 Apr 2025 22:17:33 +0300
Message-ID: <1744312662-356571-4-git-send-email-tariqt@nvidia.com>
X-Mailer: git-send-email 2.8.0
In-Reply-To: <1744312662-356571-1-git-send-email-tariqt@nvidia.com>
References: <1744312662-356571-1-git-send-email-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
From: Vlad Dogaru

The pool implementation claimed to support multiple resources, but this does not really make sense in context. Callers always allocate a single STC or STE chunk of exactly the size provided. The code that handled multiple resources was unused (and likely buggy) due to the combination of flags passed by callers.

Simplify the pool by having it handle a single resource. As a result of this simplification, chunks no longer contain a resource offset (there is now only one resource per pool), and the get_base_id functions no longer take a chunk parameter.
Signed-off-by: Vlad Dogaru Reviewed-by: Yevgeny Kliteynik Reviewed-by: Mark Bloch Signed-off-by: Tariq Toukan Reviewed-by: Michal Kubiak --- .../mellanox/mlx5/core/steering/hws/action.c | 27 ++- .../mellanox/mlx5/core/steering/hws/debug.c | 22 +-- .../mellanox/mlx5/core/steering/hws/matcher.c | 10 +- .../mellanox/mlx5/core/steering/hws/pool.c | 182 ++++++------------ .../mellanox/mlx5/core/steering/hws/pool.h | 28 +-- 5 files changed, 91 insertions(+), 178 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c index b5332c54d4fb..781ba8c4f733 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c @@ -238,6 +238,7 @@ hws_action_fixup_stc_attr(struct mlx5hws_context *ctx, enum mlx5hws_table_type table_type, bool is_mirror) { + struct mlx5hws_pool *pool; bool use_fixup = false; u32 fw_tbl_type; u32 base_id; @@ -253,13 +254,11 @@ hws_action_fixup_stc_attr(struct mlx5hws_context *ctx, use_fixup = true; break; } + pool = stc_attr->ste_table.ste_pool; if (!is_mirror) - base_id = mlx5hws_pool_chunk_get_base_id(stc_attr->ste_table.ste_pool, - &stc_attr->ste_table.ste); + base_id = mlx5hws_pool_get_base_id(pool); else - base_id = - mlx5hws_pool_chunk_get_base_mirror_id(stc_attr->ste_table.ste_pool, - &stc_attr->ste_table.ste); + base_id = mlx5hws_pool_get_base_mirror_id(pool); *fixup_stc_attr = *stc_attr; fixup_stc_attr->ste_table.ste_obj_id = base_id; @@ -337,7 +336,7 @@ __must_hold(&ctx->ctrl_lock) if (!mlx5hws_context_cap_dynamic_reparse(ctx)) stc_attr->reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE; - obj_0_id = mlx5hws_pool_chunk_get_base_id(stc_pool, stc); + obj_0_id = mlx5hws_pool_get_base_id(stc_pool); /* According to table/action limitation change the stc_attr */ use_fixup = hws_action_fixup_stc_attr(ctx, stc_attr, &fixup_stc_attr, table_type, false); @@ -353,7 +352,7 @@ __must_hold(&ctx->ctrl_lock) if (table_type == MLX5HWS_TABLE_TYPE_FDB) { u32 obj_1_id; - obj_1_id = mlx5hws_pool_chunk_get_base_mirror_id(stc_pool, stc); + obj_1_id = mlx5hws_pool_get_base_mirror_id(stc_pool); use_fixup = hws_action_fixup_stc_attr(ctx, stc_attr, &fixup_stc_attr, @@ -393,11 +392,11 @@ __must_hold(&ctx->ctrl_lock) stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; stc_attr.action_offset = MLX5HWS_ACTION_OFFSET_HIT; stc_attr.stc_offset = stc->offset; - obj_id = mlx5hws_pool_chunk_get_base_id(stc_pool, stc); + obj_id = mlx5hws_pool_get_base_id(stc_pool); mlx5hws_cmd_stc_modify(ctx->mdev, obj_id, &stc_attr); if (table_type == MLX5HWS_TABLE_TYPE_FDB) { - obj_id = mlx5hws_pool_chunk_get_base_mirror_id(stc_pool, stc); + obj_id = mlx5hws_pool_get_base_mirror_id(stc_pool); mlx5hws_cmd_stc_modify(ctx->mdev, obj_id, &stc_attr); } @@ -1581,7 +1580,6 @@ hws_action_create_dest_match_range_table(struct mlx5hws_context *ctx, u32 miss_ft_id) { struct mlx5hws_cmd_rtc_create_attr rtc_attr = {0}; - struct mlx5hws_action_default_stc *default_stc; struct mlx5hws_matcher_action_ste *table_ste; struct mlx5hws_pool_attr pool_attr = {0}; struct mlx5hws_pool *ste_pool, *stc_pool; @@ -1629,7 +1627,7 @@ hws_action_create_dest_match_range_table(struct mlx5hws_context *ctx, rtc_attr.fw_gen_wqe = true; rtc_attr.is_scnd_range = true; - obj_id = mlx5hws_pool_chunk_get_base_id(ste_pool, ste); + obj_id = mlx5hws_pool_get_base_id(ste_pool); rtc_attr.pd = ctx->pd_num; rtc_attr.ste_base = obj_id; @@ -1639,8 +1637,7 @@ hws_action_create_dest_match_range_table(struct 
mlx5hws_context *ctx, /* STC is a single resource (obj_id), use any STC for the ID */ stc_pool = ctx->stc_pool; - default_stc = ctx->common_res.default_stc; - obj_id = mlx5hws_pool_chunk_get_base_id(stc_pool, &default_stc->default_hit); + obj_id = mlx5hws_pool_get_base_id(stc_pool); rtc_attr.stc_base = obj_id; ret = mlx5hws_cmd_rtc_create(ctx->mdev, &rtc_attr, rtc_0_id); @@ -1650,11 +1647,11 @@ hws_action_create_dest_match_range_table(struct mlx5hws_context *ctx, } /* Create mirror RTC */ - obj_id = mlx5hws_pool_chunk_get_base_mirror_id(ste_pool, ste); + obj_id = mlx5hws_pool_get_base_mirror_id(ste_pool); rtc_attr.ste_base = obj_id; rtc_attr.table_type = mlx5hws_table_get_res_fw_ft_type(MLX5HWS_TABLE_TYPE_FDB, true); - obj_id = mlx5hws_pool_chunk_get_base_mirror_id(stc_pool, &default_stc->default_hit); + obj_id = mlx5hws_pool_get_base_mirror_id(stc_pool); rtc_attr.stc_base = obj_id; ret = mlx5hws_cmd_rtc_create(ctx->mdev, &rtc_attr, rtc_1_id); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/debug.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/debug.c index 696275fd0ce2..3491408c5d84 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/debug.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/debug.c @@ -118,7 +118,6 @@ static int hws_debug_dump_matcher(struct seq_file *f, struct mlx5hws_matcher *ma { enum mlx5hws_table_type tbl_type = matcher->tbl->type; struct mlx5hws_cmd_ft_query_attr ft_attr = {0}; - struct mlx5hws_pool_chunk *ste; struct mlx5hws_pool *ste_pool; u64 icm_addr_0 = 0; u64 icm_addr_1 = 0; @@ -134,12 +133,11 @@ static int hws_debug_dump_matcher(struct seq_file *f, struct mlx5hws_matcher *ma matcher->end_ft_id, matcher->col_matcher ? HWS_PTR_TO_ID(matcher->col_matcher) : 0); - ste = &matcher->match_ste.ste; ste_pool = matcher->match_ste.pool; if (ste_pool) { - ste_0_id = mlx5hws_pool_chunk_get_base_id(ste_pool, ste); + ste_0_id = mlx5hws_pool_get_base_id(ste_pool); if (tbl_type == MLX5HWS_TABLE_TYPE_FDB) - ste_1_id = mlx5hws_pool_chunk_get_base_mirror_id(ste_pool, ste); + ste_1_id = mlx5hws_pool_get_base_mirror_id(ste_pool); } seq_printf(f, ",%d,%d,%d,%d", @@ -148,12 +146,11 @@ static int hws_debug_dump_matcher(struct seq_file *f, struct mlx5hws_matcher *ma matcher->match_ste.rtc_1_id, (int)ste_1_id); - ste = &matcher->action_ste.ste; ste_pool = matcher->action_ste.pool; if (ste_pool) { - ste_0_id = mlx5hws_pool_chunk_get_base_id(ste_pool, ste); + ste_0_id = mlx5hws_pool_get_base_id(ste_pool); if (tbl_type == MLX5HWS_TABLE_TYPE_FDB) - ste_1_id = mlx5hws_pool_chunk_get_base_mirror_id(ste_pool, ste); + ste_1_id = mlx5hws_pool_get_base_mirror_id(ste_pool); else ste_1_id = -1; } else { @@ -387,14 +384,17 @@ static int hws_debug_dump_context_stc(struct seq_file *f, struct mlx5hws_context if (!stc_pool) return 0; - if (stc_pool->resource[0]) { - ret = hws_debug_dump_context_stc_resource(f, ctx, stc_pool->resource[0]); + if (stc_pool->resource) { + ret = hws_debug_dump_context_stc_resource(f, ctx, + stc_pool->resource); if (ret) return ret; } - if (stc_pool->mirror_resource[0]) { - ret = hws_debug_dump_context_stc_resource(f, ctx, stc_pool->mirror_resource[0]); + if (stc_pool->mirror_resource) { + struct mlx5hws_pool_resource *res = stc_pool->mirror_resource; + + ret = hws_debug_dump_context_stc_resource(f, ctx, res); if (ret) return ret; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c index 37a4497048a6..59b14db427b4 100644 --- 
a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c @@ -223,7 +223,6 @@ static int hws_matcher_create_rtc(struct mlx5hws_matcher *matcher, struct mlx5hws_cmd_rtc_create_attr rtc_attr = {0}; struct mlx5hws_match_template *mt = matcher->mt; struct mlx5hws_context *ctx = matcher->tbl->ctx; - struct mlx5hws_action_default_stc *default_stc; struct mlx5hws_matcher_action_ste *action_ste; struct mlx5hws_table *tbl = matcher->tbl; struct mlx5hws_pool *ste_pool, *stc_pool; @@ -305,7 +304,7 @@ static int hws_matcher_create_rtc(struct mlx5hws_matcher *matcher, return -EINVAL; } - obj_id = mlx5hws_pool_chunk_get_base_id(ste_pool, ste); + obj_id = mlx5hws_pool_get_base_id(ste_pool); rtc_attr.pd = ctx->pd_num; rtc_attr.ste_base = obj_id; @@ -316,8 +315,7 @@ static int hws_matcher_create_rtc(struct mlx5hws_matcher *matcher, /* STC is a single resource (obj_id), use any STC for the ID */ stc_pool = ctx->stc_pool; - default_stc = ctx->common_res.default_stc; - obj_id = mlx5hws_pool_chunk_get_base_id(stc_pool, &default_stc->default_hit); + obj_id = mlx5hws_pool_get_base_id(stc_pool); rtc_attr.stc_base = obj_id; ret = mlx5hws_cmd_rtc_create(ctx->mdev, &rtc_attr, rtc_0_id); @@ -328,11 +326,11 @@ static int hws_matcher_create_rtc(struct mlx5hws_matcher *matcher, } if (tbl->type == MLX5HWS_TABLE_TYPE_FDB) { - obj_id = mlx5hws_pool_chunk_get_base_mirror_id(ste_pool, ste); + obj_id = mlx5hws_pool_get_base_mirror_id(ste_pool); rtc_attr.ste_base = obj_id; rtc_attr.table_type = mlx5hws_table_get_res_fw_ft_type(tbl->type, true); - obj_id = mlx5hws_pool_chunk_get_base_mirror_id(stc_pool, &default_stc->default_hit); + obj_id = mlx5hws_pool_get_base_mirror_id(stc_pool); rtc_attr.stc_base = obj_id; hws_matcher_set_rtc_attr_sz(matcher, &rtc_attr, rtc_type, true); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c index 35ed9bee06a6..0de03e17624c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c @@ -20,15 +20,14 @@ static void hws_pool_free_one_resource(struct mlx5hws_pool_resource *resource) kfree(resource); } -static void hws_pool_resource_free(struct mlx5hws_pool *pool, - int resource_idx) +static void hws_pool_resource_free(struct mlx5hws_pool *pool) { - hws_pool_free_one_resource(pool->resource[resource_idx]); - pool->resource[resource_idx] = NULL; + hws_pool_free_one_resource(pool->resource); + pool->resource = NULL; if (pool->tbl_type == MLX5HWS_TABLE_TYPE_FDB) { - hws_pool_free_one_resource(pool->mirror_resource[resource_idx]); - pool->mirror_resource[resource_idx] = NULL; + hws_pool_free_one_resource(pool->mirror_resource); + pool->mirror_resource = NULL; } } @@ -78,7 +77,7 @@ hws_pool_create_one_resource(struct mlx5hws_pool *pool, u32 log_range, } static int -hws_pool_resource_alloc(struct mlx5hws_pool *pool, u32 log_range, int idx) +hws_pool_resource_alloc(struct mlx5hws_pool *pool, u32 log_range) { struct mlx5hws_pool_resource *resource; u32 fw_ft_type, opt_log_range; @@ -91,7 +90,7 @@ hws_pool_resource_alloc(struct mlx5hws_pool *pool, u32 log_range, int idx) return -EINVAL; } - pool->resource[idx] = resource; + pool->resource = resource; if (pool->tbl_type == MLX5HWS_TABLE_TYPE_FDB) { struct mlx5hws_pool_resource *mirror_resource; @@ -102,10 +101,10 @@ hws_pool_resource_alloc(struct mlx5hws_pool *pool, u32 log_range, int idx) if (!mirror_resource) { 
mlx5hws_err(pool->ctx, "Failed allocating mirrored resource\n"); hws_pool_free_one_resource(resource); - pool->resource[idx] = NULL; + pool->resource = NULL; return -EINVAL; } - pool->mirror_resource[idx] = mirror_resource; + pool->mirror_resource = mirror_resource; } return 0; @@ -129,9 +128,9 @@ static void hws_pool_buddy_db_put_chunk(struct mlx5hws_pool *pool, { struct mlx5hws_buddy_mem *buddy; - buddy = pool->db.buddy_manager->buddies[chunk->resource_idx]; + buddy = pool->db.buddy; if (!buddy) { - mlx5hws_err(pool->ctx, "No such buddy (%d)\n", chunk->resource_idx); + mlx5hws_err(pool->ctx, "Bad buddy state\n"); return; } @@ -139,86 +138,50 @@ static void hws_pool_buddy_db_put_chunk(struct mlx5hws_pool *pool, } static struct mlx5hws_buddy_mem * -hws_pool_buddy_get_next_buddy(struct mlx5hws_pool *pool, int idx, - u32 order, bool *is_new_buddy) +hws_pool_buddy_get_buddy(struct mlx5hws_pool *pool, u32 order) { static struct mlx5hws_buddy_mem *buddy; u32 new_buddy_size; - buddy = pool->db.buddy_manager->buddies[idx]; + buddy = pool->db.buddy; if (buddy) return buddy; new_buddy_size = max(pool->alloc_log_sz, order); - *is_new_buddy = true; buddy = mlx5hws_buddy_create(new_buddy_size); if (!buddy) { - mlx5hws_err(pool->ctx, "Failed to create buddy order: %d index: %d\n", - new_buddy_size, idx); + mlx5hws_err(pool->ctx, "Failed to create buddy order: %d\n", + new_buddy_size); return NULL; } - if (hws_pool_resource_alloc(pool, new_buddy_size, idx) != 0) { - mlx5hws_err(pool->ctx, "Failed to create resource type: %d: size %d index: %d\n", - pool->type, new_buddy_size, idx); + if (hws_pool_resource_alloc(pool, new_buddy_size) != 0) { + mlx5hws_err(pool->ctx, "Failed to create resource type: %d: size %d\n", + pool->type, new_buddy_size); mlx5hws_buddy_cleanup(buddy); return NULL; } - pool->db.buddy_manager->buddies[idx] = buddy; + pool->db.buddy = buddy; return buddy; } static int hws_pool_buddy_get_mem_chunk(struct mlx5hws_pool *pool, int order, - u32 *buddy_idx, int *seg) { struct mlx5hws_buddy_mem *buddy; - bool new_mem = false; - int ret = 0; - int i; - - *seg = -1; - - /* Find the next free place from the buddy array */ - while (*seg < 0) { - for (i = 0; i < MLX5HWS_POOL_RESOURCE_ARR_SZ; i++) { - buddy = hws_pool_buddy_get_next_buddy(pool, i, - order, - &new_mem); - if (!buddy) { - ret = -ENOMEM; - goto out; - } - - *seg = mlx5hws_buddy_alloc_mem(buddy, order); - if (*seg >= 0) - goto found; - - if (pool->flags & MLX5HWS_POOL_FLAGS_ONE_RESOURCE) { - mlx5hws_err(pool->ctx, - "Fail to allocate seg for one resource pool\n"); - ret = -ENOMEM; - goto out; - } - - if (new_mem) { - /* We have new memory pool, should be place for us */ - mlx5hws_err(pool->ctx, - "No memory for order: %d with buddy no: %d\n", - order, i); - ret = -ENOMEM; - goto out; - } - } - } -found: - *buddy_idx = i; -out: - return ret; + buddy = hws_pool_buddy_get_buddy(pool, order); + if (!buddy) + return -ENOMEM; + + *seg = mlx5hws_buddy_alloc_mem(buddy, order); + if (*seg >= 0) + return 0; + + return -ENOMEM; } static int hws_pool_buddy_db_get_chunk(struct mlx5hws_pool *pool, @@ -226,9 +189,7 @@ static int hws_pool_buddy_db_get_chunk(struct mlx5hws_pool *pool, { int ret = 0; - /* Go over the buddies and find next free slot */ ret = hws_pool_buddy_get_mem_chunk(pool, chunk->order, - &chunk->resource_idx, &chunk->offset); if (ret) mlx5hws_err(pool->ctx, "Failed to get free slot for chunk with order: %d\n", @@ -240,33 +201,21 @@ static int hws_pool_buddy_db_get_chunk(struct mlx5hws_pool *pool, static void 
hws_pool_buddy_db_uninit(struct mlx5hws_pool *pool) { struct mlx5hws_buddy_mem *buddy; - int i; - - for (i = 0; i < MLX5HWS_POOL_RESOURCE_ARR_SZ; i++) { - buddy = pool->db.buddy_manager->buddies[i]; - if (buddy) { - mlx5hws_buddy_cleanup(buddy); - kfree(buddy); - pool->db.buddy_manager->buddies[i] = NULL; - } - } - kfree(pool->db.buddy_manager); + buddy = pool->db.buddy; + if (buddy) { + mlx5hws_buddy_cleanup(buddy); + kfree(buddy); + pool->db.buddy = NULL; + } } static int hws_pool_buddy_db_init(struct mlx5hws_pool *pool, u32 log_range) { - pool->db.buddy_manager = kzalloc(sizeof(*pool->db.buddy_manager), GFP_KERNEL); - if (!pool->db.buddy_manager) - return -ENOMEM; - if (pool->flags & MLX5HWS_POOL_FLAGS_ALLOC_MEM_ON_CREATE) { - bool new_buddy; - - if (!hws_pool_buddy_get_next_buddy(pool, 0, log_range, &new_buddy)) { + if (!hws_pool_buddy_get_buddy(pool, log_range)) { mlx5hws_err(pool->ctx, "Failed allocating memory on create log_sz: %d\n", log_range); - kfree(pool->db.buddy_manager); return -ENOMEM; } } @@ -278,14 +227,13 @@ static int hws_pool_buddy_db_init(struct mlx5hws_pool *pool, u32 log_range) return 0; } -static int hws_pool_create_resource_on_index(struct mlx5hws_pool *pool, - u32 alloc_size, int idx) +static int hws_pool_create_resource(struct mlx5hws_pool *pool, u32 alloc_size) { - int ret = hws_pool_resource_alloc(pool, alloc_size, idx); + int ret = hws_pool_resource_alloc(pool, alloc_size); if (ret) { - mlx5hws_err(pool->ctx, "Failed to create resource type: %d: size %d index: %d\n", - pool->type, alloc_size, idx); + mlx5hws_err(pool->ctx, "Failed to create resource type: %d: size %d\n", + pool->type, alloc_size); return ret; } @@ -319,7 +267,7 @@ hws_pool_element_create_new_elem(struct mlx5hws_pool *pool, u32 order) elem->log_size = alloc_size - order; } - if (hws_pool_create_resource_on_index(pool, alloc_size, 0)) { + if (hws_pool_create_resource(pool, alloc_size)) { mlx5hws_err(pool->ctx, "Failed to create resource type: %d: size %d\n", pool->type, alloc_size); goto free_db; @@ -355,7 +303,7 @@ static int hws_pool_element_find_seg(struct mlx5hws_pool_elements *elem, int *se static int hws_pool_onesize_element_get_mem_chunk(struct mlx5hws_pool *pool, u32 order, - u32 *idx, int *seg) + int *seg) { struct mlx5hws_pool_elements *elem; @@ -370,7 +318,6 @@ hws_pool_onesize_element_get_mem_chunk(struct mlx5hws_pool *pool, u32 order, return -ENOMEM; } - *idx = 0; elem->num_of_elements++; return 0; @@ -379,21 +326,17 @@ hws_pool_onesize_element_get_mem_chunk(struct mlx5hws_pool *pool, u32 order, return -ENOMEM; } -static int -hws_pool_general_element_get_mem_chunk(struct mlx5hws_pool *pool, u32 order, - u32 *idx, int *seg) +static int hws_pool_general_element_get_mem_chunk(struct mlx5hws_pool *pool, + u32 order, int *seg) { - int ret, i; - - for (i = 0; i < MLX5HWS_POOL_RESOURCE_ARR_SZ; i++) { - if (!pool->resource[i]) { - ret = hws_pool_create_resource_on_index(pool, order, i); - if (ret) - goto err_no_res; - *idx = i; - *seg = 0; /* One memory slot in that element */ - return 0; - } + int ret; + + if (!pool->resource) { + ret = hws_pool_create_resource(pool, order); + if (ret) + goto err_no_res; + *seg = 0; /* One memory slot in that element */ + return 0; } mlx5hws_err(pool->ctx, "No more resources (last request order: %d)\n", order); @@ -409,9 +352,7 @@ static int hws_pool_general_element_db_get_chunk(struct mlx5hws_pool *pool, { int ret; - /* Go over all memory elements and find/allocate free slot */ ret = hws_pool_general_element_get_mem_chunk(pool, chunk->order, - 
&chunk->resource_idx, &chunk->offset); if (ret) mlx5hws_err(pool->ctx, "Failed to get free slot for chunk with order: %d\n", @@ -423,11 +364,8 @@ static int hws_pool_general_element_db_get_chunk(struct mlx5hws_pool *pool, static void hws_pool_general_element_db_put_chunk(struct mlx5hws_pool *pool, struct mlx5hws_pool_chunk *chunk) { - if (unlikely(!pool->resource[chunk->resource_idx])) - pr_warn("HWS: invalid resource with index %d\n", chunk->resource_idx); - if (pool->flags & MLX5HWS_POOL_FLAGS_RELEASE_FREE_RESOURCE) - hws_pool_resource_free(pool, chunk->resource_idx); + hws_pool_resource_free(pool); } static void hws_pool_general_element_db_uninit(struct mlx5hws_pool *pool) @@ -455,7 +393,7 @@ static void hws_onesize_element_db_destroy_element(struct mlx5hws_pool *pool, struct mlx5hws_pool_elements *elem) { - hws_pool_resource_free(pool, 0); + hws_pool_resource_free(pool); bitmap_free(elem->bitmap); kfree(elem); pool->db.element = NULL; @@ -466,12 +404,9 @@ static void hws_onesize_element_db_put_chunk(struct mlx5hws_pool *pool, { struct mlx5hws_pool_elements *elem; - if (unlikely(chunk->resource_idx)) - pr_warn("HWS: invalid resource with index %d\n", chunk->resource_idx); - elem = pool->db.element; if (!elem) { - mlx5hws_err(pool->ctx, "No such element (%d)\n", chunk->resource_idx); + mlx5hws_err(pool->ctx, "Pool element was not allocated\n"); return; } @@ -489,9 +424,7 @@ static int hws_onesize_element_db_get_chunk(struct mlx5hws_pool *pool, { int ret = 0; - /* Go over all memory elements and find/allocate free slot */ ret = hws_pool_onesize_element_get_mem_chunk(pool, chunk->order, - &chunk->resource_idx, &chunk->offset); if (ret) mlx5hws_err(pool->ctx, "Failed to get free slot for chunk with order: %d\n", @@ -614,13 +547,10 @@ mlx5hws_pool_create(struct mlx5hws_context *ctx, struct mlx5hws_pool_attr *pool_ int mlx5hws_pool_destroy(struct mlx5hws_pool *pool) { - int i; - mutex_destroy(&pool->lock); - for (i = 0; i < MLX5HWS_POOL_RESOURCE_ARR_SZ; i++) - if (pool->resource[i]) - hws_pool_resource_free(pool, i); + if (pool->resource) + hws_pool_resource_free(pool); hws_pool_db_unint(pool); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.h index f4258f83fdbf..112a61cd2997 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.h @@ -6,16 +6,12 @@ #define MLX5HWS_POOL_STC_LOG_SZ 15 -#define MLX5HWS_POOL_RESOURCE_ARR_SZ 100 - enum mlx5hws_pool_type { MLX5HWS_POOL_TYPE_STE, MLX5HWS_POOL_TYPE_STC, }; struct mlx5hws_pool_chunk { - u32 resource_idx; - /* Internal offset, relative to base index */ int offset; int order; }; @@ -72,14 +68,10 @@ enum mlx5hws_db_type { MLX5HWS_POOL_DB_TYPE_GENERAL_SIZE, /* One resource only, all the elements are with same one size */ MLX5HWS_POOL_DB_TYPE_ONE_SIZE_RESOURCE, - /* Many resources, the memory allocated with buddy mechanism */ + /* Entries are managed using a buddy mechanism. 
*/ MLX5HWS_POOL_DB_TYPE_BUDDY, }; -struct mlx5hws_buddy_manager { - struct mlx5hws_buddy_mem *buddies[MLX5HWS_POOL_RESOURCE_ARR_SZ]; -}; - struct mlx5hws_pool_elements { u32 num_of_elements; unsigned long *bitmap; @@ -91,7 +83,7 @@ struct mlx5hws_pool_db { enum mlx5hws_db_type type; union { struct mlx5hws_pool_elements *element; - struct mlx5hws_buddy_manager *buddy_manager; + struct mlx5hws_buddy_mem *buddy; }; }; @@ -109,8 +101,8 @@ struct mlx5hws_pool { size_t alloc_log_sz; enum mlx5hws_table_type tbl_type; enum mlx5hws_pool_optimize opt_type; - struct mlx5hws_pool_resource *resource[MLX5HWS_POOL_RESOURCE_ARR_SZ]; - struct mlx5hws_pool_resource *mirror_resource[MLX5HWS_POOL_RESOURCE_ARR_SZ]; + struct mlx5hws_pool_resource *resource; + struct mlx5hws_pool_resource *mirror_resource; /* DB */ struct mlx5hws_pool_db db; /* Functions */ @@ -131,17 +123,13 @@ int mlx5hws_pool_chunk_alloc(struct mlx5hws_pool *pool, void mlx5hws_pool_chunk_free(struct mlx5hws_pool *pool, struct mlx5hws_pool_chunk *chunk); -static inline u32 -mlx5hws_pool_chunk_get_base_id(struct mlx5hws_pool *pool, - struct mlx5hws_pool_chunk *chunk) +static inline u32 mlx5hws_pool_get_base_id(struct mlx5hws_pool *pool) { - return pool->resource[chunk->resource_idx]->base_id; + return pool->resource->base_id; } -static inline u32 -mlx5hws_pool_chunk_get_base_mirror_id(struct mlx5hws_pool *pool, - struct mlx5hws_pool_chunk *chunk) +static inline u32 mlx5hws_pool_get_base_mirror_id(struct mlx5hws_pool *pool) { - return pool->mirror_resource[chunk->resource_idx]->base_id; + return pool->mirror_resource->base_id; } #endif /* MLX5HWS_POOL_H_ */
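The practical effect of the single-resource pool shows up at the RTC setup sites: the base object IDs now come straight from the pool instead of going through a per-chunk resource index. Below is a minimal sketch of the resulting call pattern, assuming the mlx5hws headers above; hws_example_set_rtc_bases is a hypothetical helper, not part of the patch.

/* Illustrative only: mirrors the matcher.c hunks above. */
static void hws_example_set_rtc_bases(struct mlx5hws_pool *ste_pool,
                                      struct mlx5hws_pool *stc_pool,
                                      struct mlx5hws_cmd_rtc_create_attr *rtc_attr,
                                      bool mirror)
{
        if (!mirror) {
                /* Primary RTC uses the main resources. */
                rtc_attr->ste_base = mlx5hws_pool_get_base_id(ste_pool);
                rtc_attr->stc_base = mlx5hws_pool_get_base_id(stc_pool);
        } else {
                /* FDB mirror RTC uses the mirrored resources. */
                rtc_attr->ste_base = mlx5hws_pool_get_base_mirror_id(ste_pool);
                rtc_attr->stc_base = mlx5hws_pool_get_base_mirror_id(stc_pool);
        }
}

Compared to the old mlx5hws_pool_chunk_get_base_id(), no chunk has to be threaded through just to name the underlying object.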
From patchwork Thu Apr 10 19:17:34 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 14047153
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn"
CC: Gal Pressman , Leon Romanovsky , "Saeed Mahameed" , Leon Romanovsky , Tariq Toukan , , , , Moshe Shemesh , Mark Bloch , Vlad Dogaru , Yevgeny Kliteynik , Michal Kubiak
Subject: [PATCH net-next V2 04/12] net/mlx5: HWS, Refactor pool implementation
Date: Thu, 10 Apr 2025 22:17:34 +0300
Message-ID: <1744312662-356571-5-git-send-email-tariqt@nvidia.com>
In-Reply-To: <1744312662-356571-1-git-send-email-tariqt@nvidia.com>
References: <1744312662-356571-1-git-send-email-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Vlad Dogaru

Refactor the pool implementation to remove unused flags and clarify its usage. A pool represents a single range of STEs or STCs which are allocated at pool creation time.

Pools are used under three patterns:

1. STCs are allocated one at a time from a global pool using a bitmap-based implementation.

2. Action STEs are allocated in power-of-two blocks using a buddy algorithm.

3. Match STEs do not use allocation, since insertion into these tables is based on hashes or direct addressing. In such cases we use a pool only to create the STE range.

Signed-off-by: Vlad Dogaru
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
Reviewed-by: Michal Kubiak
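To make the first two patterns concrete, here is a minimal sketch of how such pools are set up under the new attributes. The hws_example_* helpers are hypothetical and the sizing is simplified; real callers derive alloc_log_sz from device caps and matcher attributes, as the hunks below show.

/* Pattern 1: global STC pool, objects handed out one at a time
 * (order-0 allocations served by the bitmap implementation).
 */
static struct mlx5hws_pool *
hws_example_create_stc_pool(struct mlx5hws_context *ctx)
{
        struct mlx5hws_pool_attr pool_attr = {0};

        pool_attr.pool_type = MLX5HWS_POOL_TYPE_STC;
        pool_attr.table_type = MLX5HWS_TABLE_TYPE_FDB;
        pool_attr.alloc_log_sz = MLX5HWS_POOL_STC_LOG_SZ;

        return mlx5hws_pool_create(ctx, &pool_attr);
}

/* Pattern 2: action STE pool, power-of-two blocks served by the
 * buddy allocator.
 */
static struct mlx5hws_pool *
hws_example_create_action_ste_pool(struct mlx5hws_context *ctx, u32 log_sz)
{
        struct mlx5hws_pool_attr pool_attr = {0};

        pool_attr.pool_type = MLX5HWS_POOL_TYPE_STE;
        pool_attr.table_type = MLX5HWS_TABLE_TYPE_FDB;
        pool_attr.flags = MLX5HWS_POOL_FLAG_BUDDY;
        pool_attr.alloc_log_sz = log_sz;

        return mlx5hws_pool_create(ctx, &pool_attr);
}

Pattern 3 (match STEs) uses the same creation call with no flag and never allocates chunks; the matcher only reads the range's base ID.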
--- .../mellanox/mlx5/core/steering/hws/action.c | 1 - .../mellanox/mlx5/core/steering/hws/context.c | 1 - .../mellanox/mlx5/core/steering/hws/matcher.c | 19 +- .../mellanox/mlx5/core/steering/hws/pool.c | 387 +++++------------- .../mellanox/mlx5/core/steering/hws/pool.h | 45 +- 5 files changed, 116 insertions(+), 337 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c index 781ba8c4f733..39904b337b81 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c @@ -1602,7 +1602,6 @@ hws_action_create_dest_match_range_table(struct mlx5hws_context *ctx, pool_attr.table_type = MLX5HWS_TABLE_TYPE_FDB; pool_attr.pool_type = MLX5HWS_POOL_TYPE_STE; - pool_attr.flags = MLX5HWS_POOL_FLAGS_FOR_STE_ACTION_POOL; pool_attr.alloc_log_sz = 1; table_ste->pool = mlx5hws_pool_create(ctx, &pool_attr); if (!table_ste->pool) { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/context.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/context.c index 9cda2774fd64..b7cb736b74d7 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/context.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/context.c @@ -34,7 +34,6 @@ static int hws_context_pools_init(struct mlx5hws_context *ctx) /* Create an STC pool per FT type */ pool_attr.pool_type = MLX5HWS_POOL_TYPE_STC; - pool_attr.flags = MLX5HWS_POOL_FLAGS_FOR_STC_POOL; max_log_sz = min(MLX5HWS_POOL_STC_LOG_SZ, ctx->caps->stc_alloc_log_max); pool_attr.alloc_log_sz = max(max_log_sz, ctx->caps->stc_alloc_log_gran); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c index 59b14db427b4..95d31fd6c976 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c @@ -265,14 +265,6 @@ static int hws_matcher_create_rtc(struct mlx5hws_matcher *matcher, rtc_attr.match_definer_0 = ctx->caps->linear_match_definer; } } - - /* Match pool requires implicit allocation */ - ret = mlx5hws_pool_chunk_alloc(ste_pool, ste); - if (ret) { - mlx5hws_err(ctx, "Failed to allocate STE for %s RTC", - hws_matcher_rtc_type_to_str(rtc_type)); - return ret; - } break; case HWS_MATCHER_RTC_TYPE_STE_ARRAY: @@ -357,23 +349,17 @@ static void
hws_matcher_destroy_rtc(struct mlx5hws_matcher *matcher, { struct mlx5hws_matcher_action_ste *action_ste; struct mlx5hws_table *tbl = matcher->tbl; - struct mlx5hws_pool_chunk *ste; - struct mlx5hws_pool *ste_pool; u32 rtc_0_id, rtc_1_id; switch (rtc_type) { case HWS_MATCHER_RTC_TYPE_MATCH: rtc_0_id = matcher->match_ste.rtc_0_id; rtc_1_id = matcher->match_ste.rtc_1_id; - ste_pool = matcher->match_ste.pool; - ste = &matcher->match_ste.ste; break; case HWS_MATCHER_RTC_TYPE_STE_ARRAY: action_ste = &matcher->action_ste; rtc_0_id = action_ste->rtc_0_id; rtc_1_id = action_ste->rtc_1_id; - ste_pool = action_ste->pool; - ste = &action_ste->ste; break; default: return; @@ -383,8 +369,6 @@ static void hws_matcher_destroy_rtc(struct mlx5hws_matcher *matcher, mlx5hws_cmd_rtc_destroy(matcher->tbl->ctx->mdev, rtc_1_id); mlx5hws_cmd_rtc_destroy(matcher->tbl->ctx->mdev, rtc_0_id); - if (rtc_type == HWS_MATCHER_RTC_TYPE_MATCH) - mlx5hws_pool_chunk_free(ste_pool, ste); } static int @@ -557,7 +541,7 @@ static int hws_matcher_bind_at(struct mlx5hws_matcher *matcher) /* Allocate action STE mempool */ pool_attr.table_type = tbl->type; pool_attr.pool_type = MLX5HWS_POOL_TYPE_STE; - pool_attr.flags = MLX5HWS_POOL_FLAGS_FOR_STE_ACTION_POOL; + pool_attr.flags = MLX5HWS_POOL_FLAG_BUDDY; /* Pool size is similar to action RTC size */ pool_attr.alloc_log_sz = ilog2(roundup_pow_of_two(action_ste->max_stes)) + matcher->attr.table.sz_row_log + @@ -636,7 +620,6 @@ static int hws_matcher_bind_mt(struct mlx5hws_matcher *matcher) /* Create an STE pool per matcher*/ pool_attr.table_type = matcher->tbl->type; pool_attr.pool_type = MLX5HWS_POOL_TYPE_STE; - pool_attr.flags = MLX5HWS_POOL_FLAGS_FOR_MATCHER_STE_POOL; pool_attr.alloc_log_sz = matcher->attr.table.sz_col_log + matcher->attr.table.sz_row_log; hws_matcher_set_pool_attr(&pool_attr, matcher); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c index 0de03e17624c..270b333faab3 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c @@ -60,10 +60,8 @@ hws_pool_create_one_resource(struct mlx5hws_pool *pool, u32 log_range, ret = -EINVAL; } - if (ret) { - mlx5hws_err(pool->ctx, "Failed to allocate resource objects\n"); + if (ret) goto free_resource; - } resource->pool = pool; resource->range = 1 << log_range; @@ -76,17 +74,17 @@ hws_pool_create_one_resource(struct mlx5hws_pool *pool, u32 log_range, return NULL; } -static int -hws_pool_resource_alloc(struct mlx5hws_pool *pool, u32 log_range) +static int hws_pool_resource_alloc(struct mlx5hws_pool *pool) { struct mlx5hws_pool_resource *resource; u32 fw_ft_type, opt_log_range; fw_ft_type = mlx5hws_table_get_res_fw_ft_type(pool->tbl_type, false); - opt_log_range = pool->opt_type == MLX5HWS_POOL_OPTIMIZE_ORIG ? 0 : log_range; + opt_log_range = pool->opt_type == MLX5HWS_POOL_OPTIMIZE_ORIG ? + 0 : pool->alloc_log_sz; resource = hws_pool_create_one_resource(pool, opt_log_range, fw_ft_type); if (!resource) { - mlx5hws_err(pool->ctx, "Failed allocating resource\n"); + mlx5hws_err(pool->ctx, "Failed to allocate resource\n"); return -EINVAL; } @@ -96,10 +94,11 @@ hws_pool_resource_alloc(struct mlx5hws_pool *pool, u32 log_range) struct mlx5hws_pool_resource *mirror_resource; fw_ft_type = mlx5hws_table_get_res_fw_ft_type(pool->tbl_type, true); - opt_log_range = pool->opt_type == MLX5HWS_POOL_OPTIMIZE_MIRROR ? 
0 : log_range; + opt_log_range = pool->opt_type == MLX5HWS_POOL_OPTIMIZE_MIRROR ? + 0 : pool->alloc_log_sz; mirror_resource = hws_pool_create_one_resource(pool, opt_log_range, fw_ft_type); if (!mirror_resource) { - mlx5hws_err(pool->ctx, "Failed allocating mirrored resource\n"); + mlx5hws_err(pool->ctx, "Failed to allocate mirrored resource\n"); hws_pool_free_one_resource(resource); pool->resource = NULL; return -EINVAL; @@ -110,92 +109,58 @@ hws_pool_resource_alloc(struct mlx5hws_pool *pool, u32 log_range) return 0; } -static unsigned long *hws_pool_create_and_init_bitmap(u32 log_range) -{ - unsigned long *cur_bmp; - - cur_bmp = bitmap_zalloc(1 << log_range, GFP_KERNEL); - if (!cur_bmp) - return NULL; - - bitmap_fill(cur_bmp, 1 << log_range); - - return cur_bmp; -} - -static void hws_pool_buddy_db_put_chunk(struct mlx5hws_pool *pool, - struct mlx5hws_pool_chunk *chunk) +static int hws_pool_buddy_init(struct mlx5hws_pool *pool) { struct mlx5hws_buddy_mem *buddy; - buddy = pool->db.buddy; + buddy = mlx5hws_buddy_create(pool->alloc_log_sz); if (!buddy) { - mlx5hws_err(pool->ctx, "Bad buddy state\n"); - return; - } - - mlx5hws_buddy_free_mem(buddy, chunk->offset, chunk->order); -} - -static struct mlx5hws_buddy_mem * -hws_pool_buddy_get_buddy(struct mlx5hws_pool *pool, u32 order) -{ - static struct mlx5hws_buddy_mem *buddy; - u32 new_buddy_size; - - buddy = pool->db.buddy; - if (buddy) - return buddy; - - new_buddy_size = max(pool->alloc_log_sz, order); - buddy = mlx5hws_buddy_create(new_buddy_size); - if (!buddy) { - mlx5hws_err(pool->ctx, "Failed to create buddy order: %d\n", - new_buddy_size); - return NULL; + mlx5hws_err(pool->ctx, "Failed to create buddy order: %zu\n", + pool->alloc_log_sz); + return -ENOMEM; } - if (hws_pool_resource_alloc(pool, new_buddy_size) != 0) { - mlx5hws_err(pool->ctx, "Failed to create resource type: %d: size %d\n", - pool->type, new_buddy_size); + if (hws_pool_resource_alloc(pool) != 0) { + mlx5hws_err(pool->ctx, "Failed to create resource type: %d size %zu\n", + pool->type, pool->alloc_log_sz); mlx5hws_buddy_cleanup(buddy); - return NULL; + return -ENOMEM; } pool->db.buddy = buddy; - return buddy; + return 0; } -static int hws_pool_buddy_get_mem_chunk(struct mlx5hws_pool *pool, - int order, - int *seg) +static int hws_pool_buddy_db_get_chunk(struct mlx5hws_pool *pool, + struct mlx5hws_pool_chunk *chunk) { - struct mlx5hws_buddy_mem *buddy; + struct mlx5hws_buddy_mem *buddy = pool->db.buddy; - buddy = hws_pool_buddy_get_buddy(pool, order); - if (!buddy) - return -ENOMEM; + if (!buddy) { + mlx5hws_err(pool->ctx, "Bad buddy state\n"); + return -EINVAL; + } - *seg = mlx5hws_buddy_alloc_mem(buddy, order); - if (*seg >= 0) + chunk->offset = mlx5hws_buddy_alloc_mem(buddy, chunk->order); + if (chunk->offset >= 0) return 0; return -ENOMEM; } -static int hws_pool_buddy_db_get_chunk(struct mlx5hws_pool *pool, - struct mlx5hws_pool_chunk *chunk) +static void hws_pool_buddy_db_put_chunk(struct mlx5hws_pool *pool, + struct mlx5hws_pool_chunk *chunk) { - int ret = 0; + struct mlx5hws_buddy_mem *buddy; - ret = hws_pool_buddy_get_mem_chunk(pool, chunk->order, - &chunk->offset); - if (ret) - mlx5hws_err(pool->ctx, "Failed to get free slot for chunk with order: %d\n", - chunk->order); + buddy = pool->db.buddy; + if (!buddy) { + mlx5hws_err(pool->ctx, "Bad buddy state\n"); + return; + } - return ret; + mlx5hws_buddy_free_mem(buddy, chunk->offset, chunk->order); } static void hws_pool_buddy_db_uninit(struct mlx5hws_pool *pool) @@ -210,15 +175,13 @@ static void 
hws_pool_buddy_db_uninit(struct mlx5hws_pool *pool) } } -static int hws_pool_buddy_db_init(struct mlx5hws_pool *pool, u32 log_range) +static int hws_pool_buddy_db_init(struct mlx5hws_pool *pool) { - if (pool->flags & MLX5HWS_POOL_FLAGS_ALLOC_MEM_ON_CREATE) { - if (!hws_pool_buddy_get_buddy(pool, log_range)) { - mlx5hws_err(pool->ctx, - "Failed allocating memory on create log_sz: %d\n", log_range); - return -ENOMEM; - } - } + int ret; + + ret = hws_pool_buddy_init(pool); + if (ret) + return ret; pool->p_db_uninit = &hws_pool_buddy_db_uninit; pool->p_get_chunk = &hws_pool_buddy_db_get_chunk; @@ -227,234 +190,105 @@ static int hws_pool_buddy_db_init(struct mlx5hws_pool *pool, u32 log_range) return 0; } -static int hws_pool_create_resource(struct mlx5hws_pool *pool, u32 alloc_size) -{ - int ret = hws_pool_resource_alloc(pool, alloc_size); - - if (ret) { - mlx5hws_err(pool->ctx, "Failed to create resource type: %d: size %d\n", - pool->type, alloc_size); - return ret; - } - - return 0; -} - -static struct mlx5hws_pool_elements * -hws_pool_element_create_new_elem(struct mlx5hws_pool *pool, u32 order) +static unsigned long *hws_pool_create_and_init_bitmap(u32 log_range) { - struct mlx5hws_pool_elements *elem; - u32 alloc_size; - - alloc_size = pool->alloc_log_sz; + unsigned long *bitmap; - elem = kzalloc(sizeof(*elem), GFP_KERNEL); - if (!elem) + bitmap = bitmap_zalloc(1 << log_range, GFP_KERNEL); + if (!bitmap) return NULL; - /* Sharing the same resource, also means that all the elements are with size 1 */ - if ((pool->flags & MLX5HWS_POOL_FLAGS_FIXED_SIZE_OBJECTS) && - !(pool->flags & MLX5HWS_POOL_FLAGS_RESOURCE_PER_CHUNK)) { - /* Currently all chunks in size 1 */ - elem->bitmap = hws_pool_create_and_init_bitmap(alloc_size - order); - if (!elem->bitmap) { - mlx5hws_err(pool->ctx, - "Failed to create bitmap type: %d: size %d\n", - pool->type, alloc_size); - goto free_elem; - } - - elem->log_size = alloc_size - order; - } - - if (hws_pool_create_resource(pool, alloc_size)) { - mlx5hws_err(pool->ctx, "Failed to create resource type: %d: size %d\n", - pool->type, alloc_size); - goto free_db; - } - - pool->db.element = elem; + bitmap_fill(bitmap, 1 << log_range); - return elem; - -free_db: - bitmap_free(elem->bitmap); -free_elem: - kfree(elem); - return NULL; + return bitmap; } -static int hws_pool_element_find_seg(struct mlx5hws_pool_elements *elem, int *seg) +static int hws_pool_bitmap_init(struct mlx5hws_pool *pool) { - unsigned int segment, size; + unsigned long *bitmap; - size = 1 << elem->log_size; - - segment = find_first_bit(elem->bitmap, size); - if (segment >= size) { - elem->is_full = true; + bitmap = hws_pool_create_and_init_bitmap(pool->alloc_log_sz); + if (!bitmap) { + mlx5hws_err(pool->ctx, "Failed to create bitmap order: %zu\n", + pool->alloc_log_sz); return -ENOMEM; } - bitmap_clear(elem->bitmap, segment, 1); - *seg = segment; - return 0; -} - -static int -hws_pool_onesize_element_get_mem_chunk(struct mlx5hws_pool *pool, u32 order, - int *seg) -{ - struct mlx5hws_pool_elements *elem; - - elem = pool->db.element; - if (!elem) - elem = hws_pool_element_create_new_elem(pool, order); - if (!elem) - goto err_no_elem; - - if (hws_pool_element_find_seg(elem, seg) != 0) { - mlx5hws_err(pool->ctx, "No more resources (last request order: %d)\n", order); + if (hws_pool_resource_alloc(pool) != 0) { + mlx5hws_err(pool->ctx, "Failed to create resource type: %d: size %zu\n", + pool->type, pool->alloc_log_sz); + bitmap_free(bitmap); return -ENOMEM; } - elem->num_of_elements++; - return 0; + 
pool->db.bitmap = bitmap; -err_no_elem: - mlx5hws_err(pool->ctx, "Failed to allocate element for order: %d\n", order); - return -ENOMEM; + return 0; } -static int hws_pool_general_element_get_mem_chunk(struct mlx5hws_pool *pool, - u32 order, int *seg) +static int hws_pool_bitmap_db_get_chunk(struct mlx5hws_pool *pool, + struct mlx5hws_pool_chunk *chunk) { - int ret; + unsigned long *bitmap, size; - if (!pool->resource) { - ret = hws_pool_create_resource(pool, order); - if (ret) - goto err_no_res; - *seg = 0; /* One memory slot in that element */ - return 0; + if (chunk->order != 0) { + mlx5hws_err(pool->ctx, "Pool only supports order 0 allocs\n"); + return -EINVAL; } - mlx5hws_err(pool->ctx, "No more resources (last request order: %d)\n", order); - return -ENOMEM; - -err_no_res: - mlx5hws_err(pool->ctx, "Failed to allocate element for order: %d\n", order); - return -ENOMEM; -} - -static int hws_pool_general_element_db_get_chunk(struct mlx5hws_pool *pool, - struct mlx5hws_pool_chunk *chunk) -{ - int ret; - - ret = hws_pool_general_element_get_mem_chunk(pool, chunk->order, - &chunk->offset); - if (ret) - mlx5hws_err(pool->ctx, "Failed to get free slot for chunk with order: %d\n", - chunk->order); - - return ret; -} + bitmap = pool->db.bitmap; + if (!bitmap) { + mlx5hws_err(pool->ctx, "Bad bitmap state\n"); + return -EINVAL; + } -static void hws_pool_general_element_db_put_chunk(struct mlx5hws_pool *pool, - struct mlx5hws_pool_chunk *chunk) -{ - if (pool->flags & MLX5HWS_POOL_FLAGS_RELEASE_FREE_RESOURCE) - hws_pool_resource_free(pool); -} + size = 1 << pool->alloc_log_sz; -static void hws_pool_general_element_db_uninit(struct mlx5hws_pool *pool) -{ - (void)pool; -} + chunk->offset = find_first_bit(bitmap, size); + if (chunk->offset >= size) + return -ENOMEM; -/* This memory management works as the following: - * - At start doesn't allocate no mem at all. - * - When new request for chunk arrived: - * allocate resource and give it. - * - When free that chunk: - * the resource is freed. 
- */ -static int hws_pool_general_element_db_init(struct mlx5hws_pool *pool) -{ - pool->p_db_uninit = &hws_pool_general_element_db_uninit; - pool->p_get_chunk = &hws_pool_general_element_db_get_chunk; - pool->p_put_chunk = &hws_pool_general_element_db_put_chunk; + bitmap_clear(bitmap, chunk->offset, 1); return 0; } -static void -hws_onesize_element_db_destroy_element(struct mlx5hws_pool *pool, - struct mlx5hws_pool_elements *elem) -{ - hws_pool_resource_free(pool); - bitmap_free(elem->bitmap); - kfree(elem); - pool->db.element = NULL; -} - -static void hws_onesize_element_db_put_chunk(struct mlx5hws_pool *pool, - struct mlx5hws_pool_chunk *chunk) +static void hws_pool_bitmap_db_put_chunk(struct mlx5hws_pool *pool, + struct mlx5hws_pool_chunk *chunk) { - struct mlx5hws_pool_elements *elem; + unsigned long *bitmap; - elem = pool->db.element; - if (!elem) { - mlx5hws_err(pool->ctx, "Pool element was not allocated\n"); + bitmap = pool->db.bitmap; + if (!bitmap) { + mlx5hws_err(pool->ctx, "Bad bitmap state\n"); return; } - bitmap_set(elem->bitmap, chunk->offset, 1); - elem->is_full = false; - elem->num_of_elements--; - - if (pool->flags & MLX5HWS_POOL_FLAGS_RELEASE_FREE_RESOURCE && - !elem->num_of_elements) - hws_onesize_element_db_destroy_element(pool, elem); + bitmap_set(bitmap, chunk->offset, 1); } -static int hws_onesize_element_db_get_chunk(struct mlx5hws_pool *pool, - struct mlx5hws_pool_chunk *chunk) +static void hws_pool_bitmap_db_uninit(struct mlx5hws_pool *pool) { - int ret = 0; - - ret = hws_pool_onesize_element_get_mem_chunk(pool, chunk->order, - &chunk->offset); - if (ret) - mlx5hws_err(pool->ctx, "Failed to get free slot for chunk with order: %d\n", - chunk->order); + unsigned long *bitmap; - return ret; + bitmap = pool->db.bitmap; + if (bitmap) { + bitmap_free(bitmap); + pool->db.bitmap = NULL; + } } -static void hws_onesize_element_db_uninit(struct mlx5hws_pool *pool) +static int hws_pool_bitmap_db_init(struct mlx5hws_pool *pool) { - struct mlx5hws_pool_elements *elem = pool->db.element; + int ret; - if (elem) { - bitmap_free(elem->bitmap); - kfree(elem); - pool->db.element = NULL; - } -} + ret = hws_pool_bitmap_init(pool); + if (ret) + return ret; -/* This memory management works as the following: - * - At start doesn't allocate no mem at all. - * - When new request for chunk arrived: - * aloocate the first and only slot of memory/resource - * when it ended return error. 
- */ -static int hws_pool_onesize_element_db_init(struct mlx5hws_pool *pool) -{ - pool->p_db_uninit = &hws_onesize_element_db_uninit; - pool->p_get_chunk = &hws_onesize_element_db_get_chunk; - pool->p_put_chunk = &hws_onesize_element_db_put_chunk; + pool->p_db_uninit = &hws_pool_bitmap_db_uninit; + pool->p_get_chunk = &hws_pool_bitmap_db_get_chunk; + pool->p_put_chunk = &hws_pool_bitmap_db_put_chunk; return 0; } @@ -464,15 +298,14 @@ static int hws_pool_db_init(struct mlx5hws_pool *pool, { int ret; - if (db_type == MLX5HWS_POOL_DB_TYPE_GENERAL_SIZE) - ret = hws_pool_general_element_db_init(pool); - else if (db_type == MLX5HWS_POOL_DB_TYPE_ONE_SIZE_RESOURCE) - ret = hws_pool_onesize_element_db_init(pool); + if (db_type == MLX5HWS_POOL_DB_TYPE_BITMAP) + ret = hws_pool_bitmap_db_init(pool); else - ret = hws_pool_buddy_db_init(pool, pool->alloc_log_sz); + ret = hws_pool_buddy_db_init(pool); if (ret) { - mlx5hws_err(pool->ctx, "Failed to init general db : %d (ret: %d)\n", db_type, ret); + mlx5hws_err(pool->ctx, "Failed to init pool type: %d (ret: %d)\n", + db_type, ret); return ret; } @@ -521,15 +354,10 @@ mlx5hws_pool_create(struct mlx5hws_context *ctx, struct mlx5hws_pool_attr *pool_ pool->tbl_type = pool_attr->table_type; pool->opt_type = pool_attr->opt_type; - /* Support general db */ - if (pool->flags == (MLX5HWS_POOL_FLAGS_RELEASE_FREE_RESOURCE | - MLX5HWS_POOL_FLAGS_RESOURCE_PER_CHUNK)) - res_db_type = MLX5HWS_POOL_DB_TYPE_GENERAL_SIZE; - else if (pool->flags == (MLX5HWS_POOL_FLAGS_ONE_RESOURCE | - MLX5HWS_POOL_FLAGS_FIXED_SIZE_OBJECTS)) - res_db_type = MLX5HWS_POOL_DB_TYPE_ONE_SIZE_RESOURCE; - else + if (pool->flags & MLX5HWS_POOL_FLAG_BUDDY) res_db_type = MLX5HWS_POOL_DB_TYPE_BUDDY; + else + res_db_type = MLX5HWS_POOL_DB_TYPE_BITMAP; pool->alloc_log_sz = pool_attr->alloc_log_sz; @@ -545,7 +373,7 @@ mlx5hws_pool_create(struct mlx5hws_context *ctx, struct mlx5hws_pool_attr *pool_ return NULL; } -int mlx5hws_pool_destroy(struct mlx5hws_pool *pool) +void mlx5hws_pool_destroy(struct mlx5hws_pool *pool) { mutex_destroy(&pool->lock); @@ -555,5 +383,4 @@ int mlx5hws_pool_destroy(struct mlx5hws_pool *pool) hws_pool_db_unint(pool); kfree(pool); - return 0; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.h index 112a61cd2997..9a781a87f097 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.h @@ -23,29 +23,10 @@ struct mlx5hws_pool_resource { }; enum mlx5hws_pool_flags { - /* Only a one resource in that pool */ - MLX5HWS_POOL_FLAGS_ONE_RESOURCE = 1 << 0, - MLX5HWS_POOL_FLAGS_RELEASE_FREE_RESOURCE = 1 << 1, - /* No sharing resources between chunks */ - MLX5HWS_POOL_FLAGS_RESOURCE_PER_CHUNK = 1 << 2, - /* All objects are in the same size */ - MLX5HWS_POOL_FLAGS_FIXED_SIZE_OBJECTS = 1 << 3, - /* Managed by buddy allocator */ - MLX5HWS_POOL_FLAGS_BUDDY_MANAGED = 1 << 4, - /* Allocate pool_type memory on pool creation */ - MLX5HWS_POOL_FLAGS_ALLOC_MEM_ON_CREATE = 1 << 5, - - /* These values should be used by the caller */ - MLX5HWS_POOL_FLAGS_FOR_STC_POOL = - MLX5HWS_POOL_FLAGS_ONE_RESOURCE | - MLX5HWS_POOL_FLAGS_FIXED_SIZE_OBJECTS, - MLX5HWS_POOL_FLAGS_FOR_MATCHER_STE_POOL = - MLX5HWS_POOL_FLAGS_RELEASE_FREE_RESOURCE | - MLX5HWS_POOL_FLAGS_RESOURCE_PER_CHUNK, - MLX5HWS_POOL_FLAGS_FOR_STE_ACTION_POOL = - MLX5HWS_POOL_FLAGS_ONE_RESOURCE | - MLX5HWS_POOL_FLAGS_BUDDY_MANAGED | - MLX5HWS_POOL_FLAGS_ALLOC_MEM_ON_CREATE, + /* Managed by a 
buddy allocator. If this is not set only allocations of + * order 0 are supported. + */ + MLX5HWS_POOL_FLAG_BUDDY = BIT(0), }; enum mlx5hws_pool_optimize { @@ -64,25 +45,16 @@ struct mlx5hws_pool_attr { }; enum mlx5hws_db_type { - /* Uses for allocating chunk of big memory, each element has its own resource in the FW*/ - MLX5HWS_POOL_DB_TYPE_GENERAL_SIZE, - /* One resource only, all the elements are with same one size */ - MLX5HWS_POOL_DB_TYPE_ONE_SIZE_RESOURCE, + /* Uses a bitmap, supports only allocations of order 0. */ + MLX5HWS_POOL_DB_TYPE_BITMAP, /* Entries are managed using a buddy mechanism. */ MLX5HWS_POOL_DB_TYPE_BUDDY, }; -struct mlx5hws_pool_elements { - u32 num_of_elements; - unsigned long *bitmap; - u32 log_size; - bool is_full; -}; - struct mlx5hws_pool_db { enum mlx5hws_db_type type; union { - struct mlx5hws_pool_elements *element; + unsigned long *bitmap; struct mlx5hws_buddy_mem *buddy; }; }; @@ -103,7 +75,6 @@ struct mlx5hws_pool { enum mlx5hws_pool_optimize opt_type; struct mlx5hws_pool_resource *resource; struct mlx5hws_pool_resource *mirror_resource; - /* DB */ struct mlx5hws_pool_db db; /* Functions */ mlx5hws_pool_unint_db p_db_uninit; @@ -115,7 +86,7 @@ struct mlx5hws_pool * mlx5hws_pool_create(struct mlx5hws_context *ctx, struct mlx5hws_pool_attr *pool_attr); -int mlx5hws_pool_destroy(struct mlx5hws_pool *pool); +void mlx5hws_pool_destroy(struct mlx5hws_pool *pool); int mlx5hws_pool_chunk_alloc(struct mlx5hws_pool *pool, struct mlx5hws_pool_chunk *chunk);
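Putting the two database types together, a minimal usage sketch under the refactored API follows. hws_example_use_pools is hypothetical, error handling is trimmed, and the id arithmetic assumes the "offset relative to base" convention from pool.h plus the usual driver includes for pr_debug.

static int hws_example_use_pools(struct mlx5hws_pool *stc_pool,
                                 struct mlx5hws_pool *action_ste_pool,
                                 u32 num_stes_log)
{
        struct mlx5hws_pool_chunk stc = { .order = 0 };              /* bitmap db */
        struct mlx5hws_pool_chunk stes = { .order = num_stes_log };  /* buddy db */
        u32 stc_id, ste_base;
        int ret;

        ret = mlx5hws_pool_chunk_alloc(stc_pool, &stc);
        if (ret)
                return ret;

        ret = mlx5hws_pool_chunk_alloc(action_ste_pool, &stes);
        if (ret)
                goto free_stc;

        /* Absolute object ids = pool base id + chunk offset. */
        stc_id = mlx5hws_pool_get_base_id(stc_pool) + stc.offset;
        ste_base = mlx5hws_pool_get_base_id(action_ste_pool) + stes.offset;
        pr_debug("example: stc id %u, action STE base %u\n", stc_id, ste_base);

        mlx5hws_pool_chunk_free(action_ste_pool, &stes);
free_stc:
        mlx5hws_pool_chunk_free(stc_pool, &stc);
        return ret;
}

Non-buddy pools reject any order above 0, so only the action STE pool ever asks for multi-entry blocks here.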
From patchwork Thu Apr 10 19:17:35 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 14047156
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn"
CC: Gal Pressman , Leon Romanovsky , "Saeed Mahameed" , Leon Romanovsky , Tariq Toukan , , , , Moshe Shemesh , Mark Bloch , Vlad Dogaru , Yevgeny Kliteynik , Michal Kubiak
Subject: [PATCH net-next V2 05/12] net/mlx5: HWS, Cleanup after pool refactoring
Date: Thu, 10 Apr 2025 22:17:35 +0300
Message-ID: <1744312662-356571-6-git-send-email-tariqt@nvidia.com>
In-Reply-To: <1744312662-356571-1-git-send-email-tariqt@nvidia.com>
References: <1744312662-356571-1-git-send-email-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Vlad Dogaru

Remove members that are no longer used. In fact, many of the `struct mlx5hws_pool_chunk` instances were never written to beyond being initialized, yet they were passed around various internals.

Also clean up some local variables that made more sense when the API was thicker.

Signed-off-by: Vlad Dogaru
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
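The direction of the cleanup can be summarized in one spot: RTC attributes are now derived directly from the pool, with no per-matcher chunk or ste_offset to carry around. A hypothetical sketch, with field names as in the cmd.h and matcher.h hunks below; hws_example_fill_match_rtc is not part of the patch.

static void hws_example_fill_match_rtc(struct mlx5hws_context *ctx,
                                       struct mlx5hws_matcher_match_ste *match_ste,
                                       struct mlx5hws_cmd_rtc_create_attr *rtc_attr)
{
        rtc_attr->pd = ctx->pd_num;
        /* The STE range is named by its base id alone. */
        rtc_attr->ste_base = mlx5hws_pool_get_base_id(match_ste->pool);
        /* STC is a single resource; any STC pool base works for the id. */
        rtc_attr->stc_base = mlx5hws_pool_get_base_id(ctx->stc_pool);
}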
--- .../mellanox/mlx5/core/steering/hws/action.c | 5 -- .../mellanox/mlx5/core/steering/hws/cmd.c | 1 - .../mellanox/mlx5/core/steering/hws/cmd.h | 1 - .../mellanox/mlx5/core/steering/hws/matcher.c | 47 ++++++------------- .../mellanox/mlx5/core/steering/hws/matcher.h | 2 - 5 files changed, 14 insertions(+), 42 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c index 39904b337b81..161ad720b339 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c @@ -1583,7 +1583,6 @@ hws_action_create_dest_match_range_table(struct mlx5hws_context *ctx, struct mlx5hws_matcher_action_ste *table_ste; struct mlx5hws_pool_attr pool_attr = {0}; struct mlx5hws_pool *ste_pool, *stc_pool; - struct mlx5hws_pool_chunk *ste; u32 *rtc_0_id, *rtc_1_id; u32 obj_id; int ret; @@ -1613,8 +1612,6 @@ hws_action_create_dest_match_range_table(struct mlx5hws_context *ctx, rtc_0_id = &table_ste->rtc_0_id; rtc_1_id = &table_ste->rtc_1_id; ste_pool = table_ste->pool; - ste = &table_ste->ste; - ste->order = 1; rtc_attr.log_size = 0; rtc_attr.log_depth = 0; @@ -1630,7 +1627,6 @@ hws_action_create_dest_match_range_table(struct mlx5hws_context *ctx, rtc_attr.pd = ctx->pd_num; rtc_attr.ste_base = obj_id; - rtc_attr.ste_offset = ste->offset; rtc_attr.reparse_mode = mlx5hws_context_get_reparse_mode(ctx); rtc_attr.table_type = mlx5hws_table_get_res_fw_ft_type(MLX5HWS_TABLE_TYPE_FDB, false); @@ -1833,7 +1829,6 @@ mlx5hws_action_create_dest_match_range(struct mlx5hws_context *ctx, stc_attr.action_offset = MLX5HWS_ACTION_OFFSET_HIT; stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE; stc_attr.reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE; - stc_attr.ste_table.ste = table_ste->ste; stc_attr.ste_table.ste_pool = table_ste->pool; stc_attr.ste_table.match_definer_id = ctx->caps->trivial_match_definer; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.c index e8f98c109b99..9c83753e4592 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.c @@ -406,7 +406,6 @@ int mlx5hws_cmd_rtc_create(struct mlx5_core_dev *mdev, MLX5_SET(rtc, attr, match_definer_1, rtc_attr->match_definer_1); MLX5_SET(rtc, attr, stc_id, rtc_attr->stc_base); MLX5_SET(rtc, attr, ste_table_base_id, rtc_attr->ste_base); - MLX5_SET(rtc, attr, ste_table_offset, rtc_attr->ste_offset); MLX5_SET(rtc, attr, miss_flow_table_id, rtc_attr->miss_ft_id); MLX5_SET(rtc, attr, reparse_mode, rtc_attr->reparse_mode); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.h index
51d9e0291ac1..fa6bff210266 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.h @@ -70,7 +70,6 @@ struct mlx5hws_cmd_rtc_create_attr { u32 pd; u32 stc_base; u32 ste_base; - u32 ste_offset; u32 miss_ft_id; bool fw_gen_wqe; u8 update_index_mode; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c index 95d31fd6c976..3028e0387e3f 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c @@ -197,22 +197,15 @@ static int hws_matcher_disconnect(struct mlx5hws_matcher *matcher) static void hws_matcher_set_rtc_attr_sz(struct mlx5hws_matcher *matcher, struct mlx5hws_cmd_rtc_create_attr *rtc_attr, - enum mlx5hws_matcher_rtc_type rtc_type, bool is_mirror) { - struct mlx5hws_pool_chunk *ste = &matcher->action_ste.ste; enum mlx5hws_matcher_flow_src flow_src = matcher->attr.optimize_flow_src; - bool is_match_rtc = rtc_type == HWS_MATCHER_RTC_TYPE_MATCH; if ((flow_src == MLX5HWS_MATCHER_FLOW_SRC_VPORT && !is_mirror) || (flow_src == MLX5HWS_MATCHER_FLOW_SRC_WIRE && is_mirror)) { /* Optimize FDB RTC */ rtc_attr->log_size = 0; rtc_attr->log_depth = 0; - } else { - /* Keep original values */ - rtc_attr->log_size = is_match_rtc ? matcher->attr.table.sz_row_log : ste->order; - rtc_attr->log_depth = is_match_rtc ? matcher->attr.table.sz_col_log : 0; } } @@ -225,8 +218,7 @@ static int hws_matcher_create_rtc(struct mlx5hws_matcher *matcher, struct mlx5hws_context *ctx = matcher->tbl->ctx; struct mlx5hws_matcher_action_ste *action_ste; struct mlx5hws_table *tbl = matcher->tbl; - struct mlx5hws_pool *ste_pool, *stc_pool; - struct mlx5hws_pool_chunk *ste; + struct mlx5hws_pool *ste_pool; u32 *rtc_0_id, *rtc_1_id; u32 obj_id; int ret; @@ -236,8 +228,6 @@ static int hws_matcher_create_rtc(struct mlx5hws_matcher *matcher, rtc_0_id = &matcher->match_ste.rtc_0_id; rtc_1_id = &matcher->match_ste.rtc_1_id; ste_pool = matcher->match_ste.pool; - ste = &matcher->match_ste.ste; - ste->order = attr->table.sz_col_log + attr->table.sz_row_log; rtc_attr.log_size = attr->table.sz_row_log; rtc_attr.log_depth = attr->table.sz_col_log; @@ -273,16 +263,15 @@ static int hws_matcher_create_rtc(struct mlx5hws_matcher *matcher, rtc_0_id = &action_ste->rtc_0_id; rtc_1_id = &action_ste->rtc_1_id; ste_pool = action_ste->pool; - ste = &action_ste->ste; /* Action RTC size calculation: * log((max number of rules in matcher) * * (max number of action STEs per rule) * * (2 to support writing new STEs for update rule)) */ - ste->order = ilog2(roundup_pow_of_two(action_ste->max_stes)) + - attr->table.sz_row_log + - MLX5HWS_MATCHER_ACTION_RTC_UPDATE_MULT; - rtc_attr.log_size = ste->order; + rtc_attr.log_size = + ilog2(roundup_pow_of_two(action_ste->max_stes)) + + attr->table.sz_row_log + + MLX5HWS_MATCHER_ACTION_RTC_UPDATE_MULT; rtc_attr.log_depth = 0; rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET; /* The action STEs use the default always hit definer */ @@ -300,21 +289,19 @@ static int hws_matcher_create_rtc(struct mlx5hws_matcher *matcher, rtc_attr.pd = ctx->pd_num; rtc_attr.ste_base = obj_id; - rtc_attr.ste_offset = ste->offset; rtc_attr.reparse_mode = mlx5hws_context_get_reparse_mode(ctx); rtc_attr.table_type = mlx5hws_table_get_res_fw_ft_type(tbl->type, false); - hws_matcher_set_rtc_attr_sz(matcher, &rtc_attr, rtc_type, false); + hws_matcher_set_rtc_attr_sz(matcher, 
&rtc_attr, false); /* STC is a single resource (obj_id), use any STC for the ID */ - stc_pool = ctx->stc_pool; - obj_id = mlx5hws_pool_get_base_id(stc_pool); + obj_id = mlx5hws_pool_get_base_id(ctx->stc_pool); rtc_attr.stc_base = obj_id; ret = mlx5hws_cmd_rtc_create(ctx->mdev, &rtc_attr, rtc_0_id); if (ret) { mlx5hws_err(ctx, "Failed to create matcher RTC of type %s", hws_matcher_rtc_type_to_str(rtc_type)); - goto free_ste; + return ret; } if (tbl->type == MLX5HWS_TABLE_TYPE_FDB) { @@ -322,9 +309,9 @@ static int hws_matcher_create_rtc(struct mlx5hws_matcher *matcher, rtc_attr.ste_base = obj_id; rtc_attr.table_type = mlx5hws_table_get_res_fw_ft_type(tbl->type, true); - obj_id = mlx5hws_pool_get_base_mirror_id(stc_pool); + obj_id = mlx5hws_pool_get_base_mirror_id(ctx->stc_pool); rtc_attr.stc_base = obj_id; - hws_matcher_set_rtc_attr_sz(matcher, &rtc_attr, rtc_type, true); + hws_matcher_set_rtc_attr_sz(matcher, &rtc_attr, true); ret = mlx5hws_cmd_rtc_create(ctx->mdev, &rtc_attr, rtc_1_id); if (ret) { @@ -338,16 +325,12 @@ static int hws_matcher_create_rtc(struct mlx5hws_matcher *matcher, destroy_rtc_0: mlx5hws_cmd_rtc_destroy(ctx->mdev, *rtc_0_id); -free_ste: - if (rtc_type == HWS_MATCHER_RTC_TYPE_MATCH) - mlx5hws_pool_chunk_free(ste_pool, ste); return ret; } static void hws_matcher_destroy_rtc(struct mlx5hws_matcher *matcher, enum mlx5hws_matcher_rtc_type rtc_type) { - struct mlx5hws_matcher_action_ste *action_ste; struct mlx5hws_table *tbl = matcher->tbl; u32 rtc_0_id, rtc_1_id; @@ -357,18 +340,17 @@ static void hws_matcher_destroy_rtc(struct mlx5hws_matcher *matcher, rtc_1_id = matcher->match_ste.rtc_1_id; break; case HWS_MATCHER_RTC_TYPE_STE_ARRAY: - action_ste = &matcher->action_ste; - rtc_0_id = action_ste->rtc_0_id; - rtc_1_id = action_ste->rtc_1_id; + rtc_0_id = matcher->action_ste.rtc_0_id; + rtc_1_id = matcher->action_ste.rtc_1_id; break; default: return; } if (tbl->type == MLX5HWS_TABLE_TYPE_FDB) - mlx5hws_cmd_rtc_destroy(matcher->tbl->ctx->mdev, rtc_1_id); + mlx5hws_cmd_rtc_destroy(tbl->ctx->mdev, rtc_1_id); - mlx5hws_cmd_rtc_destroy(matcher->tbl->ctx->mdev, rtc_0_id); + mlx5hws_cmd_rtc_destroy(tbl->ctx->mdev, rtc_0_id); } static int @@ -564,7 +546,6 @@ static int hws_matcher_bind_at(struct mlx5hws_matcher *matcher) stc_attr.action_offset = MLX5HWS_ACTION_OFFSET_HIT; stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE; stc_attr.reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE; - stc_attr.ste_table.ste = action_ste->ste; stc_attr.ste_table.ste_pool = action_ste->pool; stc_attr.ste_table.match_definer_id = ctx->caps->trivial_match_definer; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h index 20b32012c418..0450b6175ac9 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h @@ -45,14 +45,12 @@ struct mlx5hws_match_template { }; struct mlx5hws_matcher_match_ste { - struct mlx5hws_pool_chunk ste; u32 rtc_0_id; u32 rtc_1_id; struct mlx5hws_pool *pool; }; struct mlx5hws_matcher_action_ste { - struct mlx5hws_pool_chunk ste; struct mlx5hws_pool_chunk stc; u32 rtc_0_id; u32 rtc_1_id; From patchwork Thu Apr 10 19:17:36 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 14047154 X-Patchwork-Delegate: kuba@kernel.org Received: from NAM12-DM6-obe.outbound.protection.outlook.com 
From: Tariq Toukan To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn" CC: Gal Pressman , Leon Romanovsky , "Saeed Mahameed" , Leon Romanovsky , Tariq Toukan , , , , Moshe Shemesh , Mark Bloch , Vlad Dogaru , Yevgeny Kliteynik , Michal Kubiak Subject: [PATCH net-next V2 06/12] net/mlx5: HWS, Add fullness tracking to pool Date: Thu, 10 Apr 2025 22:17:36 +0300 Message-ID: <1744312662-356571-7-git-send-email-tariqt@nvidia.com> X-Mailer: git-send-email 2.8.0 In-Reply-To: <1744312662-356571-1-git-send-email-tariqt@nvidia.com> References: <1744312662-356571-1-git-send-email-tariqt@nvidia.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org
DS0PR12MB8562 X-Patchwork-Delegate: kuba@kernel.org From: Vlad Dogaru Future users will need to query whether a pool is empty. Signed-off-by: Vlad Dogaru Reviewed-by: Yevgeny Kliteynik Reviewed-by: Mark Bloch Signed-off-by: Tariq Toukan Reviewed-by: Michal Kubiak --- .../mellanox/mlx5/core/steering/hws/pool.c | 7 ++++++ .../mellanox/mlx5/core/steering/hws/pool.h | 25 +++++++++++++++++++ 2 files changed, 32 insertions(+) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c index 270b333faab3..26d85fe3c417 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c @@ -324,6 +324,8 @@ int mlx5hws_pool_chunk_alloc(struct mlx5hws_pool *pool, mutex_lock(&pool->lock); ret = pool->p_get_chunk(pool, chunk); + if (ret == 0) + pool->available_elems -= 1 << chunk->order; mutex_unlock(&pool->lock); return ret; @@ -334,6 +336,7 @@ void mlx5hws_pool_chunk_free(struct mlx5hws_pool *pool, { mutex_lock(&pool->lock); pool->p_put_chunk(pool, chunk); + pool->available_elems += 1 << chunk->order; mutex_unlock(&pool->lock); } @@ -360,6 +363,7 @@ mlx5hws_pool_create(struct mlx5hws_context *ctx, struct mlx5hws_pool_attr *pool_ res_db_type = MLX5HWS_POOL_DB_TYPE_BITMAP; pool->alloc_log_sz = pool_attr->alloc_log_sz; + pool->available_elems = 1 << pool_attr->alloc_log_sz; if (hws_pool_db_init(pool, res_db_type)) goto free_pool; @@ -377,6 +381,9 @@ void mlx5hws_pool_destroy(struct mlx5hws_pool *pool) { mutex_destroy(&pool->lock); + if (pool->available_elems != 1 << pool->alloc_log_sz) + mlx5hws_err(pool->ctx, "Attempting to destroy non-empty pool\n"); + if (pool->resource) hws_pool_resource_free(pool); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.h index 9a781a87f097..c82760d53e1a 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.h @@ -71,6 +71,7 @@ struct mlx5hws_pool { enum mlx5hws_pool_flags flags; struct mutex lock; /* protect the pool */ size_t alloc_log_sz; + size_t available_elems; enum mlx5hws_table_type tbl_type; enum mlx5hws_pool_optimize opt_type; struct mlx5hws_pool_resource *resource; @@ -103,4 +104,28 @@ static inline u32 mlx5hws_pool_get_base_mirror_id(struct mlx5hws_pool *pool) { return pool->mirror_resource->base_id; } + +static inline bool +mlx5hws_pool_empty(struct mlx5hws_pool *pool) +{ + bool ret; + + mutex_lock(&pool->lock); + ret = pool->available_elems == 0; + mutex_unlock(&pool->lock); + + return ret; +} + +static inline bool +mlx5hws_pool_full(struct mlx5hws_pool *pool) +{ + bool ret; + + mutex_lock(&pool->lock); + ret = pool->available_elems == (1 << pool->alloc_log_sz); + mutex_unlock(&pool->lock); + + return ret; +} #endif /* MLX5HWS_POOL_H_ */ From patchwork Thu Apr 10 19:17:37 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 14047155 X-Patchwork-Delegate: kuba@kernel.org Received: from NAM10-MW2-obe.outbound.protection.outlook.com (mail-mw2nam10on2050.outbound.protection.outlook.com [40.107.94.50]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 58B36293441; Thu, 10 Apr 2025 19:18:54 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; 
From: Tariq Toukan To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn" CC: Gal Pressman , Leon Romanovsky , "Saeed Mahameed" , Leon Romanovsky , Tariq Toukan , , , , Moshe Shemesh , Mark Bloch , Vlad Dogaru , Yevgeny Kliteynik , Michal Kubiak Subject: [PATCH net-next V2 07/12] net/mlx5: HWS, Fix pool size optimization Date: Thu, 10 Apr 2025 22:17:37 +0300 Message-ID: <1744312662-356571-8-git-send-email-tariqt@nvidia.com> X-Mailer: git-send-email 2.8.0 In-Reply-To: <1744312662-356571-1-git-send-email-tariqt@nvidia.com> References: <1744312662-356571-1-git-send-email-tariqt@nvidia.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-Delegate: kuba@kernel.org From: Vlad Dogaru The optimization to create a size-one STE range for the unused direction was broken. The hardware prevents us from creating RTCs over unallocated STE space, so the only reason this has worked so far is because the optimization was never used. Signed-off-by: Vlad Dogaru Reviewed-by: Yevgeny Kliteynik Reviewed-by: Mark Bloch Signed-off-by: Tariq Toukan --- drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c index 26d85fe3c417..7e37d6e9eb83 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c @@ -80,7 +80,7 @@ static int hws_pool_resource_alloc(struct mlx5hws_pool *pool) u32 fw_ft_type, opt_log_range; fw_ft_type = mlx5hws_table_get_res_fw_ft_type(pool->tbl_type, false); - opt_log_range = pool->opt_type == MLX5HWS_POOL_OPTIMIZE_ORIG ?
+ opt_log_range = pool->opt_type == MLX5HWS_POOL_OPTIMIZE_MIRROR ? 0 : pool->alloc_log_sz; resource = hws_pool_create_one_resource(pool, opt_log_range, fw_ft_type); if (!resource) { @@ -94,7 +94,7 @@ static int hws_pool_resource_alloc(struct mlx5hws_pool *pool) struct mlx5hws_pool_resource *mirror_resource; fw_ft_type = mlx5hws_table_get_res_fw_ft_type(pool->tbl_type, true); - opt_log_range = pool->opt_type == MLX5HWS_POOL_OPTIMIZE_MIRROR ? + opt_log_range = pool->opt_type == MLX5HWS_POOL_OPTIMIZE_ORIG ? 0 : pool->alloc_log_sz; mirror_resource = hws_pool_create_one_resource(pool, opt_log_range, fw_ft_type); if (!mirror_resource) {
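For readability, the corrected mapping can be summarized in a small standalone helper. This is an illustrative sketch only, not part of the patch; it assumes the mlx5hws_pool_optimize values from pool.h, where MLX5HWS_POOL_OPTIMIZE_ORIG marks an rx-only pool and MLX5HWS_POOL_OPTIMIZE_MIRROR a tx-only pool, and that log size 0 yields the size-one STE range.

/* Illustrative only: pick the log range for one direction of a pool.
 * mirror selects the TX-side resource; otherwise the RX-side one.
 */
static size_t hws_opt_log_range(enum mlx5hws_pool_optimize opt,
				bool mirror, size_t alloc_log_sz)
{
	/* Only the direction the pool does not use may shrink to one STE. */
	if ((!mirror && opt == MLX5HWS_POOL_OPTIMIZE_MIRROR) ||
	    (mirror && opt == MLX5HWS_POOL_OPTIMIZE_ORIG))
		return 0;

	return alloc_log_sz;
}

With this mapping, an rx-only pool keeps the full RX range and a single-STE TX range, and vice versa; the broken version shrank the direction that was actually in use.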
From patchwork Thu Apr 10 19:17:38 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 14047158 X-Patchwork-Delegate: kuba@kernel.org From: Tariq Toukan To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn" CC: Gal Pressman , Leon Romanovsky , "Saeed Mahameed" , Leon Romanovsky , Tariq Toukan , , , , Moshe Shemesh , Mark Bloch , Vlad Dogaru , Yevgeny Kliteynik , Michal Kubiak Subject: [PATCH net-next V2 08/12] net/mlx5: HWS, Implement action STE pool Date: Thu, 10 Apr 2025 22:17:38 +0300 Message-ID: <1744312662-356571-9-git-send-email-tariqt@nvidia.com> X-Mailer: git-send-email 2.8.0 In-Reply-To: <1744312662-356571-1-git-send-email-tariqt@nvidia.com> References: <1744312662-356571-1-git-send-email-tariqt@nvidia.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org
CH2PR12MB4181 X-Patchwork-Delegate: kuba@kernel.org From: Vlad Dogaru Implement a per-queue pool of action STEs that match STEs can link to, regardless of matcher. The code relies on hints to optimize whether a given rule is added to rx-only, tx-only or both. Correspondingly, action STEs need to be added to different RTC for ingress or egress paths. For rx-and-tx rules, the current rule implementation dictates that the offsets for a given rule must be the same in both RTCs. To avoid wasting STEs, each action STE pool element holds 3 pools: rx-only, tx-only, and rx-and-tx, corresponding to the possible values of the pool optimization enum. The implementation then chooses at rule creation / update which of these elements to allocate from. Each element holds multiple action STE tables, which wrap an RTC, an STE range, the logic to buddy-allocate offsets from the range, and an STC that allows match STEs to point to this table. When allocating offsets from an element, we iterate through available action STE tables and, if needed, create a new table. Similar to the previous implementation, this iteration does not free any resources. This is implemented in a subsequent patch. Signed-off-by: Vlad Dogaru Reviewed-by: Yevgeny Kliteynik Reviewed-by: Mark Bloch Signed-off-by: Tariq Toukan Reviewed-by: Michal Kubiak --- .../net/ethernet/mellanox/mlx5/core/Makefile | 3 +- .../mlx5/core/steering/hws/action_ste_pool.c | 387 ++++++++++++++++++ .../mlx5/core/steering/hws/action_ste_pool.h | 58 +++ .../mellanox/mlx5/core/steering/hws/context.c | 7 + .../mellanox/mlx5/core/steering/hws/context.h | 1 + .../mlx5/core/steering/hws/internal.h | 1 + .../mellanox/mlx5/core/steering/hws/pool.h | 1 + 7 files changed, 457 insertions(+), 1 deletion(-) create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action_ste_pool.c create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action_ste_pool.h diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Makefile b/drivers/net/ethernet/mellanox/mlx5/core/Makefile index 568bbe5f83f5..d292e6a9e22c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/Makefile +++ b/drivers/net/ethernet/mellanox/mlx5/core/Makefile @@ -154,7 +154,8 @@ mlx5_core-$(CONFIG_MLX5_HW_STEERING) += steering/hws/cmd.o \ steering/hws/vport.o \ steering/hws/bwc_complex.o \ steering/hws/fs_hws_pools.o \ - steering/hws/fs_hws.o + steering/hws/fs_hws.o \ + steering/hws/action_ste_pool.o # # SF device diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action_ste_pool.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action_ste_pool.c new file mode 100644 index 000000000000..cb6ad8411631 --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action_ste_pool.c @@ -0,0 +1,387 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* Copyright (c) 2025 NVIDIA Corporation & Affiliates */ + +#include "internal.h" + +static const char * +hws_pool_opt_to_str(enum mlx5hws_pool_optimize opt) +{ + switch (opt) { + case MLX5HWS_POOL_OPTIMIZE_NONE: + return "rx-and-tx"; + case MLX5HWS_POOL_OPTIMIZE_ORIG: + return "rx-only"; + case MLX5HWS_POOL_OPTIMIZE_MIRROR: + return "tx-only"; + default: + return "unknown"; + } +} + +static int +hws_action_ste_table_create_pool(struct mlx5hws_context *ctx, + struct mlx5hws_action_ste_table *action_tbl, + enum mlx5hws_pool_optimize opt, size_t log_sz) +{ + struct mlx5hws_pool_attr pool_attr = { 0 }; + + pool_attr.pool_type = MLX5HWS_POOL_TYPE_STE; + pool_attr.table_type = MLX5HWS_TABLE_TYPE_FDB; + pool_attr.flags = 
MLX5HWS_POOL_FLAG_BUDDY; + pool_attr.opt_type = opt; + pool_attr.alloc_log_sz = log_sz; + + action_tbl->pool = mlx5hws_pool_create(ctx, &pool_attr); + if (!action_tbl->pool) { + mlx5hws_err(ctx, "Failed to allocate STE pool\n"); + return -EINVAL; + } + + return 0; +} + +static int hws_action_ste_table_create_single_rtc( + struct mlx5hws_context *ctx, + struct mlx5hws_action_ste_table *action_tbl, + enum mlx5hws_pool_optimize opt, size_t log_sz, bool tx) +{ + struct mlx5hws_cmd_rtc_create_attr rtc_attr = { 0 }; + u32 *rtc_id; + + rtc_attr.log_depth = 0; + rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET; + /* Action STEs use the default always hit definer. */ + rtc_attr.match_definer_0 = ctx->caps->trivial_match_definer; + rtc_attr.is_frst_jumbo = false; + rtc_attr.miss_ft_id = 0; + rtc_attr.pd = ctx->pd_num; + rtc_attr.reparse_mode = mlx5hws_context_get_reparse_mode(ctx); + + if (tx) { + rtc_attr.table_type = FS_FT_FDB_TX; + rtc_attr.ste_base = + mlx5hws_pool_get_base_mirror_id(action_tbl->pool); + rtc_attr.stc_base = + mlx5hws_pool_get_base_mirror_id(ctx->stc_pool); + rtc_attr.log_size = + opt == MLX5HWS_POOL_OPTIMIZE_ORIG ? 0 : log_sz; + rtc_id = &action_tbl->rtc_1_id; + } else { + rtc_attr.table_type = FS_FT_FDB_RX; + rtc_attr.ste_base = mlx5hws_pool_get_base_id(action_tbl->pool); + rtc_attr.stc_base = mlx5hws_pool_get_base_id(ctx->stc_pool); + rtc_attr.log_size = + opt == MLX5HWS_POOL_OPTIMIZE_MIRROR ? 0 : log_sz; + rtc_id = &action_tbl->rtc_0_id; + } + + return mlx5hws_cmd_rtc_create(ctx->mdev, &rtc_attr, rtc_id); +} + +static int +hws_action_ste_table_create_rtcs(struct mlx5hws_context *ctx, + struct mlx5hws_action_ste_table *action_tbl, + enum mlx5hws_pool_optimize opt, size_t log_sz) +{ + int err; + + err = hws_action_ste_table_create_single_rtc(ctx, action_tbl, opt, + log_sz, false); + if (err) + return err; + + err = hws_action_ste_table_create_single_rtc(ctx, action_tbl, opt, + log_sz, true); + if (err) { + mlx5hws_cmd_rtc_destroy(ctx->mdev, action_tbl->rtc_0_id); + return err; + } + + return 0; +} + +static void +hws_action_ste_table_destroy_rtcs(struct mlx5hws_action_ste_table *action_tbl) +{ + mlx5hws_cmd_rtc_destroy(action_tbl->pool->ctx->mdev, + action_tbl->rtc_1_id); + mlx5hws_cmd_rtc_destroy(action_tbl->pool->ctx->mdev, + action_tbl->rtc_0_id); +} + +static int +hws_action_ste_table_create_stc(struct mlx5hws_context *ctx, + struct mlx5hws_action_ste_table *action_tbl) +{ + struct mlx5hws_cmd_stc_modify_attr stc_attr = { 0 }; + + stc_attr.action_offset = MLX5HWS_ACTION_OFFSET_HIT; + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE; + stc_attr.reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE; + stc_attr.ste_table.ste_pool = action_tbl->pool; + stc_attr.ste_table.match_definer_id = ctx->caps->trivial_match_definer; + + return mlx5hws_action_alloc_single_stc(ctx, &stc_attr, + MLX5HWS_TABLE_TYPE_FDB, + &action_tbl->stc); +} + +static struct mlx5hws_action_ste_table * +hws_action_ste_table_alloc(struct mlx5hws_action_ste_pool_element *parent_elem) +{ + enum mlx5hws_pool_optimize opt = parent_elem->opt; + struct mlx5hws_context *ctx = parent_elem->ctx; + struct mlx5hws_action_ste_table *action_tbl; + size_t log_sz; + int err; + + log_sz = min(parent_elem->log_sz ? 
+ parent_elem->log_sz + + MLX5HWS_ACTION_STE_TABLE_STEP_LOG_SZ : + MLX5HWS_ACTION_STE_TABLE_INIT_LOG_SZ, + MLX5HWS_ACTION_STE_TABLE_MAX_LOG_SZ); + + action_tbl = kzalloc(sizeof(*action_tbl), GFP_KERNEL); + if (!action_tbl) + return ERR_PTR(-ENOMEM); + + err = hws_action_ste_table_create_pool(ctx, action_tbl, opt, log_sz); + if (err) + goto free_tbl; + + err = hws_action_ste_table_create_rtcs(ctx, action_tbl, opt, log_sz); + if (err) + goto destroy_pool; + + err = hws_action_ste_table_create_stc(ctx, action_tbl); + if (err) + goto destroy_rtcs; + + action_tbl->parent_elem = parent_elem; + INIT_LIST_HEAD(&action_tbl->list_node); + list_add(&action_tbl->list_node, &parent_elem->available); + parent_elem->log_sz = log_sz; + + mlx5hws_dbg(ctx, + "Allocated %s action STE table log_sz %zu; STEs (%d, %d); RTCs (%d, %d); STC %d\n", + hws_pool_opt_to_str(opt), log_sz, + mlx5hws_pool_get_base_id(action_tbl->pool), + mlx5hws_pool_get_base_mirror_id(action_tbl->pool), + action_tbl->rtc_0_id, action_tbl->rtc_1_id, + action_tbl->stc.offset); + + return action_tbl; + +destroy_rtcs: + hws_action_ste_table_destroy_rtcs(action_tbl); +destroy_pool: + mlx5hws_pool_destroy(action_tbl->pool); +free_tbl: + kfree(action_tbl); + + return ERR_PTR(err); +} + +static void +hws_action_ste_table_destroy(struct mlx5hws_action_ste_table *action_tbl) +{ + struct mlx5hws_context *ctx = action_tbl->parent_elem->ctx; + + mlx5hws_dbg(ctx, + "Destroying %s action STE table: STEs (%d, %d); RTCs (%d, %d); STC %d\n", + hws_pool_opt_to_str(action_tbl->parent_elem->opt), + mlx5hws_pool_get_base_id(action_tbl->pool), + mlx5hws_pool_get_base_mirror_id(action_tbl->pool), + action_tbl->rtc_0_id, action_tbl->rtc_1_id, + action_tbl->stc.offset); + + mlx5hws_action_free_single_stc(ctx, MLX5HWS_TABLE_TYPE_FDB, + &action_tbl->stc); + hws_action_ste_table_destroy_rtcs(action_tbl); + mlx5hws_pool_destroy(action_tbl->pool); + + list_del(&action_tbl->list_node); + kfree(action_tbl); +} + +static int +hws_action_ste_pool_element_init(struct mlx5hws_context *ctx, + struct mlx5hws_action_ste_pool_element *elem, + enum mlx5hws_pool_optimize opt) +{ + elem->ctx = ctx; + elem->opt = opt; + INIT_LIST_HEAD(&elem->available); + INIT_LIST_HEAD(&elem->full); + + return 0; +} + +static void hws_action_ste_pool_element_destroy( + struct mlx5hws_action_ste_pool_element *elem) +{ + struct mlx5hws_action_ste_table *action_tbl, *p; + + /* This should be empty, but attempt to free its elements anyway. */ + list_for_each_entry_safe(action_tbl, p, &elem->full, list_node) + hws_action_ste_table_destroy(action_tbl); + + list_for_each_entry_safe(action_tbl, p, &elem->available, list_node) + hws_action_ste_table_destroy(action_tbl); +} + +static int hws_action_ste_pool_init(struct mlx5hws_context *ctx, + struct mlx5hws_action_ste_pool *pool) +{ + enum mlx5hws_pool_optimize opt; + int err; + + /* Rules which are added for both RX and TX must use the same action STE + * indices for both. If we were to use a single table, then RX-only and + * TX-only rules would waste the unused entries. Thus, we use separate + * table sets for the three cases. 
+ */ + for (opt = MLX5HWS_POOL_OPTIMIZE_NONE; opt < MLX5HWS_POOL_OPTIMIZE_MAX; + opt++) { + err = hws_action_ste_pool_element_init(ctx, &pool->elems[opt], + opt); + if (err) + goto destroy_elems; + } + + return 0; + +destroy_elems: + while (opt-- > MLX5HWS_POOL_OPTIMIZE_NONE) + hws_action_ste_pool_element_destroy(&pool->elems[opt]); + + return err; +} + +static void hws_action_ste_pool_destroy(struct mlx5hws_action_ste_pool *pool) +{ + int opt; + + for (opt = MLX5HWS_POOL_OPTIMIZE_MAX - 1; + opt >= MLX5HWS_POOL_OPTIMIZE_NONE; opt--) + hws_action_ste_pool_element_destroy(&pool->elems[opt]); +} + +int mlx5hws_action_ste_pool_init(struct mlx5hws_context *ctx) +{ + struct mlx5hws_action_ste_pool *pool; + size_t queues = ctx->queues; + int i, err; + + pool = kcalloc(queues, sizeof(*pool), GFP_KERNEL); + if (!pool) + return -ENOMEM; + + for (i = 0; i < queues; i++) { + err = hws_action_ste_pool_init(ctx, &pool[i]); + if (err) + goto free_pool; + } + + ctx->action_ste_pool = pool; + + return 0; + +free_pool: + while (i--) + hws_action_ste_pool_destroy(&pool[i]); + kfree(pool); + + return err; +} + +void mlx5hws_action_ste_pool_uninit(struct mlx5hws_context *ctx) +{ + size_t queues = ctx->queues; + int i; + + for (i = 0; i < queues; i++) + hws_action_ste_pool_destroy(&ctx->action_ste_pool[i]); + + kfree(ctx->action_ste_pool); +} + +static struct mlx5hws_action_ste_pool_element * +hws_action_ste_choose_elem(struct mlx5hws_action_ste_pool *pool, + bool skip_rx, bool skip_tx) +{ + if (skip_rx) + return &pool->elems[MLX5HWS_POOL_OPTIMIZE_MIRROR]; + + if (skip_tx) + return &pool->elems[MLX5HWS_POOL_OPTIMIZE_ORIG]; + + return &pool->elems[MLX5HWS_POOL_OPTIMIZE_NONE]; +} + +static int +hws_action_ste_table_chunk_alloc(struct mlx5hws_action_ste_table *action_tbl, + struct mlx5hws_action_ste_chunk *chunk) +{ + int err; + + err = mlx5hws_pool_chunk_alloc(action_tbl->pool, &chunk->ste); + if (err) + return err; + + chunk->action_tbl = action_tbl; + + return 0; +} + +int mlx5hws_action_ste_chunk_alloc(struct mlx5hws_action_ste_pool *pool, + bool skip_rx, bool skip_tx, + struct mlx5hws_action_ste_chunk *chunk) +{ + struct mlx5hws_action_ste_pool_element *elem; + struct mlx5hws_action_ste_table *action_tbl; + bool found; + int err; + + if (skip_rx && skip_tx) + return -EINVAL; + + elem = hws_action_ste_choose_elem(pool, skip_rx, skip_tx); + + mlx5hws_dbg(elem->ctx, + "Allocating action STEs skip_rx %d skip_tx %d order %d\n", + skip_rx, skip_tx, chunk->ste.order); + + found = false; + list_for_each_entry(action_tbl, &elem->available, list_node) { + if (!hws_action_ste_table_chunk_alloc(action_tbl, chunk)) { + found = true; + break; + } + } + + if (!found) { + action_tbl = hws_action_ste_table_alloc(elem); + if (IS_ERR(action_tbl)) + return PTR_ERR(action_tbl); + + err = hws_action_ste_table_chunk_alloc(action_tbl, chunk); + if (err) + return err; + } + + if (mlx5hws_pool_empty(action_tbl->pool)) + list_move(&action_tbl->list_node, &elem->full); + + return 0; +} + +void mlx5hws_action_ste_chunk_free(struct mlx5hws_action_ste_chunk *chunk) +{ + mlx5hws_dbg(chunk->action_tbl->pool->ctx, + "Freeing action STEs offset %d order %d\n", + chunk->ste.offset, chunk->ste.order); + mlx5hws_pool_chunk_free(chunk->action_tbl->pool, &chunk->ste); + list_move(&chunk->action_tbl->list_node, + &chunk->action_tbl->parent_elem->available); +} diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action_ste_pool.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action_ste_pool.h new file mode 100644 index 
000000000000..2de660a63223 --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action_ste_pool.h @@ -0,0 +1,58 @@ +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ +/* Copyright (c) 2025 NVIDIA Corporation & Affiliates */ + +#ifndef ACTION_STE_POOL_H_ +#define ACTION_STE_POOL_H_ + +#define MLX5HWS_ACTION_STE_TABLE_INIT_LOG_SZ 10 +#define MLX5HWS_ACTION_STE_TABLE_STEP_LOG_SZ 1 +#define MLX5HWS_ACTION_STE_TABLE_MAX_LOG_SZ 20 + +struct mlx5hws_action_ste_pool_element; + +struct mlx5hws_action_ste_table { + struct mlx5hws_action_ste_pool_element *parent_elem; + /* Wraps the RTC and STE range for this given action. */ + struct mlx5hws_pool *pool; + /* Match STEs use this STC to jump to this pool's RTC. */ + struct mlx5hws_pool_chunk stc; + u32 rtc_0_id; + u32 rtc_1_id; + struct list_head list_node; +}; + +struct mlx5hws_action_ste_pool_element { + struct mlx5hws_context *ctx; + size_t log_sz; /* Size of the largest table so far. */ + enum mlx5hws_pool_optimize opt; + struct list_head available; + struct list_head full; +}; + +/* Central repository of action STEs. The context contains one of these pools + * per queue. + */ +struct mlx5hws_action_ste_pool { + struct mlx5hws_action_ste_pool_element elems[MLX5HWS_POOL_OPTIMIZE_MAX]; +}; + +/* A chunk of STEs and the table it was allocated from. Used by rules. */ +struct mlx5hws_action_ste_chunk { + struct mlx5hws_action_ste_table *action_tbl; + struct mlx5hws_pool_chunk ste; +}; + +int mlx5hws_action_ste_pool_init(struct mlx5hws_context *ctx); + +void mlx5hws_action_ste_pool_uninit(struct mlx5hws_context *ctx); + +/* Callers are expected to fill chunk->ste.order. On success, this function + * populates chunk->tbl and chunk->ste.offset. + */ +int mlx5hws_action_ste_chunk_alloc(struct mlx5hws_action_ste_pool *pool, + bool skip_rx, bool skip_tx, + struct mlx5hws_action_ste_chunk *chunk); + +void mlx5hws_action_ste_chunk_free(struct mlx5hws_action_ste_chunk *chunk); + +#endif /* ACTION_STE_POOL_H_ */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/context.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/context.c index b7cb736b74d7..428dae869706 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/context.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/context.c @@ -158,10 +158,16 @@ static int hws_context_init_hws(struct mlx5hws_context *ctx, if (ret) goto pools_uninit; + ret = mlx5hws_action_ste_pool_init(ctx); + if (ret) + goto close_queues; + INIT_LIST_HEAD(&ctx->tbl_list); return 0; +close_queues: + mlx5hws_send_queues_close(ctx); pools_uninit: hws_context_pools_uninit(ctx); uninit_pd: @@ -174,6 +180,7 @@ static void hws_context_uninit_hws(struct mlx5hws_context *ctx) if (!(ctx->flags & MLX5HWS_CONTEXT_FLAG_HWS_SUPPORT)) return; + mlx5hws_action_ste_pool_uninit(ctx); mlx5hws_send_queues_close(ctx); hws_context_pools_uninit(ctx); hws_context_uninit_pd(ctx); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/context.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/context.h index 38c3647444ad..e987e93bbc6e 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/context.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/context.h @@ -39,6 +39,7 @@ struct mlx5hws_context { struct mlx5hws_cmd_query_caps *caps; u32 pd_num; struct mlx5hws_pool *stc_pool; + struct mlx5hws_action_ste_pool *action_ste_pool; /* One per queue */ struct mlx5hws_context_common_res common_res; struct mlx5hws_pattern_cache *pattern_cache; struct 
mlx5hws_definer_cache *definer_cache; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/internal.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/internal.h index 30ccd635b505..21279d503117 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/internal.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/internal.h @@ -17,6 +17,7 @@ #include "context.h" #include "table.h" #include "send.h" +#include "action_ste_pool.h" #include "rule.h" #include "cmd.h" #include "action.h" diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.h index c82760d53e1a..33e33d5f1fb3 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.h @@ -33,6 +33,7 @@ enum mlx5hws_pool_optimize { MLX5HWS_POOL_OPTIMIZE_NONE = 0x0, MLX5HWS_POOL_OPTIMIZE_ORIG = 0x1, MLX5HWS_POOL_OPTIMIZE_MIRROR = 0x2, + MLX5HWS_POOL_OPTIMIZE_MAX = 0x3, }; struct mlx5hws_pool_attr {
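As a usage illustration, a caller on a given queue sizes a chunk by the number of action STEs it needs and draws it from that queue's pool. The following is a hypothetical sketch, not code from this series (the real wiring into rule creation and deletion follows in the next patch); it assumes the helpers declared in action_ste_pool.h above and the per-queue action_ste_pool array added to the context.

/* Hypothetical caller sketch: request action STEs for one rule from the
 * per-queue pool.  The caller fills chunk->ste.order; on success the pool
 * fills chunk->action_tbl and chunk->ste.offset.
 */
static int example_alloc_rule_action_stes(struct mlx5hws_context *ctx,
					  u16 queue_id, u8 num_stes,
					  bool skip_rx, bool skip_tx,
					  struct mlx5hws_action_ste_chunk *chunk)
{
	chunk->ste.order = ilog2(roundup_pow_of_two(num_stes));

	return mlx5hws_action_ste_chunk_alloc(&ctx->action_ste_pool[queue_id],
					      skip_rx, skip_tx, chunk);
}

/* Release is safe only once no match STE points at these action STEs. */
static void example_free_rule_action_stes(struct mlx5hws_action_ste_chunk *chunk)
{
	mlx5hws_action_ste_chunk_free(chunk);
}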
h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=azPN/hZCmcOqR0XJELNcmBtV/iQH5JG/L7rt2bucZRE=; b=DHKnolxiafhoWEIaNOly5QaVwywIUFSgA6qYwIGGo2elqTifugB0l5q9Otx3mlhWR2Ecf9AMr9vkXhXTTZ2Kkqb3jGb7/ymYOMecOSWvBFrCbgy0HaGk8YMlEbetl37iDM0v18orSG2VJqaH9g3RueR+8kALBaT2l/URcldECoNKfWXoGfmftBmcx1LzKQmUV3GES/L0c8srp5Ii6wn4Y34VMNOklbuBXEyi+lw8L0TA/pDMKIGj3w5VBRCvJ49W02rfzkhmQKHf8J9+zLeGnLgE+Ac3k2PXjYt0j9+LKSgn+/qyx/wD+OrUrVB1wxTzcbbP0jyPB+BKBBAjTlZz2Q== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.118.232) smtp.rcpttodomain=davemloft.net smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none (0) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=azPN/hZCmcOqR0XJELNcmBtV/iQH5JG/L7rt2bucZRE=; b=H80oVFCmbJGGu7Clu0wJhv9EM/KP3SF3ABMMpG69vMmrMICFLP0KwvSvLAiWG+4MR3s3fzacU5Te7zGFnO5Bhf0478oVC+iIs8tFuBLFdZp9tMlxORf0ZI/b7+BIgju7XGTiB/I8O0RpqX9JFKV7v7sXGPizoimpkurLZjKTfFNKjeAt1TIu3VjzYiivVJ1yAln46BA8pdCBHml0C4AwU4OhiVt9ekEgs9EfyTuXVK/sLNTwzU6aNlJyINQ1UFkFFkQwB19mN4Ti7N5a9XTpsN9NSy9bWXktCISax1f9IHVe2Pi/yWZmq62FGfQGymh7XsCzXuGm+Mu9yrxZ5h1rmg== Received: from BL1PR13CA0312.namprd13.prod.outlook.com (2603:10b6:208:2c1::17) by DS7PR12MB5790.namprd12.prod.outlook.com (2603:10b6:8:75::18) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.8632.22; Thu, 10 Apr 2025 19:18:58 +0000 Received: from BN1PEPF00004680.namprd03.prod.outlook.com (2603:10b6:208:2c1:cafe::52) by BL1PR13CA0312.outlook.office365.com (2603:10b6:208:2c1::17) with Microsoft SMTP Server (version=TLS1_3, cipher=TLS_AES_256_GCM_SHA384) id 15.20.8655.8 via Frontend Transport; Thu, 10 Apr 2025 19:18:58 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 216.228.118.232) smtp.mailfrom=nvidia.com; dkim=none (message not signed) header.d=none;dmarc=pass action=none header.from=nvidia.com; Received-SPF: Pass (protection.outlook.com: domain of nvidia.com designates 216.228.118.232 as permitted sender) receiver=protection.outlook.com; client-ip=216.228.118.232; helo=mail.nvidia.com; pr=C Received: from mail.nvidia.com (216.228.118.232) by BN1PEPF00004680.mail.protection.outlook.com (10.167.243.85) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.8632.13 via Frontend Transport; Thu, 10 Apr 2025 19:18:57 +0000 Received: from drhqmail203.nvidia.com (10.126.190.182) by mail.nvidia.com (10.127.129.5) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Thu, 10 Apr 2025 12:18:51 -0700 Received: from drhqmail202.nvidia.com (10.126.190.181) by drhqmail203.nvidia.com (10.126.190.182) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.14; Thu, 10 Apr 2025 12:18:50 -0700 Received: from vdi.nvidia.com (10.127.8.10) by mail.nvidia.com (10.126.190.181) with Microsoft SMTP Server id 15.2.1544.14 via Frontend Transport; Thu, 10 Apr 2025 12:18:47 -0700 From: Tariq Toukan To: "David S. 
Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn" CC: Gal Pressman , Leon Romanovsky , "Saeed Mahameed" , Leon Romanovsky , Tariq Toukan , , , , Moshe Shemesh , Mark Bloch , Vlad Dogaru , Yevgeny Kliteynik , Michal Kubiak Subject: [PATCH net-next V2 09/12] net/mlx5: HWS, Use the new action STE pool Date: Thu, 10 Apr 2025 22:17:39 +0300 Message-ID: <1744312662-356571-10-git-send-email-tariqt@nvidia.com> X-Mailer: git-send-email 2.8.0 In-Reply-To: <1744312662-356571-1-git-send-email-tariqt@nvidia.com> References: <1744312662-356571-1-git-send-email-tariqt@nvidia.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-NV-OnPremToCloud: AnonymousSubmission X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: BN1PEPF00004680:EE_|DS7PR12MB5790:EE_ X-MS-Office365-Filtering-Correlation-Id: 6b409416-0023-419b-17b2-08dd78648b01 X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0;ARA:13230040|82310400026|36860700013|376014|7416014|1800799024; X-Microsoft-Antispam-Message-Info: b4+blppCML6VayBieXAmRM4M7p6puwG2aWsMqI81w0xN1+5OERqzad/kkX9BNCGJXsurkSLPFH/frUvOz+St0xb+803IPx0UbSh1sZCiL63Zdi9nZgiooiiJZKii9vxlEUHgqHtkg5KhWGZxjuyGOCvsqnjaAjGVnE+LeFxlEw2+gXvsQsdXxsLp08A1/hrWqHYBY1hSe3/z5+HhttFhm62h++BEppqSm7ybhbw8UQbzcUczDqMqomPG4910Q6UNv6THrhAUtEbcGtLaBYs6f+V8CwiPdIJpmTb8mlqthYnV2MgTuiQMCWpol1y13Ts5XJsmw9ZQDMWVndhBhwn0av+KgrBB/mBNAZTjK+h27dR/jtrvDSFYv8gLI5ama+ESbO82cTL8NmVX6YH+Xr5ycS6d36kE1n3gybQCfGazp57ElIo4Y2M/gf1jXBJmh1VIK7qQaF+MFa2x/1ASkP5GtaKVay/yDUdz/pXZT+5Lrj1lDoEPAQMsuObnN1iTw251RbDiYNWqrGpg635DLARf3HSE8ybGN8VibeXARLK0reEIuziBAmjKEK2hTBj6F31WlPYpFWttUuNEHvpNrGLtVOgK1/YJYgyz4fD3L+lzmenMk7NV/2l6YLal5Hq5AALhm3+ZwuFPgft6GV6ha9AzMKVNE5JLrWodcMMqxuxOv3PV56r2WDqex2AtaWyjYciOthM8MJAUdSPy+wEHyh7UC1BS+aU0s4nIzhY1fUMNGdPkeLQU5MxKR6oZh51ejPBL1FocdM/UTIIL/n3OwhZ8X+q2fKzI2xGzod5+FfnS355VnLsLcPPKIlJlcx+HKMMnwLeEg1q0jT+NUAKBaEpx4uxi7jpcvmMDRsD22ggmSu4lfEoxiz4ZjF9JkYU5SkGYaFZoDE1e8wAnDz6w/Qas8MdQkcUVpVEJKTep33FQ1wJCvVSbrAF0cIj3rY1BDidD3RWRGD0zTjJP4VPcLxotWZdYSmja8oi5CrwH0j/+j9FbxrUO6A8ORVfWYlGu564xDIzJVWmktT8FQrafWMTmmNLYDXeXex2p7VTFrrJVsMQIgs4XxPhBuVuSt/eAoHS+U3O98PWbtTkqyY5ZAFVXkdpvxYFSg8MQDyvORUnNgEz1UuLm8Lm6K/Q6SuG81NFxQBCoRgYHwz0EnTsjmwlpw1/dhyHREROETBI7PksIsZGhp8BHHk6mqKxxz0S1RwRcLWho7tWhCm4QELMkVbNMmvP1aU3f/hqDZt1kw6AK8J+QnEGQwoFbFD1EMus2bZEC+kIuJONRsl9G/Cl+L1rmnQzBiXqotW6VLnfyjiMi7awGW09TdxnZNW64o6bKzrC//eqLzPpkTEa5eG19qF9auUbO+rN6I7DXkejUqvehW59JLc32RMnUk7OA5yywih6EGlBzqgPpks80h8HuQtzJ9RUfnUVuaCpMyF26BmS5qs9IqL1mqN/++zzw9h4fm7J+b+z3iN7jc2ETbYFBzjKg+A0dnCpaE3MLwlEIu3UpBEk= X-Forefront-Antispam-Report: CIP:216.228.118.232;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:mail.nvidia.com;PTR:dc7edge1.nvidia.com;CAT:NONE;SFS:(13230040)(82310400026)(36860700013)(376014)(7416014)(1800799024);DIR:OUT;SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Apr 2025 19:18:57.8927 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 6b409416-0023-419b-17b2-08dd78648b01 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a;Ip=[216.228.118.232];Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: BN1PEPF00004680.namprd03.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR12MB5790 
X-Patchwork-Delegate: kuba@kernel.org From: Vlad Dogaru Use the central action STE pool when creating / updating rules. Signed-off-by: Vlad Dogaru Reviewed-by: Yevgeny Kliteynik Reviewed-by: Mark Bloch Signed-off-by: Tariq Toukan Reviewed-by: Michal Kubiak --- .../mellanox/mlx5/core/steering/hws/rule.c | 69 ++++++++----------- .../mellanox/mlx5/core/steering/hws/rule.h | 12 +--- 2 files changed, 30 insertions(+), 51 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.c index a27a2d5ffc7b..5b758467ed03 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.c @@ -195,44 +195,30 @@ hws_rule_load_delete_info(struct mlx5hws_rule *rule, } } -static int hws_rule_alloc_action_ste(struct mlx5hws_rule *rule) +static int mlx5hws_rule_alloc_action_ste(struct mlx5hws_rule *rule, + u16 queue_id, bool skip_rx, + bool skip_tx) { struct mlx5hws_matcher *matcher = rule->matcher; - struct mlx5hws_matcher_action_ste *action_ste; - struct mlx5hws_pool_chunk ste = {0}; - int ret; - - action_ste = &matcher->action_ste; - ste.order = ilog2(roundup_pow_of_two(action_ste->max_stes)); - ret = mlx5hws_pool_chunk_alloc(action_ste->pool, &ste); - if (unlikely(ret)) { - mlx5hws_err(matcher->tbl->ctx, - "Failed to allocate STE for rule actions"); - return ret; - } - - rule->action_ste.pool = matcher->action_ste.pool; - rule->action_ste.num_stes = matcher->action_ste.max_stes; - rule->action_ste.index = ste.offset; + struct mlx5hws_context *ctx = matcher->tbl->ctx; - return 0; + rule->action_ste.ste.order = + ilog2(roundup_pow_of_two(matcher->action_ste.max_stes)); + return mlx5hws_action_ste_chunk_alloc(&ctx->action_ste_pool[queue_id], + skip_rx, skip_tx, + &rule->action_ste); } -void mlx5hws_rule_free_action_ste(struct mlx5hws_rule_action_ste_info *action_ste) +void mlx5hws_rule_free_action_ste(struct mlx5hws_action_ste_chunk *action_ste) { - struct mlx5hws_pool_chunk ste = {0}; - - if (!action_ste->num_stes) + if (!action_ste->action_tbl) return; - ste.order = ilog2(roundup_pow_of_two(action_ste->num_stes)); - ste.offset = action_ste->index; - /* This release is safe only when the rule match STE was deleted * (when the rule is being deleted) or replaced with the new STE that * isn't pointing to old action STEs (when the rule is being updated). */ - mlx5hws_pool_chunk_free(action_ste->pool, &ste); + mlx5hws_action_ste_chunk_free(action_ste); } static void hws_rule_create_init(struct mlx5hws_rule *rule, @@ -250,22 +236,15 @@ static void hws_rule_create_init(struct mlx5hws_rule *rule, rule->rtc_0 = 0; rule->rtc_1 = 0; - rule->action_ste.pool = NULL; - rule->action_ste.num_stes = 0; - rule->action_ste.index = -1; - rule->status = MLX5HWS_RULE_STATUS_CREATING; } else { rule->status = MLX5HWS_RULE_STATUS_UPDATING; + /* Save the old action STE info so we can free it after writing + * new action STEs and a corresponding match STE. + */ + rule->old_action_ste = rule->action_ste; } - /* Initialize the old action STE info - shallow-copy action_ste. - * In create flow this will set old_action_ste fields to initial values. - * In update flow this will save the existing action STE info, - * so that we will later use it to free old STEs. 
- */ - rule->old_action_ste = rule->action_ste; - rule->pending_wqes = 0; /* Init default send STE attributes */ @@ -277,7 +256,6 @@ static void hws_rule_create_init(struct mlx5hws_rule *rule, /* Init default action apply */ apply->tbl_type = tbl->type; apply->common_res = &ctx->common_res; - apply->jump_to_action_stc = matcher->action_ste.stc.offset; apply->require_dep = 0; } @@ -353,17 +331,24 @@ static int hws_rule_create_hws(struct mlx5hws_rule *rule, if (action_stes) { /* Allocate action STEs for rules that need more than match STE */ - ret = hws_rule_alloc_action_ste(rule); + ret = mlx5hws_rule_alloc_action_ste(rule, attr->queue_id, + !!ste_attr.rtc_0, + !!ste_attr.rtc_1); if (ret) { mlx5hws_err(ctx, "Failed to allocate action memory %d", ret); mlx5hws_send_abort_new_dep_wqe(queue); return ret; } + apply.jump_to_action_stc = + rule->action_ste.action_tbl->stc.offset; /* Skip RX/TX based on the dep_wqe init */ - ste_attr.rtc_0 = dep_wqe->rtc_0 ? matcher->action_ste.rtc_0_id : 0; - ste_attr.rtc_1 = dep_wqe->rtc_1 ? matcher->action_ste.rtc_1_id : 0; + ste_attr.rtc_0 = dep_wqe->rtc_0 ? + rule->action_ste.action_tbl->rtc_0_id : 0; + ste_attr.rtc_1 = dep_wqe->rtc_1 ? + rule->action_ste.action_tbl->rtc_1_id : 0; /* Action STEs are written to a specific index last to first */ - ste_attr.direct_index = rule->action_ste.index + action_stes; + ste_attr.direct_index = + rule->action_ste.ste.offset + action_stes; apply.next_direct_idx = ste_attr.direct_index; } else { apply.next_direct_idx = 0; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.h index b5ee94ac449b..1c47a9c11572 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.h @@ -43,12 +43,6 @@ struct mlx5hws_rule_match_tag { }; }; -struct mlx5hws_rule_action_ste_info { - struct mlx5hws_pool *pool; - int index; /* STE array index */ - u8 num_stes; -}; - struct mlx5hws_rule_resize_info { u32 rtc_0; u32 rtc_1; @@ -64,8 +58,8 @@ struct mlx5hws_rule { struct mlx5hws_rule_match_tag tag; struct mlx5hws_rule_resize_info *resize_info; }; - struct mlx5hws_rule_action_ste_info action_ste; - struct mlx5hws_rule_action_ste_info old_action_ste; + struct mlx5hws_action_ste_chunk action_ste; + struct mlx5hws_action_ste_chunk old_action_ste; u32 rtc_0; /* The RTC into which the STE was inserted */ u32 rtc_1; /* The RTC into which the STE was inserted */ u8 status; /* enum mlx5hws_rule_status */ @@ -75,7 +69,7 @@ struct mlx5hws_rule { */ }; -void mlx5hws_rule_free_action_ste(struct mlx5hws_rule_action_ste_info *action_ste); +void mlx5hws_rule_free_action_ste(struct mlx5hws_action_ste_chunk *action_ste); int mlx5hws_rule_move_hws_remove(struct mlx5hws_rule *rule, void *queue, void *user_data);
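A short aside on the ste.order computation used by the patch above: action STE chunks are sized in powers of two, so a matcher that needs, for example, 5 action STEs per rule ends up with a chunk of 8 (order 3). A minimal sketch of that arithmetic with the kernel's roundup_pow_of_two()/ilog2() helpers; the function name and pr_info() output are purely illustrative:

#include <linux/log2.h>
#include <linux/printk.h>
#include <linux/types.h>

/* Illustrative sketch only: how a per-rule action STE count maps to a
 * power-of-two chunk order, as done in mlx5hws_rule_alloc_action_ste().
 */
static void my_show_chunk_order(u8 max_stes)
{
	unsigned long rounded = roundup_pow_of_two(max_stes); /* e.g. 5 -> 8 */
	int order = ilog2(rounded);                           /* e.g. 8 -> 3 */

	pr_info("max_stes=%u -> chunk of %lu STEs (order %d)\n",
		max_stes, rounded, order);
}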
From patchwork Thu Apr 10 19:17:40 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 14047159
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, Andrew Lunn
CC: Gal Pressman, Leon Romanovsky, Saeed Mahameed, Tariq Toukan, Moshe Shemesh, Mark Bloch, Vlad Dogaru, Yevgeny Kliteynik, Michal Kubiak
Subject: [PATCH net-next V2 10/12] net/mlx5: HWS, Cleanup matcher action STE table
Date: Thu, 10 Apr 2025 22:17:40 +0300
Message-ID: <1744312662-356571-11-git-send-email-tariqt@nvidia.com>
In-Reply-To: <1744312662-356571-1-git-send-email-tariqt@nvidia.com>
References: <1744312662-356571-1-git-send-email-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-Delegate: kuba@kernel.org

From: Vlad Dogaru

Remove the matcher action STE implementation now that the code uses per-queue action STE pools. This also allows simplifying matcher code because it is now only handling a single type of RTC/STE.

The matcher resize data is also going away. Matchers were saving old action STE data because the rules still used it, but now that data lives in the action STE pool and is no longer coupled to a matcher.

Furthermore, matchers no longer need to rehash due to action template addition. If a new action template needs more action STEs, we simply update the matcher's num_of_action_stes and future rules will allocate the correct number. Existing rules are unaffected by such an operation and can continue to use their existing action STEs.

The range action was using the matcher action STE implementation, but there was no reason to do this other than the container fitting the purpose. Extract that information to a separate structure.

Finally, stop dumping per-matcher information about action RTCs, because they no longer exist.
A later patch in this series will add support for dumping action STE pools. Signed-off-by: Vlad Dogaru Reviewed-by: Yevgeny Kliteynik Reviewed-by: Mark Bloch Signed-off-by: Tariq Toukan --- .../mellanox/mlx5/core/steering/hws/action.c | 23 +- .../mellanox/mlx5/core/steering/hws/action.h | 8 +- .../mellanox/mlx5/core/steering/hws/bwc.c | 77 +--- .../mellanox/mlx5/core/steering/hws/debug.c | 17 +- .../mellanox/mlx5/core/steering/hws/matcher.c | 336 ++++-------------- .../mellanox/mlx5/core/steering/hws/matcher.h | 20 +- .../mellanox/mlx5/core/steering/hws/mlx5hws.h | 5 +- .../mellanox/mlx5/core/steering/hws/rule.c | 2 +- 8 files changed, 87 insertions(+), 401 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c index 161ad720b339..bef4d25c1a2a 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c @@ -1574,13 +1574,13 @@ hws_action_create_dest_match_range_definer(struct mlx5hws_context *ctx) return definer; } -static struct mlx5hws_matcher_action_ste * +static struct mlx5hws_range_action_table * hws_action_create_dest_match_range_table(struct mlx5hws_context *ctx, struct mlx5hws_definer *definer, u32 miss_ft_id) { struct mlx5hws_cmd_rtc_create_attr rtc_attr = {0}; - struct mlx5hws_matcher_action_ste *table_ste; + struct mlx5hws_range_action_table *table_ste; struct mlx5hws_pool_attr pool_attr = {0}; struct mlx5hws_pool *ste_pool, *stc_pool; u32 *rtc_0_id, *rtc_1_id; @@ -1669,9 +1669,9 @@ hws_action_create_dest_match_range_table(struct mlx5hws_context *ctx, return NULL; } -static void -hws_action_destroy_dest_match_range_table(struct mlx5hws_context *ctx, - struct mlx5hws_matcher_action_ste *table_ste) +static void hws_action_destroy_dest_match_range_table( + struct mlx5hws_context *ctx, + struct mlx5hws_range_action_table *table_ste) { mutex_lock(&ctx->ctrl_lock); @@ -1683,12 +1683,11 @@ hws_action_destroy_dest_match_range_table(struct mlx5hws_context *ctx, mutex_unlock(&ctx->ctrl_lock); } -static int -hws_action_create_dest_match_range_fill_table(struct mlx5hws_context *ctx, - struct mlx5hws_matcher_action_ste *table_ste, - struct mlx5hws_action *hit_ft_action, - struct mlx5hws_definer *range_definer, - u32 min, u32 max) +static int hws_action_create_dest_match_range_fill_table( + struct mlx5hws_context *ctx, + struct mlx5hws_range_action_table *table_ste, + struct mlx5hws_action *hit_ft_action, + struct mlx5hws_definer *range_definer, u32 min, u32 max) { struct mlx5hws_wqe_gta_data_seg_ste match_wqe_data = {0}; struct mlx5hws_wqe_gta_data_seg_ste range_wqe_data = {0}; @@ -1784,7 +1783,7 @@ mlx5hws_action_create_dest_match_range(struct mlx5hws_context *ctx, u32 min, u32 max, u32 flags) { struct mlx5hws_cmd_stc_modify_attr stc_attr = {0}; - struct mlx5hws_matcher_action_ste *table_ste; + struct mlx5hws_range_action_table *table_ste; struct mlx5hws_action *hit_ft_action; struct mlx5hws_definer *definer; struct mlx5hws_action *action; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.h index 64b76075f7f8..25fa0d4c9221 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.h @@ -118,6 +118,12 @@ struct mlx5hws_action_template { u8 only_term; }; +struct mlx5hws_range_action_table { + struct mlx5hws_pool *pool; + u32 rtc_0_id; + u32 rtc_1_id; 
+}; + struct mlx5hws_action { u8 type; u8 flags; @@ -186,7 +192,7 @@ struct mlx5hws_action { size_t size; } remove_header; struct { - struct mlx5hws_matcher_action_ste *table_ste; + struct mlx5hws_range_action_table *table_ste; struct mlx5hws_action *hit_ft_action; struct mlx5hws_definer *definer; } range; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c index 32de8bfc7644..510bfbbe5991 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c @@ -478,21 +478,9 @@ hws_bwc_matcher_size_maxed_out(struct mlx5hws_bwc_matcher *bwc_matcher) struct mlx5hws_cmd_query_caps *caps = bwc_matcher->matcher->tbl->ctx->caps; /* check the match RTC size */ - if ((bwc_matcher->size_log + - MLX5HWS_MATCHER_ASSURED_MAIN_TBL_DEPTH + - MLX5HWS_BWC_MATCHER_SIZE_LOG_STEP) > - (caps->ste_alloc_log_max - 1)) - return true; - - /* check the action RTC size */ - if ((bwc_matcher->size_log + - MLX5HWS_BWC_MATCHER_SIZE_LOG_STEP + - ilog2(roundup_pow_of_two(bwc_matcher->matcher->action_ste.max_stes)) + - MLX5HWS_MATCHER_ACTION_RTC_UPDATE_MULT) > - (caps->ste_alloc_log_max - 1)) - return true; - - return false; + return (bwc_matcher->size_log + MLX5HWS_MATCHER_ASSURED_MAIN_TBL_DEPTH + + MLX5HWS_BWC_MATCHER_SIZE_LOG_STEP) > + (caps->ste_alloc_log_max - 1); } static bool @@ -779,19 +767,6 @@ hws_bwc_matcher_rehash_size(struct mlx5hws_bwc_matcher *bwc_matcher) return hws_bwc_matcher_move(bwc_matcher); } -static int -hws_bwc_matcher_rehash_at(struct mlx5hws_bwc_matcher *bwc_matcher) -{ - /* Rehash by action template doesn't require any additional checking. - * The bwc_matcher already contains the new action template. - * Just do the usual rehash: - * - create new matcher - * - move all the rules to the new matcher - * - destroy the old matcher - */ - return hws_bwc_matcher_move(bwc_matcher); -} - int mlx5hws_bwc_rule_create_simple(struct mlx5hws_bwc_rule *bwc_rule, u32 *match_param, struct mlx5hws_rule_action rule_actions[], @@ -803,7 +778,6 @@ int mlx5hws_bwc_rule_create_simple(struct mlx5hws_bwc_rule *bwc_rule, struct mlx5hws_rule_attr rule_attr; struct mutex *queue_lock; /* Protect the queue */ u32 num_of_rules; - bool need_rehash; int ret = 0; int at_idx; @@ -830,30 +804,11 @@ int mlx5hws_bwc_rule_create_simple(struct mlx5hws_bwc_rule *bwc_rule, at_idx = bwc_matcher->num_of_at - 1; ret = mlx5hws_matcher_attach_at(bwc_matcher->matcher, - bwc_matcher->at[at_idx], - &need_rehash); + bwc_matcher->at[at_idx]); if (unlikely(ret)) { hws_bwc_unlock_all_queues(ctx); return ret; } - if (unlikely(need_rehash)) { - /* The new action template requires more action STEs. - * Need to attempt creating new matcher with all - * the action templates, including the new one. 
- */ - ret = hws_bwc_matcher_rehash_at(bwc_matcher); - if (unlikely(ret)) { - mlx5hws_action_template_destroy(bwc_matcher->at[at_idx]); - bwc_matcher->at[at_idx] = NULL; - bwc_matcher->num_of_at--; - - hws_bwc_unlock_all_queues(ctx); - - mlx5hws_err(ctx, - "BWC rule insertion: rehash AT failed (%d)\n", ret); - return ret; - } - } hws_bwc_unlock_all_queues(ctx); mutex_lock(queue_lock); @@ -973,7 +928,6 @@ hws_bwc_rule_action_update(struct mlx5hws_bwc_rule *bwc_rule, struct mlx5hws_context *ctx = bwc_matcher->matcher->tbl->ctx; struct mlx5hws_rule_attr rule_attr; struct mutex *queue_lock; /* Protect the queue */ - bool need_rehash; int at_idx, ret; u16 idx; @@ -1005,32 +959,11 @@ hws_bwc_rule_action_update(struct mlx5hws_bwc_rule *bwc_rule, at_idx = bwc_matcher->num_of_at - 1; ret = mlx5hws_matcher_attach_at(bwc_matcher->matcher, - bwc_matcher->at[at_idx], - &need_rehash); + bwc_matcher->at[at_idx]); if (unlikely(ret)) { hws_bwc_unlock_all_queues(ctx); return ret; } - if (unlikely(need_rehash)) { - /* The new action template requires more action - * STEs. Need to attempt creating new matcher - * with all the action templates, including the - * new one. - */ - ret = hws_bwc_matcher_rehash_at(bwc_matcher); - if (unlikely(ret)) { - mlx5hws_action_template_destroy(bwc_matcher->at[at_idx]); - bwc_matcher->at[at_idx] = NULL; - bwc_matcher->num_of_at--; - - hws_bwc_unlock_all_queues(ctx); - - mlx5hws_err(ctx, - "BWC rule update: rehash AT failed (%d)\n", - ret); - return ret; - } - } } hws_bwc_unlock_all_queues(ctx); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/debug.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/debug.c index 3491408c5d84..38f75dec9cfc 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/debug.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/debug.c @@ -146,18 +146,6 @@ static int hws_debug_dump_matcher(struct seq_file *f, struct mlx5hws_matcher *ma matcher->match_ste.rtc_1_id, (int)ste_1_id); - ste_pool = matcher->action_ste.pool; - if (ste_pool) { - ste_0_id = mlx5hws_pool_get_base_id(ste_pool); - if (tbl_type == MLX5HWS_TABLE_TYPE_FDB) - ste_1_id = mlx5hws_pool_get_base_mirror_id(ste_pool); - else - ste_1_id = -1; - } else { - ste_0_id = -1; - ste_1_id = -1; - } - ft_attr.type = matcher->tbl->fw_ft_type; ret = mlx5hws_cmd_flow_table_query(matcher->tbl->ctx->mdev, matcher->end_ft_id, @@ -167,10 +155,7 @@ static int hws_debug_dump_matcher(struct seq_file *f, struct mlx5hws_matcher *ma if (ret) return ret; - seq_printf(f, ",%d,%d,%d,%d,%d,0x%llx,0x%llx\n", - matcher->action_ste.rtc_0_id, (int)ste_0_id, - matcher->action_ste.rtc_1_id, (int)ste_1_id, - 0, + seq_printf(f, ",-1,-1,-1,-1,0,0x%llx,0x%llx\n", mlx5hws_debug_icm_to_idx(icm_addr_0), mlx5hws_debug_icm_to_idx(icm_addr_1)); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c index 3028e0387e3f..716502732d3d 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c @@ -3,25 +3,6 @@ #include "internal.h" -enum mlx5hws_matcher_rtc_type { - HWS_MATCHER_RTC_TYPE_MATCH, - HWS_MATCHER_RTC_TYPE_STE_ARRAY, - HWS_MATCHER_RTC_TYPE_MAX, -}; - -static const char * const mlx5hws_matcher_rtc_type_str[] = { - [HWS_MATCHER_RTC_TYPE_MATCH] = "MATCH", - [HWS_MATCHER_RTC_TYPE_STE_ARRAY] = "STE_ARRAY", - [HWS_MATCHER_RTC_TYPE_MAX] = "UNKNOWN", -}; - -static const char *hws_matcher_rtc_type_to_str(enum 
mlx5hws_matcher_rtc_type rtc_type) -{ - if (rtc_type > HWS_MATCHER_RTC_TYPE_MAX) - rtc_type = HWS_MATCHER_RTC_TYPE_MAX; - return mlx5hws_matcher_rtc_type_str[rtc_type]; -} - static bool hws_matcher_requires_col_tbl(u8 log_num_of_rules) { /* Collision table concatenation is done only for large rule tables */ @@ -209,83 +190,52 @@ static void hws_matcher_set_rtc_attr_sz(struct mlx5hws_matcher *matcher, } } -static int hws_matcher_create_rtc(struct mlx5hws_matcher *matcher, - enum mlx5hws_matcher_rtc_type rtc_type) +static int hws_matcher_create_rtc(struct mlx5hws_matcher *matcher) { struct mlx5hws_matcher_attr *attr = &matcher->attr; struct mlx5hws_cmd_rtc_create_attr rtc_attr = {0}; struct mlx5hws_match_template *mt = matcher->mt; struct mlx5hws_context *ctx = matcher->tbl->ctx; - struct mlx5hws_matcher_action_ste *action_ste; struct mlx5hws_table *tbl = matcher->tbl; - struct mlx5hws_pool *ste_pool; - u32 *rtc_0_id, *rtc_1_id; u32 obj_id; int ret; - switch (rtc_type) { - case HWS_MATCHER_RTC_TYPE_MATCH: - rtc_0_id = &matcher->match_ste.rtc_0_id; - rtc_1_id = &matcher->match_ste.rtc_1_id; - ste_pool = matcher->match_ste.pool; - - rtc_attr.log_size = attr->table.sz_row_log; - rtc_attr.log_depth = attr->table.sz_col_log; - rtc_attr.is_frst_jumbo = mlx5hws_matcher_mt_is_jumbo(mt); - rtc_attr.is_scnd_range = 0; - rtc_attr.miss_ft_id = matcher->end_ft_id; - - if (attr->insert_mode == MLX5HWS_MATCHER_INSERT_BY_HASH) { - /* The usual Hash Table */ - rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH; - - /* The first mt is used since all share the same definer */ - rtc_attr.match_definer_0 = mlx5hws_definer_get_id(mt->definer); - } else if (attr->insert_mode == MLX5HWS_MATCHER_INSERT_BY_INDEX) { - rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET; - rtc_attr.num_hash_definer = 1; - - if (attr->distribute_mode == MLX5HWS_MATCHER_DISTRIBUTE_BY_HASH) { - /* Hash Split Table */ - rtc_attr.access_index_mode = MLX5_IFC_RTC_STE_ACCESS_MODE_BY_HASH; - rtc_attr.match_definer_0 = mlx5hws_definer_get_id(mt->definer); - } else if (attr->distribute_mode == MLX5HWS_MATCHER_DISTRIBUTE_BY_LINEAR) { - /* Linear Lookup Table */ - rtc_attr.access_index_mode = MLX5_IFC_RTC_STE_ACCESS_MODE_LINEAR; - rtc_attr.match_definer_0 = ctx->caps->linear_match_definer; - } + rtc_attr.log_size = attr->table.sz_row_log; + rtc_attr.log_depth = attr->table.sz_col_log; + rtc_attr.is_frst_jumbo = mlx5hws_matcher_mt_is_jumbo(mt); + rtc_attr.is_scnd_range = 0; + rtc_attr.miss_ft_id = matcher->end_ft_id; + + if (attr->insert_mode == MLX5HWS_MATCHER_INSERT_BY_HASH) { + /* The usual Hash Table */ + rtc_attr.update_index_mode = + MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH; + + /* The first mt is used since all share the same definer */ + rtc_attr.match_definer_0 = mlx5hws_definer_get_id(mt->definer); + } else if (attr->insert_mode == MLX5HWS_MATCHER_INSERT_BY_INDEX) { + rtc_attr.update_index_mode = + MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET; + rtc_attr.num_hash_definer = 1; + + if (attr->distribute_mode == + MLX5HWS_MATCHER_DISTRIBUTE_BY_HASH) { + /* Hash Split Table */ + rtc_attr.access_index_mode = + MLX5_IFC_RTC_STE_ACCESS_MODE_BY_HASH; + rtc_attr.match_definer_0 = + mlx5hws_definer_get_id(mt->definer); + } else if (attr->distribute_mode == + MLX5HWS_MATCHER_DISTRIBUTE_BY_LINEAR) { + /* Linear Lookup Table */ + rtc_attr.access_index_mode = + MLX5_IFC_RTC_STE_ACCESS_MODE_LINEAR; + rtc_attr.match_definer_0 = + ctx->caps->linear_match_definer; } - break; - - case HWS_MATCHER_RTC_TYPE_STE_ARRAY: - action_ste = 
&matcher->action_ste; - - rtc_0_id = &action_ste->rtc_0_id; - rtc_1_id = &action_ste->rtc_1_id; - ste_pool = action_ste->pool; - /* Action RTC size calculation: - * log((max number of rules in matcher) * - * (max number of action STEs per rule) * - * (2 to support writing new STEs for update rule)) - */ - rtc_attr.log_size = - ilog2(roundup_pow_of_two(action_ste->max_stes)) + - attr->table.sz_row_log + - MLX5HWS_MATCHER_ACTION_RTC_UPDATE_MULT; - rtc_attr.log_depth = 0; - rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET; - /* The action STEs use the default always hit definer */ - rtc_attr.match_definer_0 = ctx->caps->trivial_match_definer; - rtc_attr.is_frst_jumbo = false; - rtc_attr.miss_ft_id = 0; - break; - - default: - mlx5hws_err(ctx, "HWS Invalid RTC type\n"); - return -EINVAL; } - obj_id = mlx5hws_pool_get_base_id(ste_pool); + obj_id = mlx5hws_pool_get_base_id(matcher->match_ste.pool); rtc_attr.pd = ctx->pd_num; rtc_attr.ste_base = obj_id; @@ -297,15 +247,16 @@ static int hws_matcher_create_rtc(struct mlx5hws_matcher *matcher, obj_id = mlx5hws_pool_get_base_id(ctx->stc_pool); rtc_attr.stc_base = obj_id; - ret = mlx5hws_cmd_rtc_create(ctx->mdev, &rtc_attr, rtc_0_id); + ret = mlx5hws_cmd_rtc_create(ctx->mdev, &rtc_attr, + &matcher->match_ste.rtc_0_id); if (ret) { - mlx5hws_err(ctx, "Failed to create matcher RTC of type %s", - hws_matcher_rtc_type_to_str(rtc_type)); + mlx5hws_err(ctx, "Failed to create matcher RTC\n"); return ret; } if (tbl->type == MLX5HWS_TABLE_TYPE_FDB) { - obj_id = mlx5hws_pool_get_base_mirror_id(ste_pool); + obj_id = mlx5hws_pool_get_base_mirror_id( + matcher->match_ste.pool); rtc_attr.ste_base = obj_id; rtc_attr.table_type = mlx5hws_table_get_res_fw_ft_type(tbl->type, true); @@ -313,10 +264,10 @@ static int hws_matcher_create_rtc(struct mlx5hws_matcher *matcher, rtc_attr.stc_base = obj_id; hws_matcher_set_rtc_attr_sz(matcher, &rtc_attr, true); - ret = mlx5hws_cmd_rtc_create(ctx->mdev, &rtc_attr, rtc_1_id); + ret = mlx5hws_cmd_rtc_create(ctx->mdev, &rtc_attr, + &matcher->match_ste.rtc_1_id); if (ret) { - mlx5hws_err(ctx, "Failed to create peer matcher RTC of type %s", - hws_matcher_rtc_type_to_str(rtc_type)); + mlx5hws_err(ctx, "Failed to create mirror matcher RTC\n"); goto destroy_rtc_0; } } @@ -324,33 +275,18 @@ static int hws_matcher_create_rtc(struct mlx5hws_matcher *matcher, return 0; destroy_rtc_0: - mlx5hws_cmd_rtc_destroy(ctx->mdev, *rtc_0_id); + mlx5hws_cmd_rtc_destroy(ctx->mdev, matcher->match_ste.rtc_0_id); return ret; } -static void hws_matcher_destroy_rtc(struct mlx5hws_matcher *matcher, - enum mlx5hws_matcher_rtc_type rtc_type) +static void hws_matcher_destroy_rtc(struct mlx5hws_matcher *matcher) { - struct mlx5hws_table *tbl = matcher->tbl; - u32 rtc_0_id, rtc_1_id; - - switch (rtc_type) { - case HWS_MATCHER_RTC_TYPE_MATCH: - rtc_0_id = matcher->match_ste.rtc_0_id; - rtc_1_id = matcher->match_ste.rtc_1_id; - break; - case HWS_MATCHER_RTC_TYPE_STE_ARRAY: - rtc_0_id = matcher->action_ste.rtc_0_id; - rtc_1_id = matcher->action_ste.rtc_1_id; - break; - default: - return; - } + struct mlx5_core_dev *mdev = matcher->tbl->ctx->mdev; - if (tbl->type == MLX5HWS_TABLE_TYPE_FDB) - mlx5hws_cmd_rtc_destroy(tbl->ctx->mdev, rtc_1_id); + if (matcher->tbl->type == MLX5HWS_TABLE_TYPE_FDB) + mlx5hws_cmd_rtc_destroy(mdev, matcher->match_ste.rtc_1_id); - mlx5hws_cmd_rtc_destroy(tbl->ctx->mdev, rtc_0_id); + mlx5hws_cmd_rtc_destroy(mdev, matcher->match_ste.rtc_0_id); } static int @@ -418,85 +354,17 @@ static int hws_matcher_check_and_process_at(struct 
mlx5hws_matcher *matcher, return 0; } -static int hws_matcher_resize_init(struct mlx5hws_matcher *src_matcher) -{ - struct mlx5hws_matcher_resize_data *resize_data; - - resize_data = kzalloc(sizeof(*resize_data), GFP_KERNEL); - if (!resize_data) - return -ENOMEM; - - resize_data->max_stes = src_matcher->action_ste.max_stes; - - resize_data->stc = src_matcher->action_ste.stc; - resize_data->rtc_0_id = src_matcher->action_ste.rtc_0_id; - resize_data->rtc_1_id = src_matcher->action_ste.rtc_1_id; - resize_data->pool = src_matcher->action_ste.max_stes ? - src_matcher->action_ste.pool : NULL; - - /* Place the new resized matcher on the dst matcher's list */ - list_add(&resize_data->list_node, &src_matcher->resize_dst->resize_data); - - /* Move all the previous resized matchers to the dst matcher's list */ - while (!list_empty(&src_matcher->resize_data)) { - resize_data = list_first_entry(&src_matcher->resize_data, - struct mlx5hws_matcher_resize_data, - list_node); - list_del_init(&resize_data->list_node); - list_add(&resize_data->list_node, &src_matcher->resize_dst->resize_data); - } - - return 0; -} - -static void hws_matcher_resize_uninit(struct mlx5hws_matcher *matcher) -{ - struct mlx5hws_matcher_resize_data *resize_data; - - if (!mlx5hws_matcher_is_resizable(matcher)) - return; - - while (!list_empty(&matcher->resize_data)) { - resize_data = list_first_entry(&matcher->resize_data, - struct mlx5hws_matcher_resize_data, - list_node); - list_del_init(&resize_data->list_node); - - if (resize_data->max_stes) { - mlx5hws_action_free_single_stc(matcher->tbl->ctx, - matcher->tbl->type, - &resize_data->stc); - - if (matcher->tbl->type == MLX5HWS_TABLE_TYPE_FDB) - mlx5hws_cmd_rtc_destroy(matcher->tbl->ctx->mdev, - resize_data->rtc_1_id); - - mlx5hws_cmd_rtc_destroy(matcher->tbl->ctx->mdev, - resize_data->rtc_0_id); - - if (resize_data->pool) - mlx5hws_pool_destroy(resize_data->pool); - } - - kfree(resize_data); - } -} - static int hws_matcher_bind_at(struct mlx5hws_matcher *matcher) { bool is_jumbo = mlx5hws_matcher_mt_is_jumbo(matcher->mt); - struct mlx5hws_cmd_stc_modify_attr stc_attr = {0}; - struct mlx5hws_matcher_action_ste *action_ste; - struct mlx5hws_table *tbl = matcher->tbl; - struct mlx5hws_pool_attr pool_attr = {0}; - struct mlx5hws_context *ctx = tbl->ctx; - u32 required_stes; - u8 max_stes = 0; + struct mlx5hws_context *ctx = matcher->tbl->ctx; + u8 required_stes, max_stes; int i, ret; if (matcher->flags & MLX5HWS_MATCHER_FLAGS_COLLISION) return 0; + max_stes = 0; for (i = 0; i < matcher->num_of_at; i++) { struct mlx5hws_action_template *at = &matcher->at[i]; @@ -512,74 +380,9 @@ static int hws_matcher_bind_at(struct mlx5hws_matcher *matcher) /* Future: Optimize reparse */ } - /* There are no additional STEs required for matcher */ - if (!max_stes) - return 0; - - matcher->action_ste.max_stes = max_stes; - - action_ste = &matcher->action_ste; - - /* Allocate action STE mempool */ - pool_attr.table_type = tbl->type; - pool_attr.pool_type = MLX5HWS_POOL_TYPE_STE; - pool_attr.flags = MLX5HWS_POOL_FLAG_BUDDY; - /* Pool size is similar to action RTC size */ - pool_attr.alloc_log_sz = ilog2(roundup_pow_of_two(action_ste->max_stes)) + - matcher->attr.table.sz_row_log + - MLX5HWS_MATCHER_ACTION_RTC_UPDATE_MULT; - hws_matcher_set_pool_attr(&pool_attr, matcher); - action_ste->pool = mlx5hws_pool_create(ctx, &pool_attr); - if (!action_ste->pool) { - mlx5hws_err(ctx, "Failed to create action ste pool\n"); - return -EINVAL; - } - - /* Allocate action RTC */ - ret = hws_matcher_create_rtc(matcher, 
HWS_MATCHER_RTC_TYPE_STE_ARRAY); - if (ret) { - mlx5hws_err(ctx, "Failed to create action RTC\n"); - goto free_ste_pool; - } - - /* Allocate STC for jumps to STE */ - stc_attr.action_offset = MLX5HWS_ACTION_OFFSET_HIT; - stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE; - stc_attr.reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE; - stc_attr.ste_table.ste_pool = action_ste->pool; - stc_attr.ste_table.match_definer_id = ctx->caps->trivial_match_definer; - - ret = mlx5hws_action_alloc_single_stc(ctx, &stc_attr, tbl->type, - &action_ste->stc); - if (ret) { - mlx5hws_err(ctx, "Failed to create action jump to table STC\n"); - goto free_rtc; - } + matcher->num_of_action_stes = max_stes; return 0; - -free_rtc: - hws_matcher_destroy_rtc(matcher, HWS_MATCHER_RTC_TYPE_STE_ARRAY); -free_ste_pool: - mlx5hws_pool_destroy(action_ste->pool); - return ret; -} - -static void hws_matcher_unbind_at(struct mlx5hws_matcher *matcher) -{ - struct mlx5hws_matcher_action_ste *action_ste; - struct mlx5hws_table *tbl = matcher->tbl; - - action_ste = &matcher->action_ste; - - if (!action_ste->max_stes || - matcher->flags & MLX5HWS_MATCHER_FLAGS_COLLISION || - mlx5hws_matcher_is_in_resize(matcher)) - return; - - mlx5hws_action_free_single_stc(tbl->ctx, tbl->type, &action_ste->stc); - hws_matcher_destroy_rtc(matcher, HWS_MATCHER_RTC_TYPE_STE_ARRAY); - mlx5hws_pool_destroy(action_ste->pool); } static int hws_matcher_bind_mt(struct mlx5hws_matcher *matcher) @@ -723,10 +526,10 @@ static int hws_matcher_create_and_connect(struct mlx5hws_matcher *matcher) /* Create matcher end flow table anchor */ ret = hws_matcher_create_end_ft(matcher); if (ret) - goto unbind_at; + goto unbind_mt; /* Allocate the RTC for the new matcher */ - ret = hws_matcher_create_rtc(matcher, HWS_MATCHER_RTC_TYPE_MATCH); + ret = hws_matcher_create_rtc(matcher); if (ret) goto destroy_end_ft; @@ -738,11 +541,9 @@ static int hws_matcher_create_and_connect(struct mlx5hws_matcher *matcher) return 0; destroy_rtc: - hws_matcher_destroy_rtc(matcher, HWS_MATCHER_RTC_TYPE_MATCH); + hws_matcher_destroy_rtc(matcher); destroy_end_ft: hws_matcher_destroy_end_ft(matcher); -unbind_at: - hws_matcher_unbind_at(matcher); unbind_mt: hws_matcher_unbind_mt(matcher); return ret; @@ -750,11 +551,9 @@ static int hws_matcher_create_and_connect(struct mlx5hws_matcher *matcher) static void hws_matcher_destroy_and_disconnect(struct mlx5hws_matcher *matcher) { - hws_matcher_resize_uninit(matcher); hws_matcher_disconnect(matcher); - hws_matcher_destroy_rtc(matcher, HWS_MATCHER_RTC_TYPE_MATCH); + hws_matcher_destroy_rtc(matcher); hws_matcher_destroy_end_ft(matcher); - hws_matcher_unbind_at(matcher); hws_matcher_unbind_mt(matcher); } @@ -776,8 +575,6 @@ hws_matcher_create_col_matcher(struct mlx5hws_matcher *matcher) if (!col_matcher) return -ENOMEM; - INIT_LIST_HEAD(&col_matcher->resize_data); - col_matcher->tbl = matcher->tbl; col_matcher->mt = matcher->mt; col_matcher->at = matcher->at; @@ -831,8 +628,6 @@ static int hws_matcher_init(struct mlx5hws_matcher *matcher) struct mlx5hws_context *ctx = matcher->tbl->ctx; int ret; - INIT_LIST_HEAD(&matcher->resize_data); - mutex_lock(&ctx->ctrl_lock); /* Allocate matcher resource and connect to the packet pipe */ @@ -889,16 +684,12 @@ static int hws_matcher_grow_at_array(struct mlx5hws_matcher *matcher) } int mlx5hws_matcher_attach_at(struct mlx5hws_matcher *matcher, - struct mlx5hws_action_template *at, - bool *need_rehash) + struct mlx5hws_action_template *at) { bool is_jumbo = mlx5hws_matcher_mt_is_jumbo(matcher->mt); - struct 
mlx5hws_context *ctx = matcher->tbl->ctx; u32 required_stes; int ret; - *need_rehash = false; - if (unlikely(matcher->num_of_at >= matcher->size_of_at_array)) { ret = hws_matcher_grow_at_array(matcher); if (ret) @@ -916,11 +707,8 @@ int mlx5hws_matcher_attach_at(struct mlx5hws_matcher *matcher, return ret; required_stes = at->num_of_action_stes - (!is_jumbo || at->only_term); - if (matcher->action_ste.max_stes < required_stes) { - mlx5hws_dbg(ctx, "Required STEs [%d] exceeds initial action template STE [%d]\n", - required_stes, matcher->action_ste.max_stes); - *need_rehash = true; - } + if (matcher->num_of_action_stes < required_stes) + matcher->num_of_action_stes = required_stes; matcher->at[matcher->num_of_at] = *at; matcher->num_of_at += 1; @@ -1102,7 +890,7 @@ static int hws_matcher_resize_precheck(struct mlx5hws_matcher *src_matcher, return -EINVAL; } - if (src_matcher->action_ste.max_stes > dst_matcher->action_ste.max_stes) { + if (src_matcher->num_of_action_stes > dst_matcher->num_of_action_stes) { mlx5hws_err(ctx, "Src/dst matcher max STEs mismatch\n"); return -EINVAL; } @@ -1131,10 +919,6 @@ int mlx5hws_matcher_resize_set_target(struct mlx5hws_matcher *src_matcher, src_matcher->resize_dst = dst_matcher; - ret = hws_matcher_resize_init(src_matcher); - if (ret) - src_matcher->resize_dst = NULL; - out: mutex_unlock(&src_matcher->tbl->ctx->ctrl_lock); return ret; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h index 0450b6175ac9..bad1fa8f77fd 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h @@ -50,23 +50,6 @@ struct mlx5hws_matcher_match_ste { struct mlx5hws_pool *pool; }; -struct mlx5hws_matcher_action_ste { - struct mlx5hws_pool_chunk stc; - u32 rtc_0_id; - u32 rtc_1_id; - struct mlx5hws_pool *pool; - u8 max_stes; -}; - -struct mlx5hws_matcher_resize_data { - struct mlx5hws_pool_chunk stc; - u32 rtc_0_id; - u32 rtc_1_id; - struct mlx5hws_pool *pool; - u8 max_stes; - struct list_head list_node; -}; - struct mlx5hws_matcher { struct mlx5hws_table *tbl; struct mlx5hws_matcher_attr attr; @@ -75,15 +58,14 @@ struct mlx5hws_matcher { u8 num_of_at; u8 size_of_at_array; u8 num_of_mt; + u8 num_of_action_stes; /* enum mlx5hws_matcher_flags */ u8 flags; u32 end_ft_id; struct mlx5hws_matcher *col_matcher; struct mlx5hws_matcher *resize_dst; struct mlx5hws_matcher_match_ste match_ste; - struct mlx5hws_matcher_action_ste action_ste; struct list_head list_node; - struct list_head resize_data; }; static inline bool diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h index 8ed8a715a2eb..5121951f2778 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h @@ -399,14 +399,11 @@ int mlx5hws_matcher_destroy(struct mlx5hws_matcher *matcher); * * @matcher: Matcher to attach the action template to. * @at: Action template to be attached to the matcher. - * @need_rehash: Output parameter that tells callers if the matcher needs to be - * rehashed. * * Return: Zero on success, non-zero otherwise. */ int mlx5hws_matcher_attach_at(struct mlx5hws_matcher *matcher, - struct mlx5hws_action_template *at, - bool *need_rehash); + struct mlx5hws_action_template *at); /** * mlx5hws_matcher_resize_set_target - Link two matchers and enable moving rules. 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.c index 5b758467ed03..9e6f35d68445 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/rule.c @@ -203,7 +203,7 @@ static int mlx5hws_rule_alloc_action_ste(struct mlx5hws_rule *rule, struct mlx5hws_context *ctx = matcher->tbl->ctx; rule->action_ste.ste.order = - ilog2(roundup_pow_of_two(matcher->action_ste.max_stes)); + ilog2(roundup_pow_of_two(matcher->num_of_action_stes)); return mlx5hws_action_ste_chunk_alloc(&ctx->action_ste_pool[queue_id], skip_rx, skip_tx, &rule->action_ste);
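The next patch adds a periodic cleanup of idle action STE tables. Condensed to its core, the reclaim test it relies on is "the table's pool is completely free and the table has not been touched for the expiry period". A small sketch of that predicate under a simplified stand-in structure (my_table is hypothetical; the driver keeps last_used on its mlx5hws_action_ste_table and checks mlx5hws_pool_full()):

#include <linux/jiffies.h>
#include <linux/types.h>

struct my_table {
	unsigned long last_used; /* jiffies of the last alloc/free on this table */
	bool pool_full;          /* true when no chunks are outstanding */
};

/* Illustrative sketch only: the staleness test used by the cleanup worker. */
static bool my_table_is_stale(const struct my_table *tbl, unsigned int expire_s)
{
	unsigned long expire = secs_to_jiffies(expire_s);

	return tbl->pool_full &&
	       time_before(tbl->last_used + expire, jiffies);
}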
From patchwork Thu Apr 10 19:17:41 2025
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 14047160
X-Patchwork-Delegate: kuba@kernel.org
Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet , "Andrew Lunn" CC: Gal Pressman , Leon Romanovsky , "Saeed Mahameed" , Leon Romanovsky , Tariq Toukan , , , , Moshe Shemesh , Mark Bloch , Vlad Dogaru , Yevgeny Kliteynik , Michal Kubiak Subject: [PATCH net-next V2 11/12] net/mlx5: HWS, Free unused action STE tables Date: Thu, 10 Apr 2025 22:17:41 +0300 Message-ID: <1744312662-356571-12-git-send-email-tariqt@nvidia.com> X-Mailer: git-send-email 2.8.0 In-Reply-To: <1744312662-356571-1-git-send-email-tariqt@nvidia.com> References: <1744312662-356571-1-git-send-email-tariqt@nvidia.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-NV-OnPremToCloud: AnonymousSubmission X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: SN1PEPF0002BA4E:EE_|DM4PR12MB8499:EE_ X-MS-Office365-Filtering-Correlation-Id: f6b3c8e9-a433-4fe1-45ed-08dd786495c1 X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0;ARA:13230040|1800799024|376014|7416014|36860700013|82310400026; X-Microsoft-Antispam-Message-Info: lQRjU4crPnKg6HsUpRWGYZ00Qy3XpwbIpZyPJP7R4AvsZj739PDw8E1WLOaXGoMf3BE5hab10FKEcjdo4VRws2aB8lljSBm5fDJpRNK12XPiRA5bAoLAD4NfpLmtCBVsCecW47H8iX93E9Lu+Nc3c/Xue9svXaXDiEmd3v8+bxaa6vf88Pk89YJrU64Yx1/nRCBqQ69kJ7CWL9GHnznMuq18c4ohWQk7knX5dAnoX8rDwF0bflZ6pAuDzEBBLYlAjePW6bPv7zYV/SAg2RW+WqpvXZwaC+V24IJpqKq+pc+8VAq5WOzDX+WW6gEhiHJKGD9ahy/TmIKF8CKqI+XjBSMekTkGerjtxNLUrV/G1oPHo1qiL5qOCeEbfYwiUaKzSpgYpGNhxzIezrEaCF+NGBkQLLJRNkmA1E9lHzdDolw8j09rvOaE2uW16BLiEbTgsNnFr8MSbdINqpxiK06Di6b4v/w4GD0ub9T+s875o2wmL1L+pgquG0eWhVJwBW/pzjzrgHSSW1tcjNgL5mOnHNd6B3ZXek4WomD922lYluR52ewod3V1kuKZMBowKEZafOPt2pXwFAPt2t6J4ZSd4Mnkq5obEumAUQLu2GDW/fEfpBUrf+9GHT//2u8BoWOwTYtnD3SrjC/LPMZ2nofiw0RzyxJcL+LLrT19J98Tcth4RTG8z+DH98MRmtdg3hTdslcNNliYtQNb4yAhUZTyt/6/YDlTIXddd0VIf9jtmN7WcV6+B/MC8/za+n5Pt5Qvw1AUQwKenNhXug3d6Xb/Z7+lP0nF+AFj1UxJ24Asa4B9pvZEhUVIMto9l7ECVPppOBH0O+Rm40qezdJ/BntE+OKXB7LOkoSiKMBD/dQTCSJKQUrVCPTS4aZizlVXJGvlQSL69kDW9ZS7xYLVi3bWQ48k5pYJy0//GvUa2UIRXPmqVlWRSnFprl0W56EWPLffeFFCoV35Mq/RmepYRQZunlYwQTy8/w3eOlLZ8ArWC6BFnNlL6/M9pf/xNqkTKUInpDbaKwTj3SHNy4B7wlNrg4nNOG4e4zSiUBT5003D8UJbOrgf5EBJzDDXPRbrbRSdB+h9nsPzV7c+Xf6v+SZfnS2UhGAtRc7Tr3esA6i4NdNp5pPEudV8hOwQwi8FxDpnxFZ35bpiBTTAI8pV1QA/2zQzbvdR8huSj0zSR4fK06Jya+AWjY6pUjkydaXCCVL85V2ij+exzTZr/JQOozr/5gX5fXKlxLG536hNTHaHugSwUg5l/ZPEN7AUeYTGO6S/apjcuaTF07OHvTHS5BsLe6YqTn2zMgvrjNJ34SyJsQg9eiVmtyHbz8V5mji0cc6nvBNwM0JrfXvKx2u8cqGeEYCBWIdDu9VCUvCrG0BCAo/MybU7PCYRwVnFmjomZoC1tPPzQehHnWqMfKDXzF7UW2sxaD4nsqwrBTv2ogX9SstuOmSK91Dah0HLqVwkL0IgYmSYdExjF/EpaqY6TcyR/iCi9ywrx0CD2F3Q92rO/F0= X-Forefront-Antispam-Report: CIP:216.228.118.233;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:mail.nvidia.com;PTR:dc7edge2.nvidia.com;CAT:NONE;SFS:(13230040)(1800799024)(376014)(7416014)(36860700013)(82310400026);DIR:OUT;SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Apr 2025 19:19:15.9606 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: f6b3c8e9-a433-4fe1-45ed-08dd786495c1 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a;Ip=[216.228.118.233];Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: SN1PEPF0002BA4E.namprd03.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB8499 
From: Vlad Dogaru

Periodically check for unused action STE tables and free their
associated resources. In order to do this safely, add a per-queue lock
to synchronize the garbage collect work with regular operations on
steering rules.

Signed-off-by: Vlad Dogaru
Reviewed-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
---
 .../mlx5/core/steering/hws/action_ste_pool.c  | 88 ++++++++++++++++++-
 .../mlx5/core/steering/hws/action_ste_pool.h  | 11 +++
 .../mellanox/mlx5/core/steering/hws/context.h |  1 +
 3 files changed, 96 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action_ste_pool.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action_ste_pool.c
index cb6ad8411631..5766a9c82f96 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action_ste_pool.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action_ste_pool.c
@@ -159,6 +159,7 @@ hws_action_ste_table_alloc(struct mlx5hws_action_ste_pool_element *parent_elem)
 
 	action_tbl->parent_elem = parent_elem;
 	INIT_LIST_HEAD(&action_tbl->list_node);
+	action_tbl->last_used = jiffies;
 	list_add(&action_tbl->list_node, &parent_elem->available);
 
 	parent_elem->log_sz = log_sz;
@@ -236,6 +237,8 @@ static int hws_action_ste_pool_init(struct mlx5hws_context *ctx,
 	enum mlx5hws_pool_optimize opt;
 	int err;
 
+	mutex_init(&pool->lock);
+
 	/* Rules which are added for both RX and TX must use the same action STE
 	 * indices for both. If we were to use a single table, then RX-only and
 	 * TX-only rules would waste the unused entries. Thus, we use separate
@@ -247,6 +250,7 @@ static int hws_action_ste_pool_init(struct mlx5hws_context *ctx,
 				opt);
 		if (err)
 			goto destroy_elems;
+		pool->elems[opt].parent_pool = pool;
 	}
 
 	return 0;
@@ -267,6 +271,58 @@ static void hws_action_ste_pool_destroy(struct mlx5hws_action_ste_pool *pool)
 		hws_action_ste_pool_element_destroy(&pool->elems[opt]);
 }
 
+static void hws_action_ste_pool_element_collect_stale(
+	struct mlx5hws_action_ste_pool_element *elem, struct list_head *cleanup)
+{
+	struct mlx5hws_action_ste_table *action_tbl, *p;
+	unsigned long expire_time, now;
+
+	expire_time = secs_to_jiffies(MLX5HWS_ACTION_STE_POOL_EXPIRE_SECONDS);
+	now = jiffies;
+
+	list_for_each_entry_safe(action_tbl, p, &elem->available, list_node) {
+		if (mlx5hws_pool_full(action_tbl->pool) &&
+		    time_before(action_tbl->last_used + expire_time, now))
+			list_move(&action_tbl->list_node, cleanup);
+	}
+}
+
+static void hws_action_ste_table_cleanup_list(struct list_head *cleanup)
+{
+	struct mlx5hws_action_ste_table *action_tbl, *p;
+
+	list_for_each_entry_safe(action_tbl, p, cleanup, list_node)
+		hws_action_ste_table_destroy(action_tbl);
+}
+
+static void hws_action_ste_pool_cleanup(struct work_struct *work)
+{
+	enum mlx5hws_pool_optimize opt;
+	struct mlx5hws_context *ctx;
+	LIST_HEAD(cleanup);
+	int i;
+
+	ctx = container_of(work, struct mlx5hws_context,
+			   action_ste_cleanup.work);
+
+	for (i = 0; i < ctx->queues; i++) {
+		struct mlx5hws_action_ste_pool *p = &ctx->action_ste_pool[i];
+
+		mutex_lock(&p->lock);
+		for (opt = MLX5HWS_POOL_OPTIMIZE_NONE;
+		     opt < MLX5HWS_POOL_OPTIMIZE_MAX; opt++)
+			hws_action_ste_pool_element_collect_stale(
+				&p->elems[opt], &cleanup);
+		mutex_unlock(&p->lock);
+	}
+
+	hws_action_ste_table_cleanup_list(&cleanup);
+
+	schedule_delayed_work(&ctx->action_ste_cleanup,
+			      secs_to_jiffies(
+				      MLX5HWS_ACTION_STE_POOL_CLEANUP_SECONDS));
+}
+
 int mlx5hws_action_ste_pool_init(struct mlx5hws_context *ctx)
 {
 	struct mlx5hws_action_ste_pool *pool;
@@ -285,6 +341,12 @@ int mlx5hws_action_ste_pool_init(struct mlx5hws_context *ctx)
 
 	ctx->action_ste_pool = pool;
 
+	INIT_DELAYED_WORK(&ctx->action_ste_cleanup,
+			  hws_action_ste_pool_cleanup);
+	schedule_delayed_work(
+		&ctx->action_ste_cleanup,
+		secs_to_jiffies(MLX5HWS_ACTION_STE_POOL_CLEANUP_SECONDS));
+
 	return 0;
 
 free_pool:
@@ -300,6 +362,8 @@ void mlx5hws_action_ste_pool_uninit(struct mlx5hws_context *ctx)
 	size_t queues = ctx->queues;
 	int i;
 
+	cancel_delayed_work_sync(&ctx->action_ste_cleanup);
+
 	for (i = 0; i < queues; i++)
 		hws_action_ste_pool_destroy(&ctx->action_ste_pool[i]);
 
@@ -330,6 +394,7 @@ hws_action_ste_table_chunk_alloc(struct mlx5hws_action_ste_table *action_tbl,
 		return err;
 
 	chunk->action_tbl = action_tbl;
+	action_tbl->last_used = jiffies;
 
 	return 0;
 }
@@ -346,6 +411,8 @@ int mlx5hws_action_ste_chunk_alloc(struct mlx5hws_action_ste_pool *pool,
 	if (skip_rx && skip_tx)
 		return -EINVAL;
 
+	mutex_lock(&pool->lock);
+
 	elem = hws_action_ste_choose_elem(pool, skip_rx, skip_tx);
 
 	mlx5hws_dbg(elem->ctx,
@@ -362,26 +429,39 @@ int mlx5hws_action_ste_chunk_alloc(struct mlx5hws_action_ste_pool *pool,
 
 	if (!found) {
 		action_tbl = hws_action_ste_table_alloc(elem);
-		if (IS_ERR(action_tbl))
-			return PTR_ERR(action_tbl);
+		if (IS_ERR(action_tbl)) {
+			err = PTR_ERR(action_tbl);
+			goto out;
+		}
 
 		err = hws_action_ste_table_chunk_alloc(action_tbl, chunk);
 		if (err)
-			return err;
+			goto out;
 	}
 
 	if (mlx5hws_pool_empty(action_tbl->pool))
 		list_move(&action_tbl->list_node, &elem->full);
 
-	return 0;
+	err = 0;
+
+out:
+	mutex_unlock(&pool->lock);
+
+	return err;
 }
 
 void mlx5hws_action_ste_chunk_free(struct mlx5hws_action_ste_chunk *chunk)
 {
+	struct mutex *lock = &chunk->action_tbl->parent_elem->parent_pool->lock;
+
 	mlx5hws_dbg(chunk->action_tbl->pool->ctx,
 		    "Freeing action STEs offset %d order %d\n",
 		    chunk->ste.offset, chunk->ste.order);
+
+	mutex_lock(lock);
 	mlx5hws_pool_chunk_free(chunk->action_tbl->pool, &chunk->ste);
+	chunk->action_tbl->last_used = jiffies;
 	list_move(&chunk->action_tbl->list_node,
 		  &chunk->action_tbl->parent_elem->available);
+	mutex_unlock(lock);
 }

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action_ste_pool.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action_ste_pool.h
index 2de660a63223..a8ba97359e31 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action_ste_pool.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action_ste_pool.h
@@ -8,6 +8,9 @@
 #define MLX5HWS_ACTION_STE_TABLE_STEP_LOG_SZ 1
 #define MLX5HWS_ACTION_STE_TABLE_MAX_LOG_SZ 20
 
+#define MLX5HWS_ACTION_STE_POOL_CLEANUP_SECONDS 300
+#define MLX5HWS_ACTION_STE_POOL_EXPIRE_SECONDS 300
+
 struct mlx5hws_action_ste_pool_element;
 
 struct mlx5hws_action_ste_table {
@@ -19,10 +22,12 @@ struct mlx5hws_action_ste_table {
 	u32 rtc_0_id;
 	u32 rtc_1_id;
 	struct list_head list_node;
+	unsigned long last_used;
 };
 
 struct mlx5hws_action_ste_pool_element {
 	struct mlx5hws_context *ctx;
+	struct mlx5hws_action_ste_pool *parent_pool;
 	size_t log_sz; /* Size of the largest table so far. */
 	enum mlx5hws_pool_optimize opt;
 	struct list_head available;
@@ -33,6 +38,12 @@ struct mlx5hws_action_ste_pool_element {
  * per queue.
  */
 struct mlx5hws_action_ste_pool {
+	/* Protects the entire pool. We have one pool per queue and only one
+	 * operation can be active per rule at a given time. Thus this lock
+	 * protects solely against concurrent garbage collection and we expect
+	 * very little contention.
+	 */
+	struct mutex lock;
 	struct mlx5hws_action_ste_pool_element elems[MLX5HWS_POOL_OPTIMIZE_MAX];
 };

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/context.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/context.h
index e987e93bbc6e..3f8938c73dc0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/context.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/context.h
@@ -40,6 +40,7 @@ struct mlx5hws_context {
 	u32 pd_num;
 	struct mlx5hws_pool *stc_pool;
 	struct mlx5hws_action_ste_pool *action_ste_pool; /* One per queue */
+	struct delayed_work action_ste_cleanup;
 	struct mlx5hws_context_common_res common_res;
 	struct mlx5hws_pattern_cache *pattern_cache;
 	struct mlx5hws_definer_cache *definer_cache;
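
The reclaim scheme above follows a common kernel pattern: a self-rearming
delayed work walks the per-queue pools, collects entries whose last_used
stamp has aged past the expiry window while holding the pool mutex, and
destroys them only after the lock is dropped. Below is a minimal, generic
sketch of that pattern for orientation; the demo_* names and
DEMO_EXPIRE_SECONDS constant are illustrative stand-ins, not symbols from
this driver.

/* Illustrative sketch only; not part of the patch. */
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

#define DEMO_EXPIRE_SECONDS 300

struct demo_entry {
	struct list_head node;
	unsigned long last_used;	/* stamped on alloc/free, under lock */
};

struct demo_pool {
	struct mutex lock;		/* users vs. the garbage-collect work */
	struct list_head available;
	struct delayed_work gc_work;
};

static void demo_gc(struct work_struct *work)
{
	struct demo_pool *pool = container_of(work, struct demo_pool,
					      gc_work.work);
	struct demo_entry *e, *tmp;
	LIST_HEAD(stale);

	/* Collect expired entries under the lock... */
	mutex_lock(&pool->lock);
	list_for_each_entry_safe(e, tmp, &pool->available, node)
		if (time_before(e->last_used +
				secs_to_jiffies(DEMO_EXPIRE_SECONDS), jiffies))
			list_move(&e->node, &stale);
	mutex_unlock(&pool->lock);

	/* ...and free them outside of it, to keep the hold time short. */
	list_for_each_entry_safe(e, tmp, &stale, node) {
		list_del(&e->node);
		kfree(e);
	}

	/* Self-rearming, like the driver's action_ste_cleanup work. */
	schedule_delayed_work(&pool->gc_work,
			      secs_to_jiffies(DEMO_EXPIRE_SECONDS));
}

static void demo_pool_init(struct demo_pool *pool)
{
	mutex_init(&pool->lock);
	INIT_LIST_HEAD(&pool->available);
	INIT_DELAYED_WORK(&pool->gc_work, demo_gc);
	schedule_delayed_work(&pool->gc_work,
			      secs_to_jiffies(DEMO_EXPIRE_SECONDS));
}

As in the patch, collection happens under the per-pool mutex while
destruction happens after it is dropped, and the work re-arms itself; on
teardown this is paired with cancel_delayed_work_sync(), exactly as
mlx5hws_action_ste_pool_uninit() does above.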

From patchwork Thu Apr 10 19:17:42 2025
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 14047161
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet, "Andrew Lunn"
CC: Gal Pressman, Leon Romanovsky, "Saeed Mahameed", Leon Romanovsky, Tariq Toukan, Moshe Shemesh, Mark Bloch, Vlad Dogaru, Yevgeny Kliteynik, Michal Kubiak
Subject: [PATCH net-next V2 12/12] net/mlx5: HWS, Export action STE tables to debugfs
Date: Thu, 10 Apr 2025 22:17:42 +0300
Message-ID: <1744312662-356571-13-git-send-email-tariqt@nvidia.com>
X-Mailer: git-send-email 2.8.0
In-Reply-To: <1744312662-356571-1-git-send-email-tariqt@nvidia.com>
References: <1744312662-356571-1-git-send-email-tariqt@nvidia.com>
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org
MIME-Version: 1.0
X-Patchwork-Delegate: kuba@kernel.org
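
The patch below wires the action STE tables into the hardware steering
debugfs dump. Each table is emitted by the new
hws_debug_dump_action_ste_table() as a single CSV record in the order:
resource type, table handle, rtc_0_id, ste_0_id, rtc_1_id, ste_1_id. As a
purely illustrative example (all values made up), a dumped line could look
like:

4300,0xffff8881234a5600,1037,512,1038,768

where 4300 is MLX5HWS_DEBUG_RES_TYPE_ACTION_STE_TABLE and the second field
is the HWS_PTR_TO_ID() handle of the table.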
From: Vlad Dogaru

Introduce a new type of dump object and dump all action STE tables,
along with information on their RTCs and STEs.

Signed-off-by: Vlad Dogaru
Reviewed-by: Hamdan Agbariya
Reviewed-by: Mark Bloch
Signed-off-by: Tariq Toukan
Reviewed-by: Michal Kubiak
---
 .../mellanox/mlx5/core/steering/hws/debug.c | 36 ++++++++++++++++++-
 .../mellanox/mlx5/core/steering/hws/debug.h |  2 ++
 2 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/debug.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/debug.c
index 38f75dec9cfc..91568d6c1dac 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/debug.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/debug.c
@@ -387,10 +387,41 @@ static int hws_debug_dump_context_stc(struct seq_file *f, struct mlx5hws_context
 	return 0;
 }
 
+static void
+hws_debug_dump_action_ste_table(struct seq_file *f,
+				struct mlx5hws_action_ste_table *action_tbl)
+{
+	int ste_0_id = mlx5hws_pool_get_base_id(action_tbl->pool);
+	int ste_1_id = mlx5hws_pool_get_base_mirror_id(action_tbl->pool);
+
+	seq_printf(f, "%d,0x%llx,%d,%d,%d,%d\n",
+		   MLX5HWS_DEBUG_RES_TYPE_ACTION_STE_TABLE,
+		   HWS_PTR_TO_ID(action_tbl),
+		   action_tbl->rtc_0_id, ste_0_id,
+		   action_tbl->rtc_1_id, ste_1_id);
+}
+
+static void hws_debug_dump_action_ste_pool(struct seq_file *f,
+					   struct mlx5hws_action_ste_pool *pool)
+{
+	struct mlx5hws_action_ste_table *action_tbl;
+	enum mlx5hws_pool_optimize opt;
+
+	mutex_lock(&pool->lock);
+	for (opt = MLX5HWS_POOL_OPTIMIZE_NONE; opt < MLX5HWS_POOL_OPTIMIZE_MAX;
+	     opt++) {
+		list_for_each_entry(action_tbl, &pool->elems[opt].available,
+				    list_node) {
+			hws_debug_dump_action_ste_table(f, action_tbl);
+		}
+	}
+	mutex_unlock(&pool->lock);
+}
+
 static int hws_debug_dump_context(struct seq_file *f, struct mlx5hws_context *ctx)
 {
 	struct mlx5hws_table *tbl;
-	int ret;
+	int ret, i;
 
 	ret = hws_debug_dump_context_info(f, ctx);
 	if (ret)
@@ -410,6 +441,9 @@ static int hws_debug_dump_context(struct seq_file *f, struct mlx5hws_context *ct
 		return ret;
 	}
 
+	for (i = 0; i < ctx->queues; i++)
+		hws_debug_dump_action_ste_pool(f, &ctx->action_ste_pool[i]);
+
 	return 0;
 }
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/debug.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/debug.h
index e44e7ae28f93..89c396f9f266 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/debug.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/debug.h
@@ -26,6 +26,8 @@ enum mlx5hws_debug_res_type {
 	MLX5HWS_DEBUG_RES_TYPE_MATCHER_TEMPLATE_HASH_DEFINER = 4205,
 	MLX5HWS_DEBUG_RES_TYPE_MATCHER_TEMPLATE_RANGE_DEFINER = 4206,
 	MLX5HWS_DEBUG_RES_TYPE_MATCHER_TEMPLATE_COMPARE_MATCH_DEFINER = 4207,
+
+	MLX5HWS_DEBUG_RES_TYPE_ACTION_STE_TABLE = 4300,
 };
 
 static inline u64