From patchwork Wed Apr 12 04:07:38 2023
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev@vger.kernel.org, Tariq Toukan, Vlad Buslov, Maor Dickman, Roi Dayan
Subject: [net-next 01/15] net/mlx5: Add mlx5_ifc definitions for bridge multicast support
Date: Tue, 11 Apr 2023 21:07:38 -0700
Message-Id: <20230412040752.14220-2-saeed@kernel.org>
In-Reply-To: <20230412040752.14220-1-saeed@kernel.org>
References: <20230412040752.14220-1-saeed@kernel.org>

From: Vlad Buslov

Add the required hardware definitions to mlx5_ifc: fdb_uplink_hairpin,
fdb_multi_path_any_table_limit_regc, fdb_multi_path_any_table.
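Patch 5 of this series gates bridge multicast offload on these capability bits. As a rough sketch of how they are meant to be consumed (the helper name here is hypothetical; the capability checks mirror the gating condition added in patch 5):

	/* Sketch only: hypothetical helper; the checks mirror the gating
	 * logic that patch 5 of this series adds before enabling offload. */
	static bool mlx5_esw_bridge_mcast_offload_supported(struct mlx5_eswitch *esw)
	{
		return (MLX5_CAP_ESW_FLOWTABLE(esw->dev, fdb_multi_path_any_table) ||
			MLX5_CAP_ESW_FLOWTABLE(esw->dev, fdb_multi_path_any_table_limit_regc)) &&
		       MLX5_CAP_ESW_FLOWTABLE(esw->dev, fdb_uplink_hairpin) &&
		       MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ignore_flow_level);
	}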
Signed-off-by: Vlad Buslov
Reviewed-by: Maor Dickman
Reviewed-by: Roi Dayan
Signed-off-by: Saeed Mahameed
---
 include/linux/mlx5/mlx5_ifc.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index e47d6c58da35..02c628f4fe26 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -880,7 +880,12 @@ enum {
 struct mlx5_ifc_flow_table_eswitch_cap_bits {
 	u8 fdb_to_vport_reg_c_id[0x8];
-	u8 reserved_at_8[0xd];
+	u8 reserved_at_8[0x5];
+	u8 fdb_uplink_hairpin[0x1];
+	u8 fdb_multi_path_any_table_limit_regc[0x1];
+	u8 reserved_at_f[0x3];
+	u8 fdb_multi_path_any_table[0x1];
+	u8 reserved_at_13[0x2];
 	u8 fdb_modify_header_fwd_to_table[0x1];
 	u8 fdb_ipv4_ttl_modify[0x1];
 	u8 flow_source[0x1];

From patchwork Wed Apr 12 04:07:39 2023
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev@vger.kernel.org, Tariq Toukan, Vlad Buslov, Maor Dickman, Roi Dayan
Subject: [net-next 02/15] net/mlx5: Bridge, increase bridge tables sizes
Date: Tue, 11 Apr 2023 21:07:39 -0700
Message-Id: <20230412040752.14220-3-saeed@kernel.org>
In-Reply-To: <20230412040752.14220-1-saeed@kernel.org>
References: <20230412040752.14220-1-saeed@kernel.org>

From: Vlad Buslov

Bridge ingress and egress tables got more flow groups recently for QinQ support and will get more in the following patches of this series.
Increase the sizes of the tables to allow offloading more flows in each mode.

Signed-off-by: Vlad Buslov
Reviewed-by: Maor Dickman
Reviewed-by: Roi Dayan
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c
index 3cdcb0e0b20f..e45f9bb80535 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c
@@ -13,8 +13,8 @@
 #define CREATE_TRACE_POINTS
 #include "diag/bridge_tracepoint.h"
 
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE 12000
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_UNTAGGED_GRP_SIZE 16000
+#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE 131072
+#define MLX5_ESW_BRIDGE_INGRESS_TABLE_UNTAGGED_GRP_SIZE 524288
 #define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_IDX_FROM 0
 #define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_IDX_TO \
 	(MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE - 1)
@@ -40,10 +40,10 @@
 	 MLX5_ESW_BRIDGE_INGRESS_TABLE_UNTAGGED_GRP_SIZE - 1)
 #define MLX5_ESW_BRIDGE_INGRESS_TABLE_SIZE \
 	(MLX5_ESW_BRIDGE_INGRESS_TABLE_MAC_GRP_IDX_TO + 1)
-static_assert(MLX5_ESW_BRIDGE_INGRESS_TABLE_SIZE == 64000);
+static_assert(MLX5_ESW_BRIDGE_INGRESS_TABLE_SIZE == 1048576);
 
-#define MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_SIZE 16000
-#define MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_SIZE (32000 - 1)
+#define MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_SIZE 131072
+#define MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_SIZE (262144 - 1)
 #define MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_IDX_FROM 0
 #define MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_IDX_TO \
 	(MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_SIZE - 1)
@@ -63,7 +63,7 @@ static_assert(MLX5_ESW_BRIDGE_INGRESS_TABLE_SIZE == 64000);
 	MLX5_ESW_BRIDGE_EGRESS_TABLE_MISS_GRP_IDX_FROM
 #define MLX5_ESW_BRIDGE_EGRESS_TABLE_SIZE \
 	(MLX5_ESW_BRIDGE_EGRESS_TABLE_MISS_GRP_IDX_TO + 1)
-static_assert(MLX5_ESW_BRIDGE_EGRESS_TABLE_SIZE == 64000);
+static_assert(MLX5_ESW_BRIDGE_EGRESS_TABLE_SIZE == 524288);
 
 #define MLX5_ESW_BRIDGE_SKIP_TABLE_SIZE 0
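The new constants line up with the updated static_asserts above: the ingress table holds four VLAN-sized flow groups (VLAN, VLAN filter, QinQ, QinQ filter) plus the untagged MAC group, and the egress table holds two VLAN-sized groups plus the MAC group and a single-entry miss group. A quick arithmetic check, using the values from the diff:

	/* ingress table: 4 VLAN-sized groups + untagged group */
	static_assert(4 * 131072 + 524288 == 1048576);
	/* egress table: 2 VLAN-sized groups + MAC group + 1-entry miss group */
	static_assert(2 * 131072 + (262144 - 1) + 1 == 524288);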
From patchwork Wed Apr 12 04:07:40 2023
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev@vger.kernel.org, Tariq Toukan, Vlad Buslov, Maor Dickman, Roi Dayan
Subject: [net-next 03/15] net/mlx5: Bridge, move additional data structures to priv header
Date: Tue, 11 Apr 2023 21:07:40 -0700
Message-Id: <20230412040752.14220-4-saeed@kernel.org>
In-Reply-To: <20230412040752.14220-1-saeed@kernel.org>
References: <20230412040752.14220-1-saeed@kernel.org>

From: Vlad Buslov

Following patches in this series will require accessing flow table and group sizes, table levels, and struct mlx5_esw_bridge from the new source file dedicated to multicast code. Expose this data in bridge_priv.h to reduce clutter in the following patches that implement the actual functionality.

Signed-off-by: Vlad Buslov
Reviewed-by: Maor Dickman
Reviewed-by: Roi Dayan
Signed-off-by: Saeed Mahameed
---
 .../ethernet/mellanox/mlx5/core/esw/bridge.c  | 85 -------------------
 .../mellanox/mlx5/core/esw/bridge_priv.h      | 85 +++++++++++++++++++
 2 files changed, 85 insertions(+), 85 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c
index e45f9bb80535..ec052fff7712 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c
@@ -13,66 +13,6 @@
 #define CREATE_TRACE_POINTS
 #include "diag/bridge_tracepoint.h"
 
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE 131072
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_UNTAGGED_GRP_SIZE 524288
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_IDX_FROM 0
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_IDX_TO \
-	(MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE - 1)
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_FILTER_GRP_IDX_FROM \
-	(MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_IDX_TO + 1)
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_FILTER_GRP_IDX_TO \
-	(MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_FILTER_GRP_IDX_FROM + \
-	 MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE - 1)
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_GRP_IDX_FROM \
-	(MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_FILTER_GRP_IDX_TO + 1)
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_GRP_IDX_TO \
-	(MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_GRP_IDX_FROM + \
-	 MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE - 1)
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_FILTER_GRP_IDX_FROM \
-	(MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_GRP_IDX_TO + 1)
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_FILTER_GRP_IDX_TO \
-	(MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_FILTER_GRP_IDX_FROM + \
-	 MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE - 1)
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_MAC_GRP_IDX_FROM \
-	(MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_FILTER_GRP_IDX_TO + 1)
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_MAC_GRP_IDX_TO \
-
(MLX5_ESW_BRIDGE_INGRESS_TABLE_MAC_GRP_IDX_FROM + \ - MLX5_ESW_BRIDGE_INGRESS_TABLE_UNTAGGED_GRP_SIZE - 1) -#define MLX5_ESW_BRIDGE_INGRESS_TABLE_SIZE \ - (MLX5_ESW_BRIDGE_INGRESS_TABLE_MAC_GRP_IDX_TO + 1) -static_assert(MLX5_ESW_BRIDGE_INGRESS_TABLE_SIZE == 1048576); - -#define MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_SIZE 131072 -#define MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_SIZE (262144 - 1) -#define MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_IDX_FROM 0 -#define MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_IDX_TO \ - (MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_SIZE - 1) -#define MLX5_ESW_BRIDGE_EGRESS_TABLE_QINQ_GRP_IDX_FROM \ - (MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_IDX_TO + 1) -#define MLX5_ESW_BRIDGE_EGRESS_TABLE_QINQ_GRP_IDX_TO \ - (MLX5_ESW_BRIDGE_EGRESS_TABLE_QINQ_GRP_IDX_FROM + \ - MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_SIZE - 1) -#define MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_IDX_FROM \ - (MLX5_ESW_BRIDGE_EGRESS_TABLE_QINQ_GRP_IDX_TO + 1) -#define MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_IDX_TO \ - (MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_IDX_FROM + \ - MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_SIZE - 1) -#define MLX5_ESW_BRIDGE_EGRESS_TABLE_MISS_GRP_IDX_FROM \ - (MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_IDX_TO + 1) -#define MLX5_ESW_BRIDGE_EGRESS_TABLE_MISS_GRP_IDX_TO \ - MLX5_ESW_BRIDGE_EGRESS_TABLE_MISS_GRP_IDX_FROM -#define MLX5_ESW_BRIDGE_EGRESS_TABLE_SIZE \ - (MLX5_ESW_BRIDGE_EGRESS_TABLE_MISS_GRP_IDX_TO + 1) -static_assert(MLX5_ESW_BRIDGE_EGRESS_TABLE_SIZE == 524288); - -#define MLX5_ESW_BRIDGE_SKIP_TABLE_SIZE 0 - -enum { - MLX5_ESW_BRIDGE_LEVEL_INGRESS_TABLE, - MLX5_ESW_BRIDGE_LEVEL_EGRESS_TABLE, - MLX5_ESW_BRIDGE_LEVEL_SKIP_TABLE, -}; - static const struct rhashtable_params fdb_ht_params = { .key_offset = offsetof(struct mlx5_esw_bridge_fdb_entry, key), .key_len = sizeof(struct mlx5_esw_bridge_fdb_key), @@ -80,31 +20,6 @@ static const struct rhashtable_params fdb_ht_params = { .automatic_shrinking = true, }; -enum { - MLX5_ESW_BRIDGE_VLAN_FILTERING_FLAG = BIT(0), -}; - -struct mlx5_esw_bridge { - int ifindex; - int refcnt; - struct list_head list; - struct mlx5_esw_bridge_offloads *br_offloads; - - struct list_head fdb_list; - struct rhashtable fdb_ht; - - struct mlx5_flow_table *egress_ft; - struct mlx5_flow_group *egress_vlan_fg; - struct mlx5_flow_group *egress_qinq_fg; - struct mlx5_flow_group *egress_mac_fg; - struct mlx5_flow_group *egress_miss_fg; - struct mlx5_pkt_reformat *egress_miss_pkt_reformat; - struct mlx5_flow_handle *egress_miss_handle; - unsigned long ageing_time; - u32 flags; - u16 vlan_proto; -}; - static void mlx5_esw_bridge_fdb_offload_notify(struct net_device *dev, const unsigned char *addr, u16 vid, unsigned long val) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h index 878311fe950a..b99761e73c1b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h @@ -12,6 +12,70 @@ #include #include "fs_core.h" +#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE 131072 +#define MLX5_ESW_BRIDGE_INGRESS_TABLE_UNTAGGED_GRP_SIZE 524288 +#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_IDX_FROM 0 +#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_IDX_TO \ + (MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE - 1) +#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_FILTER_GRP_IDX_FROM \ + (MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_IDX_TO + 1) +#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_FILTER_GRP_IDX_TO \ + (MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_FILTER_GRP_IDX_FROM + 
\ + MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE - 1) +#define MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_GRP_IDX_FROM \ + (MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_FILTER_GRP_IDX_TO + 1) +#define MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_GRP_IDX_TO \ + (MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_GRP_IDX_FROM + \ + MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE - 1) +#define MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_FILTER_GRP_IDX_FROM \ + (MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_GRP_IDX_TO + 1) +#define MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_FILTER_GRP_IDX_TO \ + (MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_FILTER_GRP_IDX_FROM + \ + MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE - 1) +#define MLX5_ESW_BRIDGE_INGRESS_TABLE_MAC_GRP_IDX_FROM \ + (MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_FILTER_GRP_IDX_TO + 1) +#define MLX5_ESW_BRIDGE_INGRESS_TABLE_MAC_GRP_IDX_TO \ + (MLX5_ESW_BRIDGE_INGRESS_TABLE_MAC_GRP_IDX_FROM + \ + MLX5_ESW_BRIDGE_INGRESS_TABLE_UNTAGGED_GRP_SIZE - 1) +#define MLX5_ESW_BRIDGE_INGRESS_TABLE_SIZE \ + (MLX5_ESW_BRIDGE_INGRESS_TABLE_MAC_GRP_IDX_TO + 1) +static_assert(MLX5_ESW_BRIDGE_INGRESS_TABLE_SIZE == 1048576); + +#define MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_SIZE 131072 +#define MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_SIZE (262144 - 1) +#define MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_IDX_FROM 0 +#define MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_IDX_TO \ + (MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_SIZE - 1) +#define MLX5_ESW_BRIDGE_EGRESS_TABLE_QINQ_GRP_IDX_FROM \ + (MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_IDX_TO + 1) +#define MLX5_ESW_BRIDGE_EGRESS_TABLE_QINQ_GRP_IDX_TO \ + (MLX5_ESW_BRIDGE_EGRESS_TABLE_QINQ_GRP_IDX_FROM + \ + MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_SIZE - 1) +#define MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_IDX_FROM \ + (MLX5_ESW_BRIDGE_EGRESS_TABLE_QINQ_GRP_IDX_TO + 1) +#define MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_IDX_TO \ + (MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_IDX_FROM + \ + MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_SIZE - 1) +#define MLX5_ESW_BRIDGE_EGRESS_TABLE_MISS_GRP_IDX_FROM \ + (MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_IDX_TO + 1) +#define MLX5_ESW_BRIDGE_EGRESS_TABLE_MISS_GRP_IDX_TO \ + MLX5_ESW_BRIDGE_EGRESS_TABLE_MISS_GRP_IDX_FROM +#define MLX5_ESW_BRIDGE_EGRESS_TABLE_SIZE \ + (MLX5_ESW_BRIDGE_EGRESS_TABLE_MISS_GRP_IDX_TO + 1) +static_assert(MLX5_ESW_BRIDGE_EGRESS_TABLE_SIZE == 524288); + +#define MLX5_ESW_BRIDGE_SKIP_TABLE_SIZE 0 + +enum { + MLX5_ESW_BRIDGE_LEVEL_INGRESS_TABLE, + MLX5_ESW_BRIDGE_LEVEL_EGRESS_TABLE, + MLX5_ESW_BRIDGE_LEVEL_SKIP_TABLE, +}; + +enum { + MLX5_ESW_BRIDGE_VLAN_FILTERING_FLAG = BIT(0), +}; + struct mlx5_esw_bridge_fdb_key { unsigned char addr[ETH_ALEN]; u16 vid; @@ -60,4 +124,25 @@ struct mlx5_esw_bridge_port { struct xarray vlans; }; +struct mlx5_esw_bridge { + int ifindex; + int refcnt; + struct list_head list; + struct mlx5_esw_bridge_offloads *br_offloads; + + struct list_head fdb_list; + struct rhashtable fdb_ht; + + struct mlx5_flow_table *egress_ft; + struct mlx5_flow_group *egress_vlan_fg; + struct mlx5_flow_group *egress_qinq_fg; + struct mlx5_flow_group *egress_mac_fg; + struct mlx5_flow_group *egress_miss_fg; + struct mlx5_pkt_reformat *egress_miss_pkt_reformat; + struct mlx5_flow_handle *egress_miss_handle; + unsigned long ageing_time; + u32 flags; + u16 vlan_proto; +}; + #endif /* _MLX5_ESW_BRIDGE_PRIVATE_ */ From patchwork Wed Apr 12 04:07:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13208420 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev@vger.kernel.org, Tariq Toukan, Vlad Buslov, Maor Dickman, Roi Dayan
Subject: [net-next 04/15] net/mlx5: Bridge, extract code to lookup parent bridge of port
Date: Tue, 11 Apr 2023 21:07:41 -0700
Message-Id: <20230412040752.14220-5-saeed@kernel.org>
In-Reply-To: <20230412040752.14220-1-saeed@kernel.org>
References: <20230412040752.14220-1-saeed@kernel.org>

From: Vlad Buslov

The pattern where a function looks up a port by its vport_num+vhca_id tuple just to obtain the parent bridge is repeated multiple times in bridge.c. Further commits in this series use the pattern even more. Extract the pattern into a standalone mlx5_esw_bridge_from_port_lookup() function to improve code readability. This commit doesn't change functionality.
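As a before/after sketch of the call sites being consolidated (abbreviated from the diff below):

	/* Before: each call site open-coded the two-step lookup. */
	port = mlx5_esw_bridge_port_lookup(vport_num, esw_owner_vhca_id, br_offloads);
	if (!port)
		return -EINVAL;
	bridge = port->bridge;

	/* After: a single helper resolves the bridge directly. */
	bridge = mlx5_esw_bridge_from_port_lookup(vport_num, esw_owner_vhca_id, br_offloads);
	if (!bridge)
		return -EINVAL;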
Signed-off-by: Vlad Buslov Reviewed-by: Maor Dickman Reviewed-by: Roi Dayan Signed-off-by: Saeed Mahameed --- .../ethernet/mellanox/mlx5/core/esw/bridge.c | 47 ++++++++++--------- 1 file changed, 26 insertions(+), 21 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c index ec052fff7712..bbbf982bbbc0 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c @@ -933,6 +933,19 @@ static void mlx5_esw_bridge_port_erase(struct mlx5_esw_bridge_port *port, xa_erase(&br_offloads->ports, mlx5_esw_bridge_port_key(port)); } +static struct mlx5_esw_bridge * +mlx5_esw_bridge_from_port_lookup(u16 vport_num, u16 esw_owner_vhca_id, + struct mlx5_esw_bridge_offloads *br_offloads) +{ + struct mlx5_esw_bridge_port *port; + + port = mlx5_esw_bridge_port_lookup(vport_num, esw_owner_vhca_id, br_offloads); + if (!port) + return NULL; + + return port->bridge; +} + static void mlx5_esw_bridge_fdb_entry_refresh(struct mlx5_esw_bridge_fdb_entry *entry) { trace_mlx5_esw_bridge_fdb_entry_refresh(entry); @@ -1388,28 +1401,26 @@ mlx5_esw_bridge_fdb_entry_init(struct net_device *dev, u16 vport_num, u16 esw_ow int mlx5_esw_bridge_ageing_time_set(u16 vport_num, u16 esw_owner_vhca_id, unsigned long ageing_time, struct mlx5_esw_bridge_offloads *br_offloads) { - struct mlx5_esw_bridge_port *port; + struct mlx5_esw_bridge *bridge; - port = mlx5_esw_bridge_port_lookup(vport_num, esw_owner_vhca_id, br_offloads); - if (!port) + bridge = mlx5_esw_bridge_from_port_lookup(vport_num, esw_owner_vhca_id, br_offloads); + if (!bridge) return -EINVAL; - port->bridge->ageing_time = clock_t_to_jiffies(ageing_time); + bridge->ageing_time = clock_t_to_jiffies(ageing_time); return 0; } int mlx5_esw_bridge_vlan_filtering_set(u16 vport_num, u16 esw_owner_vhca_id, bool enable, struct mlx5_esw_bridge_offloads *br_offloads) { - struct mlx5_esw_bridge_port *port; struct mlx5_esw_bridge *bridge; bool filtering; - port = mlx5_esw_bridge_port_lookup(vport_num, esw_owner_vhca_id, br_offloads); - if (!port) + bridge = mlx5_esw_bridge_from_port_lookup(vport_num, esw_owner_vhca_id, br_offloads); + if (!bridge) return -EINVAL; - bridge = port->bridge; filtering = bridge->flags & MLX5_ESW_BRIDGE_VLAN_FILTERING_FLAG; if (filtering == enable) return 0; @@ -1426,15 +1437,13 @@ int mlx5_esw_bridge_vlan_filtering_set(u16 vport_num, u16 esw_owner_vhca_id, boo int mlx5_esw_bridge_vlan_proto_set(u16 vport_num, u16 esw_owner_vhca_id, u16 proto, struct mlx5_esw_bridge_offloads *br_offloads) { - struct mlx5_esw_bridge_port *port; struct mlx5_esw_bridge *bridge; - port = mlx5_esw_bridge_port_lookup(vport_num, esw_owner_vhca_id, - br_offloads); - if (!port) + bridge = mlx5_esw_bridge_from_port_lookup(vport_num, esw_owner_vhca_id, + br_offloads); + if (!bridge) return -EINVAL; - bridge = port->bridge; if (bridge->vlan_proto == proto) return 0; if (proto != ETH_P_8021Q && proto != ETH_P_8021AD) { @@ -1626,14 +1635,12 @@ void mlx5_esw_bridge_fdb_update_used(struct net_device *dev, u16 vport_num, u16 struct switchdev_notifier_fdb_info *fdb_info) { struct mlx5_esw_bridge_fdb_entry *entry; - struct mlx5_esw_bridge_port *port; struct mlx5_esw_bridge *bridge; - port = mlx5_esw_bridge_port_lookup(vport_num, esw_owner_vhca_id, br_offloads); - if (!port) + bridge = mlx5_esw_bridge_from_port_lookup(vport_num, esw_owner_vhca_id, br_offloads); + if (!bridge) return; - bridge = port->bridge; entry = mlx5_esw_bridge_fdb_lookup(bridge, 
fdb_info->addr, fdb_info->vid);
 	if (!entry) {
 		esw_debug(br_offloads->esw->dev,
@@ -1680,14 +1687,12 @@
 {
 	struct mlx5_eswitch *esw = br_offloads->esw;
 	struct mlx5_esw_bridge_fdb_entry *entry;
-	struct mlx5_esw_bridge_port *port;
 	struct mlx5_esw_bridge *bridge;
 
-	port = mlx5_esw_bridge_port_lookup(vport_num, esw_owner_vhca_id, br_offloads);
-	if (!port)
+	bridge = mlx5_esw_bridge_from_port_lookup(vport_num, esw_owner_vhca_id, br_offloads);
+	if (!bridge)
 		return;
 
-	bridge = port->bridge;
 	entry = mlx5_esw_bridge_fdb_lookup(bridge, fdb_info->addr, fdb_info->vid);
 	if (!entry) {
 		esw_debug(esw->dev,

From patchwork Wed Apr 12 04:07:42 2023
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev@vger.kernel.org, Tariq Toukan, Vlad Buslov, Maor Dickman, Roi Dayan
Subject: [net-next 05/15] net/mlx5: Bridge, snoop igmp/mld packets
Date: Tue, 11 Apr 2023 21:07:42 -0700
Message-Id: <20230412040752.14220-6-saeed@kernel.org>
In-Reply-To: <20230412040752.14220-1-saeed@kernel.org>
References: <20230412040752.14220-1-saeed@kernel.org>

From: Vlad Buslov

Handle the SWITCHDEV_ATTR_ID_BRIDGE_MC_DISABLED attribute notification to dynamically toggle bridge multicast offload. Set the new MLX5_ESW_BRIDGE_MCAST_FLAG bridge flag when multicast offload is enabled. Put multicast-specific code into the new bridge_mcast.c file.
When initializing bridge multicast pipeline create a static rule for snooping on IGMP traffic and three rules for snooping on MLD traffic (for query, report and done message types). Note that matching MLD traffic requires having flexparser MLX5_FLEX_PROTO_ICMPV6 capability enabled. By default Linux bridge is created with multicast enabled which can be modified by 'mcast_snooping' argument: $ ip link set name my_bridge type bridge mcast_snooping 0 Signed-off-by: Vlad Buslov Reviewed-by: Maor Dickman Reviewed-by: Roi Dayan Signed-off-by: Saeed Mahameed --- .../net/ethernet/mellanox/mlx5/core/Makefile | 2 +- .../mellanox/mlx5/core/en/rep/bridge.c | 4 + .../ethernet/mellanox/mlx5/core/esw/bridge.c | 31 ++ .../ethernet/mellanox/mlx5/core/esw/bridge.h | 9 + .../mellanox/mlx5/core/esw/bridge_mcast.c | 316 ++++++++++++++++++ .../mellanox/mlx5/core/esw/bridge_priv.h | 27 +- 6 files changed, 384 insertions(+), 5 deletions(-) create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Makefile b/drivers/net/ethernet/mellanox/mlx5/core/Makefile index 6c2f1d4a58ab..4a009b720bee 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/Makefile +++ b/drivers/net/ethernet/mellanox/mlx5/core/Makefile @@ -75,7 +75,7 @@ mlx5_core-$(CONFIG_MLX5_ESWITCH) += esw/acl/helper.o \ esw/acl/egress_lgcy.o esw/acl/egress_ofld.o \ esw/acl/ingress_lgcy.o esw/acl/ingress_ofld.o -mlx5_core-$(CONFIG_MLX5_BRIDGE) += esw/bridge.o en/rep/bridge.o +mlx5_core-$(CONFIG_MLX5_BRIDGE) += esw/bridge.o esw/bridge_mcast.o en/rep/bridge.o mlx5_core-$(CONFIG_THERMAL) += thermal.o mlx5_core-$(CONFIG_MLX5_MPFS) += lib/mpfs.o diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c index ce85b48d327d..9701eb3c5f1b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c @@ -306,6 +306,10 @@ mlx5_esw_bridge_port_obj_attr_set(struct net_device *dev, attr->u.vlan_protocol, br_offloads); break; + case SWITCHDEV_ATTR_ID_BRIDGE_MC_DISABLED: + err = mlx5_esw_bridge_mcast_set(vport_num, esw_owner_vhca_id, + !attr->u.mc_disabled, br_offloads); + break; default: err = -EOPNOTSUPP; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c index bbbf982bbbc0..35436aa9548d 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c @@ -868,6 +868,7 @@ static void mlx5_esw_bridge_put(struct mlx5_esw_bridge_offloads *br_offloads, return; mlx5_esw_bridge_egress_table_cleanup(bridge); + mlx5_esw_bridge_mcast_disable(bridge); list_del(&bridge->list); rhashtable_destroy(&bridge->fdb_ht); kvfree(bridge); @@ -1458,6 +1459,36 @@ int mlx5_esw_bridge_vlan_proto_set(u16 vport_num, u16 esw_owner_vhca_id, u16 pro return 0; } +int mlx5_esw_bridge_mcast_set(u16 vport_num, u16 esw_owner_vhca_id, bool enable, + struct mlx5_esw_bridge_offloads *br_offloads) +{ + struct mlx5_eswitch *esw = br_offloads->esw; + struct mlx5_esw_bridge *bridge; + int err = 0; + bool mcast; + + if (!(MLX5_CAP_ESW_FLOWTABLE((esw)->dev, fdb_multi_path_any_table) || + MLX5_CAP_ESW_FLOWTABLE((esw)->dev, fdb_multi_path_any_table_limit_regc)) || + !MLX5_CAP_ESW_FLOWTABLE((esw)->dev, fdb_uplink_hairpin) || + !MLX5_CAP_ESW_FLOWTABLE_FDB((esw)->dev, ignore_flow_level)) + return -EOPNOTSUPP; + + bridge = mlx5_esw_bridge_from_port_lookup(vport_num, esw_owner_vhca_id, 
br_offloads); + if (!bridge) + return -EINVAL; + + mcast = bridge->flags & MLX5_ESW_BRIDGE_MCAST_FLAG; + if (mcast == enable) + return 0; + + if (enable) + err = mlx5_esw_bridge_mcast_enable(bridge); + else + mlx5_esw_bridge_mcast_disable(bridge); + + return err; +} + static int mlx5_esw_bridge_vport_init(u16 vport_num, u16 esw_owner_vhca_id, u16 flags, struct mlx5_esw_bridge_offloads *br_offloads, struct mlx5_esw_bridge *bridge) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.h index 10851a515bca..b18f137173d9 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.h @@ -25,12 +25,19 @@ struct mlx5_esw_bridge_offloads { struct delayed_work update_work; struct mlx5_flow_table *ingress_ft; + struct mlx5_flow_group *ingress_igmp_fg; + struct mlx5_flow_group *ingress_mld_fg; struct mlx5_flow_group *ingress_vlan_fg; struct mlx5_flow_group *ingress_vlan_filter_fg; struct mlx5_flow_group *ingress_qinq_fg; struct mlx5_flow_group *ingress_qinq_filter_fg; struct mlx5_flow_group *ingress_mac_fg; + struct mlx5_flow_handle *igmp_handle; + struct mlx5_flow_handle *mld_query_handle; + struct mlx5_flow_handle *mld_report_handle; + struct mlx5_flow_handle *mld_done_handle; + struct mlx5_flow_table *skip_ft; }; @@ -64,6 +71,8 @@ int mlx5_esw_bridge_vlan_filtering_set(u16 vport_num, u16 esw_owner_vhca_id, boo struct mlx5_esw_bridge_offloads *br_offloads); int mlx5_esw_bridge_vlan_proto_set(u16 vport_num, u16 esw_owner_vhca_id, u16 proto, struct mlx5_esw_bridge_offloads *br_offloads); +int mlx5_esw_bridge_mcast_set(u16 vport_num, u16 esw_owner_vhca_id, bool enable, + struct mlx5_esw_bridge_offloads *br_offloads); int mlx5_esw_bridge_port_vlan_add(u16 vport_num, u16 esw_owner_vhca_id, u16 vid, u16 flags, struct mlx5_esw_bridge_offloads *br_offloads, struct netlink_ext_ack *extack); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c new file mode 100644 index 000000000000..d5a89a86c9e8 --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c @@ -0,0 +1,316 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved. 
*/ + +#include "bridge.h" +#include "eswitch.h" +#include "bridge_priv.h" + +static struct mlx5_flow_group * +mlx5_esw_bridge_ingress_igmp_fg_create(struct mlx5_eswitch *esw, + struct mlx5_flow_table *ingress_ft) +{ + int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in); + struct mlx5_flow_group *fg; + u32 *in, *match; + + in = kvzalloc(inlen, GFP_KERNEL); + if (!in) + return ERR_PTR(-ENOMEM); + + MLX5_SET(create_flow_group_in, in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS); + match = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria); + + MLX5_SET_TO_ONES(fte_match_param, match, outer_headers.ip_version); + MLX5_SET_TO_ONES(fte_match_param, match, outer_headers.ip_protocol); + + MLX5_SET(create_flow_group_in, in, start_flow_index, + MLX5_ESW_BRIDGE_INGRESS_TABLE_IGMP_GRP_IDX_FROM); + MLX5_SET(create_flow_group_in, in, end_flow_index, + MLX5_ESW_BRIDGE_INGRESS_TABLE_IGMP_GRP_IDX_TO); + + fg = mlx5_create_flow_group(ingress_ft, in); + kvfree(in); + if (IS_ERR(fg)) + esw_warn(esw->dev, + "Failed to create IGMP flow group for bridge ingress table (err=%pe)\n", + fg); + + return fg; +} + +static struct mlx5_flow_group * +mlx5_esw_bridge_ingress_mld_fg_create(struct mlx5_eswitch *esw, + struct mlx5_flow_table *ingress_ft) +{ + int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in); + struct mlx5_flow_group *fg; + u32 *in, *match; + + if (!(MLX5_CAP_GEN(esw->dev, flex_parser_protocols) & MLX5_FLEX_PROTO_ICMPV6)) { + esw_warn(esw->dev, + "Can't create MLD flow group due to missing hardware ICMPv6 parsing support\n"); + return NULL; + } + + in = kvzalloc(inlen, GFP_KERNEL); + if (!in) + return ERR_PTR(-ENOMEM); + + MLX5_SET(create_flow_group_in, in, match_criteria_enable, + MLX5_MATCH_OUTER_HEADERS | MLX5_MATCH_MISC_PARAMETERS_3); + match = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria); + + MLX5_SET_TO_ONES(fte_match_param, match, outer_headers.ip_version); + MLX5_SET_TO_ONES(fte_match_param, match, misc_parameters_3.icmpv6_type); + + MLX5_SET(create_flow_group_in, in, start_flow_index, + MLX5_ESW_BRIDGE_INGRESS_TABLE_MLD_GRP_IDX_FROM); + MLX5_SET(create_flow_group_in, in, end_flow_index, + MLX5_ESW_BRIDGE_INGRESS_TABLE_MLD_GRP_IDX_TO); + + fg = mlx5_create_flow_group(ingress_ft, in); + kvfree(in); + if (IS_ERR(fg)) + esw_warn(esw->dev, + "Failed to create MLD flow group for bridge ingress table (err=%pe)\n", + fg); + + return fg; +} + +static int +mlx5_esw_bridge_ingress_mcast_fgs_init(struct mlx5_esw_bridge_offloads *br_offloads) +{ + struct mlx5_flow_table *ingress_ft = br_offloads->ingress_ft; + struct mlx5_eswitch *esw = br_offloads->esw; + struct mlx5_flow_group *igmp_fg, *mld_fg; + + igmp_fg = mlx5_esw_bridge_ingress_igmp_fg_create(esw, ingress_ft); + if (IS_ERR(igmp_fg)) + return PTR_ERR(igmp_fg); + + mld_fg = mlx5_esw_bridge_ingress_mld_fg_create(esw, ingress_ft); + if (IS_ERR(mld_fg)) { + mlx5_destroy_flow_group(igmp_fg); + return PTR_ERR(mld_fg); + } + + br_offloads->ingress_igmp_fg = igmp_fg; + br_offloads->ingress_mld_fg = mld_fg; + return 0; +} + +static void +mlx5_esw_bridge_ingress_mcast_fgs_cleanup(struct mlx5_esw_bridge_offloads *br_offloads) +{ + if (br_offloads->ingress_mld_fg) + mlx5_destroy_flow_group(br_offloads->ingress_mld_fg); + br_offloads->ingress_mld_fg = NULL; + if (br_offloads->ingress_igmp_fg) + mlx5_destroy_flow_group(br_offloads->ingress_igmp_fg); + br_offloads->ingress_igmp_fg = NULL; +} + +static struct mlx5_flow_handle * +mlx5_esw_bridge_ingress_igmp_fh_create(struct mlx5_flow_table *ingress_ft, + struct mlx5_flow_table *skip_ft) +{ + struct 
mlx5_flow_destination dest = { + .type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE, + .ft = skip_ft, + }; + struct mlx5_flow_act flow_act = { + .action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST, + .flags = FLOW_ACT_NO_APPEND, + }; + struct mlx5_flow_spec *rule_spec; + struct mlx5_flow_handle *handle; + + rule_spec = kvzalloc(sizeof(*rule_spec), GFP_KERNEL); + if (!rule_spec) + return ERR_PTR(-ENOMEM); + + rule_spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS; + + MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria, outer_headers.ip_version); + MLX5_SET(fte_match_param, rule_spec->match_value, outer_headers.ip_version, 4); + MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria, outer_headers.ip_protocol); + MLX5_SET(fte_match_param, rule_spec->match_value, outer_headers.ip_protocol, IPPROTO_IGMP); + + handle = mlx5_add_flow_rules(ingress_ft, rule_spec, &flow_act, &dest, 1); + + kvfree(rule_spec); + return handle; +} + +static struct mlx5_flow_handle * +mlx5_esw_bridge_ingress_mld_fh_create(u8 type, struct mlx5_flow_table *ingress_ft, + struct mlx5_flow_table *skip_ft) +{ + struct mlx5_flow_destination dest = { + .type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE, + .ft = skip_ft, + }; + struct mlx5_flow_act flow_act = { + .action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST, + .flags = FLOW_ACT_NO_APPEND, + }; + struct mlx5_flow_spec *rule_spec; + struct mlx5_flow_handle *handle; + + rule_spec = kvzalloc(sizeof(*rule_spec), GFP_KERNEL); + if (!rule_spec) + return ERR_PTR(-ENOMEM); + + rule_spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS | MLX5_MATCH_MISC_PARAMETERS_3; + + MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria, outer_headers.ip_version); + MLX5_SET(fte_match_param, rule_spec->match_value, outer_headers.ip_version, 6); + MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria, misc_parameters_3.icmpv6_type); + MLX5_SET(fte_match_param, rule_spec->match_value, misc_parameters_3.icmpv6_type, type); + + handle = mlx5_add_flow_rules(ingress_ft, rule_spec, &flow_act, &dest, 1); + + kvfree(rule_spec); + return handle; +} + +static int +mlx5_esw_bridge_ingress_mcast_fhs_create(struct mlx5_esw_bridge_offloads *br_offloads) +{ + struct mlx5_flow_handle *igmp_handle, *mld_query_handle, *mld_report_handle, + *mld_done_handle; + struct mlx5_flow_table *ingress_ft = br_offloads->ingress_ft, + *skip_ft = br_offloads->skip_ft; + int err; + + igmp_handle = mlx5_esw_bridge_ingress_igmp_fh_create(ingress_ft, skip_ft); + if (IS_ERR(igmp_handle)) + return PTR_ERR(igmp_handle); + + if (br_offloads->ingress_mld_fg) { + mld_query_handle = mlx5_esw_bridge_ingress_mld_fh_create(ICMPV6_MGM_QUERY, + ingress_ft, + skip_ft); + if (IS_ERR(mld_query_handle)) { + err = PTR_ERR(mld_query_handle); + goto err_mld_query; + } + + mld_report_handle = mlx5_esw_bridge_ingress_mld_fh_create(ICMPV6_MGM_REPORT, + ingress_ft, + skip_ft); + if (IS_ERR(mld_report_handle)) { + err = PTR_ERR(mld_report_handle); + goto err_mld_report; + } + + mld_done_handle = mlx5_esw_bridge_ingress_mld_fh_create(ICMPV6_MGM_REDUCTION, + ingress_ft, + skip_ft); + if (IS_ERR(mld_done_handle)) { + err = PTR_ERR(mld_done_handle); + goto err_mld_done; + } + } else { + mld_query_handle = NULL; + mld_report_handle = NULL; + mld_done_handle = NULL; + } + + br_offloads->igmp_handle = igmp_handle; + br_offloads->mld_query_handle = mld_query_handle; + br_offloads->mld_report_handle = mld_report_handle; + br_offloads->mld_done_handle = mld_done_handle; + + return 0; + +err_mld_done: + mlx5_del_flow_rules(mld_report_handle); 
+err_mld_report: + mlx5_del_flow_rules(mld_query_handle); +err_mld_query: + mlx5_del_flow_rules(igmp_handle); + return err; +} + +static void +mlx5_esw_bridge_ingress_mcast_fhs_cleanup(struct mlx5_esw_bridge_offloads *br_offloads) +{ + if (br_offloads->mld_done_handle) + mlx5_del_flow_rules(br_offloads->mld_done_handle); + br_offloads->mld_done_handle = NULL; + if (br_offloads->mld_report_handle) + mlx5_del_flow_rules(br_offloads->mld_report_handle); + br_offloads->mld_report_handle = NULL; + if (br_offloads->mld_query_handle) + mlx5_del_flow_rules(br_offloads->mld_query_handle); + br_offloads->mld_query_handle = NULL; + if (br_offloads->igmp_handle) + mlx5_del_flow_rules(br_offloads->igmp_handle); + br_offloads->igmp_handle = NULL; +} + +static int mlx5_esw_brige_mcast_global_enable(struct mlx5_esw_bridge_offloads *br_offloads) +{ + int err; + + if (br_offloads->ingress_igmp_fg) + return 0; /* already enabled by another bridge */ + + err = mlx5_esw_bridge_ingress_mcast_fgs_init(br_offloads); + if (err) { + esw_warn(br_offloads->esw->dev, + "Failed to create global multicast flow groups (err=%d)\n", + err); + return err; + } + + err = mlx5_esw_bridge_ingress_mcast_fhs_create(br_offloads); + if (err) { + esw_warn(br_offloads->esw->dev, + "Failed to create global multicast flows (err=%d)\n", + err); + goto err_fhs; + } + + return 0; + +err_fhs: + mlx5_esw_bridge_ingress_mcast_fgs_cleanup(br_offloads); + return err; +} + +static void mlx5_esw_brige_mcast_global_disable(struct mlx5_esw_bridge_offloads *br_offloads) +{ + struct mlx5_esw_bridge *br; + + list_for_each_entry(br, &br_offloads->bridges, list) { + /* Ingress table is global, so only disable snooping when all + * bridges on esw have multicast disabled. + */ + if (br->flags & MLX5_ESW_BRIDGE_MCAST_FLAG) + return; + } + + mlx5_esw_bridge_ingress_mcast_fhs_cleanup(br_offloads); + mlx5_esw_bridge_ingress_mcast_fgs_cleanup(br_offloads); +} + +int mlx5_esw_bridge_mcast_enable(struct mlx5_esw_bridge *bridge) +{ + int err; + + err = mlx5_esw_brige_mcast_global_enable(bridge->br_offloads); + if (err) + return err; + + bridge->flags |= MLX5_ESW_BRIDGE_MCAST_FLAG; + return 0; +} + +void mlx5_esw_bridge_mcast_disable(struct mlx5_esw_bridge *bridge) +{ + bridge->flags &= ~MLX5_ESW_BRIDGE_MCAST_FLAG; + mlx5_esw_brige_mcast_global_disable(bridge->br_offloads); +} diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h index b99761e73c1b..dbb935db1b3c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h @@ -12,11 +12,26 @@ #include #include "fs_core.h" +#define MLX5_ESW_BRIDGE_INGRESS_TABLE_IGMP_GRP_SIZE 1 +#define MLX5_ESW_BRIDGE_INGRESS_TABLE_MLD_GRP_SIZE 3 #define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE 131072 -#define MLX5_ESW_BRIDGE_INGRESS_TABLE_UNTAGGED_GRP_SIZE 524288 -#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_IDX_FROM 0 -#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_IDX_TO \ - (MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE - 1) +#define MLX5_ESW_BRIDGE_INGRESS_TABLE_UNTAGGED_GRP_SIZE \ + (524288 - MLX5_ESW_BRIDGE_INGRESS_TABLE_IGMP_GRP_SIZE - \ + MLX5_ESW_BRIDGE_INGRESS_TABLE_MLD_GRP_SIZE) + +#define MLX5_ESW_BRIDGE_INGRESS_TABLE_IGMP_GRP_IDX_FROM 0 +#define MLX5_ESW_BRIDGE_INGRESS_TABLE_IGMP_GRP_IDX_TO \ + (MLX5_ESW_BRIDGE_INGRESS_TABLE_IGMP_GRP_SIZE - 1) +#define MLX5_ESW_BRIDGE_INGRESS_TABLE_MLD_GRP_IDX_FROM \ + (MLX5_ESW_BRIDGE_INGRESS_TABLE_IGMP_GRP_IDX_TO + 1) +#define 
MLX5_ESW_BRIDGE_INGRESS_TABLE_MLD_GRP_IDX_TO \
+	(MLX5_ESW_BRIDGE_INGRESS_TABLE_MLD_GRP_IDX_FROM + \
+	 MLX5_ESW_BRIDGE_INGRESS_TABLE_MLD_GRP_SIZE - 1)
+#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_IDX_FROM \
+	(MLX5_ESW_BRIDGE_INGRESS_TABLE_MLD_GRP_IDX_TO + 1)
+#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_IDX_TO \
+	(MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_IDX_FROM + \
+	 MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE - 1)
 #define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_FILTER_GRP_IDX_FROM \
 	(MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_IDX_TO + 1)
 #define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_FILTER_GRP_IDX_TO \
@@ -74,6 +89,7 @@ enum {
 
 enum {
 	MLX5_ESW_BRIDGE_VLAN_FILTERING_FLAG = BIT(0),
+	MLX5_ESW_BRIDGE_MCAST_FLAG = BIT(1),
 };
 
 struct mlx5_esw_bridge_fdb_key {
@@ -145,4 +161,7 @@ struct mlx5_esw_bridge {
 	u16 vlan_proto;
 };
 
+int mlx5_esw_bridge_mcast_enable(struct mlx5_esw_bridge *bridge);
+void mlx5_esw_bridge_mcast_disable(struct mlx5_esw_bridge *bridge);
+
 #endif /* _MLX5_ESW_BRIDGE_PRIVATE_ */

From patchwork Wed Apr 12 04:07:43 2023
Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Vlad Buslov , Maor Dickman , Roi Dayan Subject: [net-next 06/15] net/mlx5: Bridge, add per-port multicast replication tables Date: Tue, 11 Apr 2023 21:07:43 -0700 Message-Id: <20230412040752.14220-7-saeed@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230412040752.14220-1-saeed@kernel.org> References: <20230412040752.14220-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Vlad Buslov Multicast replication requires adding one more level of FDB_BR_OFFLOAD priority flow tables. The new level is used for per-port multicast-specific tables that have following flow groups structure (flow highest to lowest priority): - Flow group of size one that matches on source port metadata. This will have a static single rule that prevent packets from being replicated to their source port. - Flow group of size one that matches all packets and forwards them to the port that owns the table. Initialize the table dynamically on all bridge ports when adding a port to the bridge that has multicast enabled and on all existing bridge ports when receiving multicast enable notification. Signed-off-by: Vlad Buslov Reviewed-by: Maor Dickman Reviewed-by: Roi Dayan Signed-off-by: Saeed Mahameed --- .../ethernet/mellanox/mlx5/core/esw/bridge.c | 14 +- .../mellanox/mlx5/core/esw/bridge_mcast.c | 329 +++++++++++++++++- .../mellanox/mlx5/core/esw/bridge_priv.h | 28 ++ .../net/ethernet/mellanox/mlx5/core/fs_core.c | 2 +- 4 files changed, 370 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c index 35436aa9548d..4bc8c6fc394b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c @@ -61,7 +61,7 @@ mlx5_esw_bridge_pkt_reformat_vlan_pop_create(struct mlx5_eswitch *esw) return mlx5_packet_reformat_alloc(esw->dev, &reformat_params, MLX5_FLOW_NAMESPACE_FDB); } -static struct mlx5_flow_table * +struct mlx5_flow_table * mlx5_esw_bridge_table_create(int max_fte, u32 level, struct mlx5_eswitch *esw) { struct mlx5_flow_table_attr ft_attr = {}; @@ -1506,6 +1506,15 @@ static int mlx5_esw_bridge_vport_init(u16 vport_num, u16 esw_owner_vhca_id, u16 port->bridge = bridge; port->flags |= flags; xa_init(&port->vlans); + + err = mlx5_esw_bridge_port_mcast_init(port); + if (err) { + esw_warn(esw->dev, + "Failed to initialize port multicast (vport=%u,esw_owner_vhca_id=%u,err=%d)\n", + port->vport_num, port->esw_owner_vhca_id, err); + goto err_port_mcast; + } + err = mlx5_esw_bridge_port_insert(port, br_offloads); if (err) { esw_warn(esw->dev, @@ -1518,6 +1527,8 @@ static int mlx5_esw_bridge_vport_init(u16 vport_num, u16 esw_owner_vhca_id, u16 return 0; err_port_insert: + mlx5_esw_bridge_port_mcast_cleanup(port); +err_port_mcast: kvfree(port); return err; } @@ -1535,6 +1546,7 @@ static int mlx5_esw_bridge_vport_cleanup(struct mlx5_esw_bridge_offloads *br_off trace_mlx5_esw_bridge_vport_cleanup(port); mlx5_esw_bridge_port_vlans_flush(port, bridge); + mlx5_esw_bridge_port_mcast_cleanup(port); mlx5_esw_bridge_port_erase(port, br_offloads); kvfree(port); mlx5_esw_bridge_put(br_offloads, bridge); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c index d5a89a86c9e8..4f54cb41ed19 100644 
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c @@ -1,10 +1,283 @@ // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB /* Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */ +#include "lib/devcom.h" #include "bridge.h" #include "eswitch.h" #include "bridge_priv.h" +static int mlx5_esw_bridge_port_mcast_fts_init(struct mlx5_esw_bridge_port *port, + struct mlx5_esw_bridge *bridge) +{ + struct mlx5_eswitch *esw = bridge->br_offloads->esw; + struct mlx5_flow_table *mcast_ft; + + mcast_ft = mlx5_esw_bridge_table_create(MLX5_ESW_BRIDGE_MCAST_TABLE_SIZE, + MLX5_ESW_BRIDGE_LEVEL_MCAST_TABLE, + esw); + if (IS_ERR(mcast_ft)) + return PTR_ERR(mcast_ft); + + port->mcast.ft = mcast_ft; + return 0; +} + +static void mlx5_esw_bridge_port_mcast_fts_cleanup(struct mlx5_esw_bridge_port *port) +{ + if (port->mcast.ft) + mlx5_destroy_flow_table(port->mcast.ft); + port->mcast.ft = NULL; +} + +static struct mlx5_flow_group * +mlx5_esw_bridge_mcast_filter_fg_create(struct mlx5_eswitch *esw, + struct mlx5_flow_table *mcast_ft) +{ + int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in); + struct mlx5_flow_group *fg; + u32 *in, *match; + + in = kvzalloc(inlen, GFP_KERNEL); + if (!in) + return ERR_PTR(-ENOMEM); + + MLX5_SET(create_flow_group_in, in, match_criteria_enable, MLX5_MATCH_MISC_PARAMETERS_2); + match = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria); + + MLX5_SET(fte_match_param, match, misc_parameters_2.metadata_reg_c_0, + mlx5_eswitch_get_vport_metadata_mask()); + + MLX5_SET(create_flow_group_in, in, start_flow_index, + MLX5_ESW_BRIDGE_MCAST_TABLE_FILTER_GRP_IDX_FROM); + MLX5_SET(create_flow_group_in, in, end_flow_index, + MLX5_ESW_BRIDGE_MCAST_TABLE_FILTER_GRP_IDX_TO); + + fg = mlx5_create_flow_group(mcast_ft, in); + kvfree(in); + if (IS_ERR(fg)) + esw_warn(esw->dev, + "Failed to create filter flow group for bridge mcast table (err=%pe)\n", + fg); + + return fg; +} + +static struct mlx5_flow_group * +mlx5_esw_bridge_mcast_fwd_fg_create(struct mlx5_eswitch *esw, + struct mlx5_flow_table *mcast_ft) +{ + int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in); + struct mlx5_flow_group *fg; + u32 *in; + + in = kvzalloc(inlen, GFP_KERNEL); + if (!in) + return ERR_PTR(-ENOMEM); + + MLX5_SET(create_flow_group_in, in, start_flow_index, + MLX5_ESW_BRIDGE_MCAST_TABLE_FWD_GRP_IDX_FROM); + MLX5_SET(create_flow_group_in, in, end_flow_index, + MLX5_ESW_BRIDGE_MCAST_TABLE_FWD_GRP_IDX_TO); + + fg = mlx5_create_flow_group(mcast_ft, in); + kvfree(in); + if (IS_ERR(fg)) + esw_warn(esw->dev, + "Failed to create forward flow group for bridge mcast table (err=%pe)\n", + fg); + + return fg; +} + +static int mlx5_esw_bridge_port_mcast_fgs_init(struct mlx5_esw_bridge_port *port) +{ + struct mlx5_eswitch *esw = port->bridge->br_offloads->esw; + struct mlx5_flow_table *mcast_ft = port->mcast.ft; + struct mlx5_flow_group *fwd_fg, *filter_fg; + int err; + + filter_fg = mlx5_esw_bridge_mcast_filter_fg_create(esw, mcast_ft); + if (IS_ERR(filter_fg)) + return PTR_ERR(filter_fg); + + fwd_fg = mlx5_esw_bridge_mcast_fwd_fg_create(esw, mcast_ft); + if (IS_ERR(fwd_fg)) { + err = PTR_ERR(fwd_fg); + goto err_fwd_fg; + } + + port->mcast.filter_fg = filter_fg; + port->mcast.fwd_fg = fwd_fg; + + return 0; + +err_fwd_fg: + mlx5_destroy_flow_group(filter_fg); + return err; +} + +static void mlx5_esw_bridge_port_mcast_fgs_cleanup(struct mlx5_esw_bridge_port *port) +{ + if (port->mcast.fwd_fg) + mlx5_destroy_flow_group(port->mcast.fwd_fg); + 
port->mcast.fwd_fg = NULL; + if (port->mcast.filter_fg) + mlx5_destroy_flow_group(port->mcast.filter_fg); + port->mcast.filter_fg = NULL; +} + +static struct mlx5_flow_handle * +mlx5_esw_bridge_mcast_flow_with_esw_create(struct mlx5_esw_bridge_port *port, + struct mlx5_eswitch *esw) +{ + struct mlx5_flow_act flow_act = { + .action = MLX5_FLOW_CONTEXT_ACTION_DROP, + .flags = FLOW_ACT_NO_APPEND, + }; + struct mlx5_flow_spec *rule_spec; + struct mlx5_flow_handle *handle; + + rule_spec = kvzalloc(sizeof(*rule_spec), GFP_KERNEL); + if (!rule_spec) + return ERR_PTR(-ENOMEM); + + rule_spec->match_criteria_enable = MLX5_MATCH_MISC_PARAMETERS_2; + + MLX5_SET(fte_match_param, rule_spec->match_criteria, + misc_parameters_2.metadata_reg_c_0, mlx5_eswitch_get_vport_metadata_mask()); + MLX5_SET(fte_match_param, rule_spec->match_value, misc_parameters_2.metadata_reg_c_0, + mlx5_eswitch_get_vport_metadata_for_match(esw, port->vport_num)); + + handle = mlx5_add_flow_rules(port->mcast.ft, rule_spec, &flow_act, NULL, 0); + + kvfree(rule_spec); + return handle; +} + +static struct mlx5_flow_handle * +mlx5_esw_bridge_mcast_filter_flow_create(struct mlx5_esw_bridge_port *port) +{ + return mlx5_esw_bridge_mcast_flow_with_esw_create(port, port->bridge->br_offloads->esw); +} + +static struct mlx5_flow_handle * +mlx5_esw_bridge_mcast_filter_flow_peer_create(struct mlx5_esw_bridge_port *port) +{ + struct mlx5_devcom *devcom = port->bridge->br_offloads->esw->dev->priv.devcom; + static struct mlx5_flow_handle *handle; + struct mlx5_eswitch *peer_esw; + + peer_esw = mlx5_devcom_get_peer_data(devcom, MLX5_DEVCOM_ESW_OFFLOADS); + if (!peer_esw) + return ERR_PTR(-ENODEV); + + handle = mlx5_esw_bridge_mcast_flow_with_esw_create(port, peer_esw); + + mlx5_devcom_release_peer_data(devcom, MLX5_DEVCOM_ESW_OFFLOADS); + return handle; +} + +static struct mlx5_flow_handle * +mlx5_esw_bridge_mcast_fwd_flow_create(struct mlx5_esw_bridge_port *port) +{ + struct mlx5_flow_act flow_act = { + .action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST, + .flags = FLOW_ACT_NO_APPEND, + }; + struct mlx5_flow_destination dest = { + .type = MLX5_FLOW_DESTINATION_TYPE_VPORT, + .vport.num = port->vport_num, + }; + struct mlx5_esw_bridge *bridge = port->bridge; + struct mlx5_flow_spec *rule_spec; + struct mlx5_flow_handle *handle; + + rule_spec = kvzalloc(sizeof(*rule_spec), GFP_KERNEL); + if (!rule_spec) + return ERR_PTR(-ENOMEM); + + if (MLX5_CAP_ESW_FLOWTABLE(bridge->br_offloads->esw->dev, flow_source) && + port->vport_num == MLX5_VPORT_UPLINK) + rule_spec->flow_context.flow_source = + MLX5_FLOW_CONTEXT_FLOW_SOURCE_LOCAL_VPORT; + + if (MLX5_CAP_ESW(bridge->br_offloads->esw->dev, merged_eswitch)) { + dest.vport.flags = MLX5_FLOW_DEST_VPORT_VHCA_ID; + dest.vport.vhca_id = port->esw_owner_vhca_id; + } + handle = mlx5_add_flow_rules(port->mcast.ft, rule_spec, &flow_act, &dest, 1); + + kvfree(rule_spec); + return handle; +} + +static int mlx5_esw_bridge_port_mcast_fhs_init(struct mlx5_esw_bridge_port *port) +{ + struct mlx5_flow_handle *filter_handle, *fwd_handle; + + filter_handle = (port->flags & MLX5_ESW_BRIDGE_PORT_FLAG_PEER) ? 
+ mlx5_esw_bridge_mcast_filter_flow_peer_create(port) : + mlx5_esw_bridge_mcast_filter_flow_create(port); + if (IS_ERR(filter_handle)) + return PTR_ERR(filter_handle); + + fwd_handle = mlx5_esw_bridge_mcast_fwd_flow_create(port); + if (IS_ERR(fwd_handle)) { + mlx5_del_flow_rules(filter_handle); + return PTR_ERR(fwd_handle); + } + + port->mcast.filter_handle = filter_handle; + port->mcast.fwd_handle = fwd_handle; + + return 0; +} + +static void mlx5_esw_bridge_port_mcast_fhs_cleanup(struct mlx5_esw_bridge_port *port) +{ + if (port->mcast.fwd_handle) + mlx5_del_flow_rules(port->mcast.fwd_handle); + port->mcast.fwd_handle = NULL; + if (port->mcast.filter_handle) + mlx5_del_flow_rules(port->mcast.filter_handle); + port->mcast.filter_handle = NULL; +} + +int mlx5_esw_bridge_port_mcast_init(struct mlx5_esw_bridge_port *port) +{ + struct mlx5_esw_bridge *bridge = port->bridge; + int err; + + if (!(bridge->flags & MLX5_ESW_BRIDGE_MCAST_FLAG)) + return 0; + + err = mlx5_esw_bridge_port_mcast_fts_init(port, bridge); + if (err) + return err; + + err = mlx5_esw_bridge_port_mcast_fgs_init(port); + if (err) + goto err_fgs; + + err = mlx5_esw_bridge_port_mcast_fhs_init(port); + if (err) + goto err_fhs; + return err; + +err_fhs: + mlx5_esw_bridge_port_mcast_fgs_cleanup(port); +err_fgs: + mlx5_esw_bridge_port_mcast_fts_cleanup(port); + return err; +} + +void mlx5_esw_bridge_port_mcast_cleanup(struct mlx5_esw_bridge_port *port) +{ + mlx5_esw_bridge_port_mcast_fhs_cleanup(port); + mlx5_esw_bridge_port_mcast_fgs_cleanup(port); + mlx5_esw_bridge_port_mcast_fts_cleanup(port); +} + static struct mlx5_flow_group * mlx5_esw_bridge_ingress_igmp_fg_create(struct mlx5_eswitch *esw, struct mlx5_flow_table *ingress_ft) @@ -251,6 +524,51 @@ mlx5_esw_bridge_ingress_mcast_fhs_cleanup(struct mlx5_esw_bridge_offloads *br_of br_offloads->igmp_handle = NULL; } +static int mlx5_esw_brige_mcast_init(struct mlx5_esw_bridge *bridge) +{ + struct mlx5_esw_bridge_offloads *br_offloads = bridge->br_offloads; + struct mlx5_esw_bridge_port *port, *failed; + unsigned long i; + int err; + + xa_for_each(&br_offloads->ports, i, port) { + if (port->bridge != bridge) + continue; + + err = mlx5_esw_bridge_port_mcast_init(port); + if (err) { + failed = port; + goto err_port; + } + } + return 0; + +err_port: + xa_for_each(&br_offloads->ports, i, port) { + if (port == failed) + break; + if (port->bridge != bridge) + continue; + + mlx5_esw_bridge_port_mcast_cleanup(port); + } + return err; +} + +static void mlx5_esw_brige_mcast_cleanup(struct mlx5_esw_bridge *bridge) +{ + struct mlx5_esw_bridge_offloads *br_offloads = bridge->br_offloads; + struct mlx5_esw_bridge_port *port; + unsigned long i; + + xa_for_each(&br_offloads->ports, i, port) { + if (port->bridge != bridge) + continue; + + mlx5_esw_bridge_port_mcast_cleanup(port); + } +} + static int mlx5_esw_brige_mcast_global_enable(struct mlx5_esw_bridge_offloads *br_offloads) { int err; @@ -306,11 +624,20 @@ int mlx5_esw_bridge_mcast_enable(struct mlx5_esw_bridge *bridge) return err; bridge->flags |= MLX5_ESW_BRIDGE_MCAST_FLAG; - return 0; + + err = mlx5_esw_brige_mcast_init(bridge); + if (err) { + esw_warn(bridge->br_offloads->esw->dev, "Failed to enable multicast (err=%d)\n", + err); + bridge->flags &= ~MLX5_ESW_BRIDGE_MCAST_FLAG; + mlx5_esw_brige_mcast_global_disable(bridge->br_offloads); + } + return err; } void mlx5_esw_bridge_mcast_disable(struct mlx5_esw_bridge *bridge) { + mlx5_esw_brige_mcast_cleanup(bridge); bridge->flags &= ~MLX5_ESW_BRIDGE_MCAST_FLAG; 
mlx5_esw_brige_mcast_global_disable(bridge->br_offloads); } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h index dbb935db1b3c..7fdd719f363c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h @@ -81,9 +81,23 @@ static_assert(MLX5_ESW_BRIDGE_EGRESS_TABLE_SIZE == 524288); #define MLX5_ESW_BRIDGE_SKIP_TABLE_SIZE 0 +#define MLX5_ESW_BRIDGE_MCAST_TABLE_FILTER_GRP_SIZE 1 +#define MLX5_ESW_BRIDGE_MCAST_TABLE_FWD_GRP_SIZE 1 +#define MLX5_ESW_BRIDGE_MCAST_TABLE_FILTER_GRP_IDX_FROM 0 +#define MLX5_ESW_BRIDGE_MCAST_TABLE_FILTER_GRP_IDX_TO \ + (MLX5_ESW_BRIDGE_MCAST_TABLE_FILTER_GRP_SIZE - 1) +#define MLX5_ESW_BRIDGE_MCAST_TABLE_FWD_GRP_IDX_FROM \ + (MLX5_ESW_BRIDGE_MCAST_TABLE_FILTER_GRP_IDX_TO + 1) +#define MLX5_ESW_BRIDGE_MCAST_TABLE_FWD_GRP_IDX_TO \ + (MLX5_ESW_BRIDGE_MCAST_TABLE_FWD_GRP_IDX_FROM + \ + MLX5_ESW_BRIDGE_MCAST_TABLE_FWD_GRP_SIZE - 1) + +#define MLX5_ESW_BRIDGE_MCAST_TABLE_SIZE \ + (MLX5_ESW_BRIDGE_MCAST_TABLE_FWD_GRP_IDX_TO + 1) enum { MLX5_ESW_BRIDGE_LEVEL_INGRESS_TABLE, MLX5_ESW_BRIDGE_LEVEL_EGRESS_TABLE, + MLX5_ESW_BRIDGE_LEVEL_MCAST_TABLE, MLX5_ESW_BRIDGE_LEVEL_SKIP_TABLE, }; @@ -138,6 +152,14 @@ struct mlx5_esw_bridge_port { u16 flags; struct mlx5_esw_bridge *bridge; struct xarray vlans; + struct { + struct mlx5_flow_table *ft; + struct mlx5_flow_group *filter_fg; + struct mlx5_flow_group *fwd_fg; + + struct mlx5_flow_handle *filter_handle; + struct mlx5_flow_handle *fwd_handle; + } mcast; }; struct mlx5_esw_bridge { @@ -161,6 +183,12 @@ struct mlx5_esw_bridge { u16 vlan_proto; }; +struct mlx5_flow_table *mlx5_esw_bridge_table_create(int max_fte, u32 level, + struct mlx5_eswitch *esw); + +int mlx5_esw_bridge_port_mcast_init(struct mlx5_esw_bridge_port *port); +void mlx5_esw_bridge_port_mcast_cleanup(struct mlx5_esw_bridge_port *port); + int mlx5_esw_bridge_mcast_enable(struct mlx5_esw_bridge *bridge); void mlx5_esw_bridge_mcast_disable(struct mlx5_esw_bridge *bridge); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c index 8e3da9d4fe1c..19da02c41616 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c @@ -2997,7 +2997,7 @@ static int init_fdb_root_ns(struct mlx5_flow_steering *steering) goto out_err; } - maj_prio = fs_create_prio(&steering->fdb_root_ns->ns, FDB_BR_OFFLOAD, 3); + maj_prio = fs_create_prio(&steering->fdb_root_ns->ns, FDB_BR_OFFLOAD, 4); if (IS_ERR(maj_prio)) { err = PTR_ERR(maj_prio); goto out_err; From patchwork Wed Apr 12 04:07:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13208423 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D6296C7619A for ; Wed, 12 Apr 2023 04:08:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229626AbjDLEI2 (ORCPT ); Wed, 12 Apr 2023 00:08:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43582 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229700AbjDLEIS (ORCPT ); Wed, 12 Apr 2023 00:08:18 -0400 Received: from 
dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B86EA4EDC for ; Tue, 11 Apr 2023 21:08:07 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 37F1162DAD for ; Wed, 12 Apr 2023 04:08:07 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 893EAC433D2; Wed, 12 Apr 2023 04:08:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1681272486; bh=rLauej4VmkThQvY5uBCF9fPgh5uiblN6C+aPyiwStb8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=NJpiqR9DplgbEbG2nEZl2Yr5wKTpKgViZPe3Lw/i1p2gsMaXZ7Rw6XcOthjFHL/l6 ISjvYATwRvqkCAEBtUz5LFstRm3umz8tv5HTo6q5UEygo+MKMVHOA8cXxye6UBmZrl 1b/K1jKg8IhGW1Fjour4QrPbIIjApqrAUyW20JiVRM1Gp+tGXKUgJvkAcVuXxrdLhA CHnMvqBsbV//8cYjEVkiztv5dq4dSvuQh5gSt/1Ks8XKu7mjwsy7ZHQhjEGBzSLvw4 0QlVZd2CTFCMLyOR3EhwJFzCmg7/Eze+tzygHFKquEgUW8Bv7CiJdkJBXVhUbD4E2u wGJ1KFa0e6abQ== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Vlad Buslov , Maor Dickman , Roi Dayan Subject: [net-next 07/15] net/mlx5: Bridge, support multicast VLAN pop Date: Tue, 11 Apr 2023 21:07:44 -0700 Message-Id: <20230412040752.14220-8-saeed@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230412040752.14220-1-saeed@kernel.org> References: <20230412040752.14220-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Vlad Buslov When a VLAN with the 'untagged' flag is created on a port, also provision a per-port multicast table rule to pop the VLAN during packet replication. This functionality must live in the per-port table because one subset of the ports that are members of a multicast group may require just a match on the VLAN (trunk mode), while another subset can be configured to remove the VLAN tag from packets received on the port (access mode). 
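For illustration, the table sizing that results from this layout can be sanity-checked with a small standalone sketch. This is not part of the patch: the group sizes are mirrored from the MLX5_ESW_BRIDGE_MCAST_TABLE_* macros added to bridge_priv.h below, and the short macro names used here are hypothetical stand-ins.

/* Standalone sketch (not driver code): per-port multicast table layout,
 * groups listed from highest to lowest priority. Sizes mirror the
 * bridge_priv.h hunk below.
 */
#include <assert.h>
#include <stdio.h>

#define FILTER_GRP_SIZE 1	/* drop rule matching the source port */
#define VLAN_GRP_SIZE 4095	/* one 802.1Q pop rule per possible vid */
#define QINQ_GRP_SIZE 4095	/* one 802.1ad pop rule per possible vid */
#define FWD_GRP_SIZE 1		/* catch-all forward-to-port rule */

int main(void)
{
	int size = FILTER_GRP_SIZE + VLAN_GRP_SIZE + QINQ_GRP_SIZE + FWD_GRP_SIZE;

	assert(size == 8192); /* matches the static_assert added below */
	printf("per-port mcast table size: %d\n", size);
	return 0;
}

With this priority order, a replicated packet first hits the source-port filter, then any per-VLAN pop rule installed for 'untagged' VLANs (access mode), and only falls through to the catch-all forward rule that keeps the tag intact (trunk mode).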
Signed-off-by: Vlad Buslov Reviewed-by: Maor Dickman Reviewed-by: Roi Dayan Signed-off-by: Saeed Mahameed --- .../ethernet/mellanox/mlx5/core/esw/bridge.c | 33 ++- .../mellanox/mlx5/core/esw/bridge_mcast.c | 190 +++++++++++++++++- .../mellanox/mlx5/core/esw/bridge_priv.h | 22 +- 3 files changed, 236 insertions(+), 9 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c index 4bc8c6fc394b..52c976135397 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c @@ -1095,8 +1095,21 @@ mlx5_esw_bridge_vlan_push_mark_cleanup(struct mlx5_esw_bridge_vlan *vlan, struct } static int -mlx5_esw_bridge_vlan_push_pop_create(u16 vlan_proto, u16 flags, struct mlx5_esw_bridge_vlan *vlan, - struct mlx5_eswitch *esw) +mlx5_esw_bridge_vlan_push_pop_fhs_create(u16 vlan_proto, struct mlx5_esw_bridge_port *port, + struct mlx5_esw_bridge_vlan *vlan) +{ + return mlx5_esw_bridge_vlan_mcast_init(vlan_proto, port, vlan); +} + +static void +mlx5_esw_bridge_vlan_push_pop_fhs_cleanup(struct mlx5_esw_bridge_vlan *vlan) +{ + mlx5_esw_bridge_vlan_mcast_cleanup(vlan); +} + +static int +mlx5_esw_bridge_vlan_push_pop_create(u16 vlan_proto, u16 flags, struct mlx5_esw_bridge_port *port, + struct mlx5_esw_bridge_vlan *vlan, struct mlx5_eswitch *esw) { int err; @@ -1114,10 +1127,16 @@ mlx5_esw_bridge_vlan_push_pop_create(u16 vlan_proto, u16 flags, struct mlx5_esw_ err = mlx5_esw_bridge_vlan_pop_create(vlan, esw); if (err) goto err_vlan_pop; + + err = mlx5_esw_bridge_vlan_push_pop_fhs_create(vlan_proto, port, vlan); + if (err) + goto err_vlan_pop_fhs; } return 0; +err_vlan_pop_fhs: + mlx5_esw_bridge_vlan_pop_cleanup(vlan, esw); err_vlan_pop: if (vlan->pkt_mod_hdr_push_mark) mlx5_esw_bridge_vlan_push_mark_cleanup(vlan, esw); @@ -1142,7 +1161,7 @@ mlx5_esw_bridge_vlan_create(u16 vlan_proto, u16 vid, u16 flags, struct mlx5_esw_ vlan->flags = flags; INIT_LIST_HEAD(&vlan->fdb_list); - err = mlx5_esw_bridge_vlan_push_pop_create(vlan_proto, flags, vlan, esw); + err = mlx5_esw_bridge_vlan_push_pop_create(vlan_proto, flags, port, vlan, esw); if (err) goto err_vlan_push_pop; @@ -1154,6 +1173,8 @@ mlx5_esw_bridge_vlan_create(u16 vlan_proto, u16 vid, u16 flags, struct mlx5_esw_ return vlan; err_xa_insert: + if (vlan->mcast_handle) + mlx5_esw_bridge_vlan_push_pop_fhs_cleanup(vlan); if (vlan->pkt_reformat_pop) mlx5_esw_bridge_vlan_pop_cleanup(vlan, esw); if (vlan->pkt_mod_hdr_push_mark) @@ -1180,6 +1201,8 @@ static void mlx5_esw_bridge_vlan_flush(struct mlx5_esw_bridge_vlan *vlan, list_for_each_entry_safe(entry, tmp, &vlan->fdb_list, vlan_list) mlx5_esw_bridge_fdb_entry_notify_and_cleanup(entry, bridge); + if (vlan->mcast_handle) + mlx5_esw_bridge_vlan_push_pop_fhs_cleanup(vlan); if (vlan->pkt_reformat_pop) mlx5_esw_bridge_vlan_pop_cleanup(vlan, esw); if (vlan->pkt_mod_hdr_push_mark) @@ -1218,8 +1241,8 @@ static int mlx5_esw_bridge_port_vlans_recreate(struct mlx5_esw_bridge_port *port xa_for_each(&port->vlans, i, vlan) { mlx5_esw_bridge_vlan_flush(vlan, bridge); - err = mlx5_esw_bridge_vlan_push_pop_create(bridge->vlan_proto, vlan->flags, vlan, - br_offloads->esw); + err = mlx5_esw_bridge_vlan_push_pop_create(bridge->vlan_proto, vlan->flags, port, + vlan, br_offloads->esw); if (err) { esw_warn(br_offloads->esw->dev, "Failed to create VLAN=%u(proto=%x) push/pop actions (vport=%u,err=%d)\n", diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c 
b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c index 4f54cb41ed19..99e2f9fc11a2 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c @@ -62,6 +62,60 @@ mlx5_esw_bridge_mcast_filter_fg_create(struct mlx5_eswitch *esw, return fg; } +static struct mlx5_flow_group * +mlx5_esw_bridge_mcast_vlan_proto_fg_create(unsigned int from, unsigned int to, u16 vlan_proto, + struct mlx5_eswitch *esw, + struct mlx5_flow_table *mcast_ft) +{ + int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in); + struct mlx5_flow_group *fg; + u32 *in, *match; + + in = kvzalloc(inlen, GFP_KERNEL); + if (!in) + return ERR_PTR(-ENOMEM); + + MLX5_SET(create_flow_group_in, in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS); + match = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria); + + if (vlan_proto == ETH_P_8021Q) + MLX5_SET_TO_ONES(fte_match_param, match, outer_headers.cvlan_tag); + else if (vlan_proto == ETH_P_8021AD) + MLX5_SET_TO_ONES(fte_match_param, match, outer_headers.svlan_tag); + MLX5_SET_TO_ONES(fte_match_param, match, outer_headers.first_vid); + + MLX5_SET(create_flow_group_in, in, start_flow_index, from); + MLX5_SET(create_flow_group_in, in, end_flow_index, to); + + fg = mlx5_create_flow_group(mcast_ft, in); + kvfree(in); + if (IS_ERR(fg)) + esw_warn(esw->dev, + "Failed to create VLAN(proto=%x) flow group for bridge mcast table (err=%pe)\n", + vlan_proto, fg); + + return fg; +} + +static struct mlx5_flow_group * +mlx5_esw_bridge_mcast_vlan_fg_create(struct mlx5_eswitch *esw, struct mlx5_flow_table *mcast_ft) +{ + unsigned int from = MLX5_ESW_BRIDGE_MCAST_TABLE_VLAN_GRP_IDX_FROM; + unsigned int to = MLX5_ESW_BRIDGE_MCAST_TABLE_VLAN_GRP_IDX_TO; + + return mlx5_esw_bridge_mcast_vlan_proto_fg_create(from, to, ETH_P_8021Q, esw, mcast_ft); +} + +static struct mlx5_flow_group * +mlx5_esw_bridge_mcast_qinq_fg_create(struct mlx5_eswitch *esw, + struct mlx5_flow_table *mcast_ft) +{ + unsigned int from = MLX5_ESW_BRIDGE_MCAST_TABLE_QINQ_GRP_IDX_FROM; + unsigned int to = MLX5_ESW_BRIDGE_MCAST_TABLE_QINQ_GRP_IDX_TO; + + return mlx5_esw_bridge_mcast_vlan_proto_fg_create(from, to, ETH_P_8021AD, esw, mcast_ft); +} + static struct mlx5_flow_group * mlx5_esw_bridge_mcast_fwd_fg_create(struct mlx5_eswitch *esw, struct mlx5_flow_table *mcast_ft) @@ -91,15 +145,27 @@ mlx5_esw_bridge_mcast_fwd_fg_create(struct mlx5_eswitch *esw, static int mlx5_esw_bridge_port_mcast_fgs_init(struct mlx5_esw_bridge_port *port) { + struct mlx5_flow_group *fwd_fg, *qinq_fg, *vlan_fg, *filter_fg; struct mlx5_eswitch *esw = port->bridge->br_offloads->esw; struct mlx5_flow_table *mcast_ft = port->mcast.ft; - struct mlx5_flow_group *fwd_fg, *filter_fg; int err; filter_fg = mlx5_esw_bridge_mcast_filter_fg_create(esw, mcast_ft); if (IS_ERR(filter_fg)) return PTR_ERR(filter_fg); + vlan_fg = mlx5_esw_bridge_mcast_vlan_fg_create(esw, mcast_ft); + if (IS_ERR(vlan_fg)) { + err = PTR_ERR(vlan_fg); + goto err_vlan_fg; + } + + qinq_fg = mlx5_esw_bridge_mcast_qinq_fg_create(esw, mcast_ft); + if (IS_ERR(qinq_fg)) { + err = PTR_ERR(qinq_fg); + goto err_qinq_fg; + } + fwd_fg = mlx5_esw_bridge_mcast_fwd_fg_create(esw, mcast_ft); if (IS_ERR(fwd_fg)) { err = PTR_ERR(fwd_fg); @@ -107,11 +173,17 @@ static int mlx5_esw_bridge_port_mcast_fgs_init(struct mlx5_esw_bridge_port *port } port->mcast.filter_fg = filter_fg; + port->mcast.vlan_fg = vlan_fg; + port->mcast.qinq_fg = qinq_fg; port->mcast.fwd_fg = fwd_fg; return 0; err_fwd_fg: + mlx5_destroy_flow_group(qinq_fg); 
+err_qinq_fg: + mlx5_destroy_flow_group(vlan_fg); +err_vlan_fg: mlx5_destroy_flow_group(filter_fg); return err; } @@ -121,6 +193,12 @@ static void mlx5_esw_bridge_port_mcast_fgs_cleanup(struct mlx5_esw_bridge_port * if (port->mcast.fwd_fg) mlx5_destroy_flow_group(port->mcast.fwd_fg); port->mcast.fwd_fg = NULL; + if (port->mcast.qinq_fg) + mlx5_destroy_flow_group(port->mcast.qinq_fg); + port->mcast.qinq_fg = NULL; + if (port->mcast.vlan_fg) + mlx5_destroy_flow_group(port->mcast.vlan_fg); + port->mcast.vlan_fg = NULL; if (port->mcast.filter_fg) mlx5_destroy_flow_group(port->mcast.filter_fg); port->mcast.filter_fg = NULL; @@ -177,6 +255,82 @@ mlx5_esw_bridge_mcast_filter_flow_peer_create(struct mlx5_esw_bridge_port *port) return handle; } +static struct mlx5_flow_handle * +mlx5_esw_bridge_mcast_vlan_flow_create(u16 vlan_proto, struct mlx5_esw_bridge_port *port, + struct mlx5_esw_bridge_vlan *vlan) +{ + struct mlx5_flow_act flow_act = { + .action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST, + .flags = FLOW_ACT_NO_APPEND, + }; + struct mlx5_flow_destination dest = { + .type = MLX5_FLOW_DESTINATION_TYPE_VPORT, + .vport.num = port->vport_num, + }; + struct mlx5_esw_bridge *bridge = port->bridge; + struct mlx5_flow_spec *rule_spec; + struct mlx5_flow_handle *handle; + + rule_spec = kvzalloc(sizeof(*rule_spec), GFP_KERNEL); + if (!rule_spec) + return ERR_PTR(-ENOMEM); + + if (MLX5_CAP_ESW_FLOWTABLE(bridge->br_offloads->esw->dev, flow_source) && + port->vport_num == MLX5_VPORT_UPLINK) + rule_spec->flow_context.flow_source = + MLX5_FLOW_CONTEXT_FLOW_SOURCE_LOCAL_VPORT; + rule_spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS; + + flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT; + flow_act.pkt_reformat = vlan->pkt_reformat_pop; + + if (vlan_proto == ETH_P_8021Q) { + MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria, + outer_headers.cvlan_tag); + MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_value, + outer_headers.cvlan_tag); + } else if (vlan_proto == ETH_P_8021AD) { + MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria, + outer_headers.svlan_tag); + MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_value, + outer_headers.svlan_tag); + } + MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria, outer_headers.first_vid); + MLX5_SET(fte_match_param, rule_spec->match_value, outer_headers.first_vid, vlan->vid); + + if (MLX5_CAP_ESW(bridge->br_offloads->esw->dev, merged_eswitch)) { + dest.vport.flags = MLX5_FLOW_DEST_VPORT_VHCA_ID; + dest.vport.vhca_id = port->esw_owner_vhca_id; + } + handle = mlx5_add_flow_rules(port->mcast.ft, rule_spec, &flow_act, &dest, 1); + + kvfree(rule_spec); + return handle; +} + +int mlx5_esw_bridge_vlan_mcast_init(u16 vlan_proto, struct mlx5_esw_bridge_port *port, + struct mlx5_esw_bridge_vlan *vlan) +{ + struct mlx5_flow_handle *handle; + + if (!(port->bridge->flags & MLX5_ESW_BRIDGE_MCAST_FLAG)) + return 0; + + handle = mlx5_esw_bridge_mcast_vlan_flow_create(vlan_proto, port, vlan); + if (IS_ERR(handle)) + return PTR_ERR(handle); + + vlan->mcast_handle = handle; + return 0; +} + +void mlx5_esw_bridge_vlan_mcast_cleanup(struct mlx5_esw_bridge_vlan *vlan) +{ + if (vlan->mcast_handle) + mlx5_del_flow_rules(vlan->mcast_handle); + vlan->mcast_handle = NULL; +} + static struct mlx5_flow_handle * mlx5_esw_bridge_mcast_fwd_flow_create(struct mlx5_esw_bridge_port *port) { @@ -214,6 +368,10 @@ mlx5_esw_bridge_mcast_fwd_flow_create(struct mlx5_esw_bridge_port *port) static int mlx5_esw_bridge_port_mcast_fhs_init(struct mlx5_esw_bridge_port *port) { 
struct mlx5_flow_handle *filter_handle, *fwd_handle; + struct mlx5_esw_bridge_vlan *vlan, *failed; + unsigned long index; + int err; + filter_handle = (port->flags & MLX5_ESW_BRIDGE_PORT_FLAG_PEER) ? mlx5_esw_bridge_mcast_filter_flow_peer_create(port) : @@ -223,18 +381,44 @@ static int mlx5_esw_bridge_port_mcast_fhs_init(struct mlx5_esw_bridge_port *port fwd_handle = mlx5_esw_bridge_mcast_fwd_flow_create(port); if (IS_ERR(fwd_handle)) { - mlx5_del_flow_rules(filter_handle); - return PTR_ERR(fwd_handle); + err = PTR_ERR(fwd_handle); + goto err_fwd; + } + + xa_for_each(&port->vlans, index, vlan) { + err = mlx5_esw_bridge_vlan_mcast_init(port->bridge->vlan_proto, port, vlan); + if (err) { + failed = vlan; + goto err_vlan; + } } port->mcast.filter_handle = filter_handle; port->mcast.fwd_handle = fwd_handle; return 0; + +err_vlan: + xa_for_each(&port->vlans, index, vlan) { + if (vlan == failed) + break; + + mlx5_esw_bridge_vlan_mcast_cleanup(vlan); + } + mlx5_del_flow_rules(fwd_handle); +err_fwd: + mlx5_del_flow_rules(filter_handle); + return err; } static void mlx5_esw_bridge_port_mcast_fhs_cleanup(struct mlx5_esw_bridge_port *port) { + struct mlx5_esw_bridge_vlan *vlan; + unsigned long index; + + xa_for_each(&port->vlans, index, vlan) + mlx5_esw_bridge_vlan_mcast_cleanup(vlan); + if (port->mcast.fwd_handle) mlx5_del_flow_rules(port->mcast.fwd_handle); port->mcast.fwd_handle = NULL; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h index 7fdd719f363c..36ff32001ce8 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h @@ -83,17 +83,31 @@ static_assert(MLX5_ESW_BRIDGE_EGRESS_TABLE_SIZE == 524288); #define MLX5_ESW_BRIDGE_MCAST_TABLE_FILTER_GRP_SIZE 1 #define MLX5_ESW_BRIDGE_MCAST_TABLE_FWD_GRP_SIZE 1 +#define MLX5_ESW_BRIDGE_MCAST_TABLE_VLAN_GRP_SIZE 4095 +#define MLX5_ESW_BRIDGE_MCAST_TABLE_QINQ_GRP_SIZE MLX5_ESW_BRIDGE_MCAST_TABLE_VLAN_GRP_SIZE #define MLX5_ESW_BRIDGE_MCAST_TABLE_FILTER_GRP_IDX_FROM 0 #define MLX5_ESW_BRIDGE_MCAST_TABLE_FILTER_GRP_IDX_TO \ (MLX5_ESW_BRIDGE_MCAST_TABLE_FILTER_GRP_SIZE - 1) -#define MLX5_ESW_BRIDGE_MCAST_TABLE_FWD_GRP_IDX_FROM \ +#define MLX5_ESW_BRIDGE_MCAST_TABLE_VLAN_GRP_IDX_FROM \ (MLX5_ESW_BRIDGE_MCAST_TABLE_FILTER_GRP_IDX_TO + 1) +#define MLX5_ESW_BRIDGE_MCAST_TABLE_VLAN_GRP_IDX_TO \ + (MLX5_ESW_BRIDGE_MCAST_TABLE_VLAN_GRP_IDX_FROM + \ + MLX5_ESW_BRIDGE_MCAST_TABLE_VLAN_GRP_SIZE - 1) +#define MLX5_ESW_BRIDGE_MCAST_TABLE_QINQ_GRP_IDX_FROM \ + (MLX5_ESW_BRIDGE_MCAST_TABLE_VLAN_GRP_IDX_TO + 1) +#define MLX5_ESW_BRIDGE_MCAST_TABLE_QINQ_GRP_IDX_TO \ + (MLX5_ESW_BRIDGE_MCAST_TABLE_QINQ_GRP_IDX_FROM + \ + MLX5_ESW_BRIDGE_MCAST_TABLE_QINQ_GRP_SIZE - 1) +#define MLX5_ESW_BRIDGE_MCAST_TABLE_FWD_GRP_IDX_FROM \ + (MLX5_ESW_BRIDGE_MCAST_TABLE_QINQ_GRP_IDX_TO + 1) #define MLX5_ESW_BRIDGE_MCAST_TABLE_FWD_GRP_IDX_TO \ (MLX5_ESW_BRIDGE_MCAST_TABLE_FWD_GRP_IDX_FROM + \ MLX5_ESW_BRIDGE_MCAST_TABLE_FWD_GRP_SIZE - 1) #define MLX5_ESW_BRIDGE_MCAST_TABLE_SIZE \ (MLX5_ESW_BRIDGE_MCAST_TABLE_FWD_GRP_IDX_TO + 1) +static_assert(MLX5_ESW_BRIDGE_MCAST_TABLE_SIZE == 8192); + enum { MLX5_ESW_BRIDGE_LEVEL_INGRESS_TABLE, MLX5_ESW_BRIDGE_LEVEL_EGRESS_TABLE, @@ -144,6 +158,7 @@ struct mlx5_esw_bridge_vlan { struct mlx5_pkt_reformat *pkt_reformat_push; struct mlx5_pkt_reformat *pkt_reformat_pop; struct mlx5_modify_hdr *pkt_mod_hdr_push_mark; + struct mlx5_flow_handle *mcast_handle; }; struct mlx5_esw_bridge_port { @@ -155,6 +170,8 
@@ struct mlx5_esw_bridge_port { struct { struct mlx5_flow_table *ft; struct mlx5_flow_group *filter_fg; + struct mlx5_flow_group *vlan_fg; + struct mlx5_flow_group *qinq_fg; struct mlx5_flow_group *fwd_fg; struct mlx5_flow_handle *filter_handle; @@ -188,6 +205,9 @@ struct mlx5_flow_table *mlx5_esw_bridge_table_create(int max_fte, u32 level, int mlx5_esw_bridge_port_mcast_init(struct mlx5_esw_bridge_port *port); void mlx5_esw_bridge_port_mcast_cleanup(struct mlx5_esw_bridge_port *port); +int mlx5_esw_bridge_vlan_mcast_init(u16 vlan_proto, struct mlx5_esw_bridge_port *port, + struct mlx5_esw_bridge_vlan *vlan); +void mlx5_esw_bridge_vlan_mcast_cleanup(struct mlx5_esw_bridge_vlan *vlan); int mlx5_esw_bridge_mcast_enable(struct mlx5_esw_bridge *bridge); void mlx5_esw_bridge_mcast_disable(struct mlx5_esw_bridge *bridge); From patchwork Wed Apr 12 04:07:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13208425 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 44A84C7619A for ; Wed, 12 Apr 2023 04:08:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229751AbjDLEIb (ORCPT ); Wed, 12 Apr 2023 00:08:31 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43958 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229706AbjDLEIS (ORCPT ); Wed, 12 Apr 2023 00:08:18 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 52E481BE4 for ; Tue, 11 Apr 2023 21:08:08 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 2580262DA3 for ; Wed, 12 Apr 2023 04:08:08 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6EF26C433D2; Wed, 12 Apr 2023 04:08:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1681272487; bh=QqHuyNmvVMmBBY2OtpT6NOKG9cON+wAQmLEOW5Gck3s=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=t0HeAmbr5dXGV8Cig3A4aBl6eQwqGpYJa6gV3RCgg5HVCJZLEi9PjKtrSiz2eBY+o LUnGD/keYkpXgIGypPGD83q7SAf/8gs9B286mvQQxv9/uN8G6ofM3k0LZ/qv+EEG2O mtrD6A8VasHRDIr1Kf8t61k6RHCOgQ2kBHcveY9Nmt0r1+mIh1PAEO4jadg4idS3aV dHFIz4ln98LCXhNfOjD24ZvjjYXRKjZOJoE+w4fWP2vpcU+qQYEL8NUtDQ9TWVBInw J5bwTlaokrY6DHr4E2CM6w0jC2A8kr3shVTNK6fdS2YQa0qMcfM0UMFXM2LSjoyzhR GtQKhdMkEitcA== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Vlad Buslov , Maor Dickman , Roi Dayan Subject: [net-next 08/15] net/mlx5: Bridge, implement mdb offload Date: Tue, 11 Apr 2023 21:07:45 -0700 Message-Id: <20230412040752.14220-9-saeed@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230412040752.14220-1-saeed@kernel.org> References: <20230412040752.14220-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Vlad Buslov Implement support for add/del SWITCHDEV_OBJ_ID_PORT_MDB events. 
For MDB destination addresses, configure egress table rules that replicate to the per-port multicast tables of all ports that are members of the multicast group, as illustrated by the 'MDB1' rule in the following diagram: +--------+--+ +---------------------------------------> Port 1 | | | +-^------+--+ | | | | +-----------------------------------------+ | +---------------------------+ | | EGRESS table | | +--> PORT 1 multicast table | | +----------------------------------+ +-----------------------------------------+ | | +---------------------------+ | | INGRESS table | | | | | | | | +----------------------------------+ | dst_mac=P1,vlan=X -> pop vlan, goto P1 +--+ | | FG0: | | | | | dst_mac=P1,vlan=Y -> pop vlan, goto P1 | | | src_port=dst_port -> drop | | | src_mac=M1,vlan=X -> goto egress +---> dst_mac=P2,vlan=X -> pop vlan, goto P2 +--+ | | FG1: | | | ... | | dst_mac=P2,vlan=Y -> goto P2 | | | | VLAN X -> pop, goto port | | | | | dst_mac=MDB1,vlan=Y -> goto mcast P1,P2 +-----+ | ... | | +----------------------------------+ | | | | | VLAN Y -> pop, goto port +-------+ +-----------------------------------------+ | | | FG3: | | | | matchall -> goto port | | | | | | | +---------------------------+ | | | | | | +--------+--+ +---------------------------------------> Port 2 | | | +-^------+--+ | | | | | +---------------------------+ | +--> PORT 2 multicast table | | +---------------------------+ | | | | | FG0: | | | src_port=dst_port -> drop | | | FG1: | | | VLAN X -> pop, goto port | | | ... | | | | | | FG3: | | | matchall -> goto port +-------+ | | +---------------------------+ MDB state is managed by extending the mlx5 bridge to store each entry in the mlx5_esw_bridge->mdb_list linked list (used to iterate over all offloaded MDB entries) and the mlx5_esw_bridge->mdb_ht hash table (used to look up an existing MDB entry by MAC+VLAN). Every MDB entry can be attached to an arbitrary number of bridge ports, which are stored in the mlx5_esw_bridge_mdb_entry->ports xarray in order to allow both efficient lookup of a port and iteration over all ports the entry is attached to. Every time an MDB entry is attached to or detached from a port, the hardware rule is recreated with the list of destinations corresponding to all attached ports. When the entry is detached from its last port, it is removed from the MDB and destroyed, which means the ports xarray also acts as an implicit reference counting mechanism. 
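The lifetime semantics described above can be modeled with a short standalone sketch. This is illustrative only, not driver code: struct mdb_entry and the two helpers are simplified stand-ins for mlx5_esw_bridge_mdb_entry and the attach/detach handlers added by this patch.

/* Standalone model (not driver code) of the implicit reference counting:
 * an MDB entry lives for as long as at least one port is attached, and
 * the egress rule is rebuilt on every attach/detach.
 */
#include <stdbool.h>
#include <stdio.h>

struct mdb_entry {
	int num_ports; /* mirrors mlx5_esw_bridge_mdb_entry->num_ports */
};

static void mdb_attach(struct mdb_entry *entry)
{
	entry->num_ports++; /* port inserted into the entry's xarray */
	/* the driver then recreates the egress rule with num_ports
	 * destinations, one per attached port's multicast table
	 */
}

static bool mdb_detach(struct mdb_entry *entry)
{
	if (--entry->num_ports == 0)
		return true; /* last port gone: entry removed and freed */
	/* otherwise the rule is recreated with the remaining ports */
	return false;
}

int main(void)
{
	struct mdb_entry entry = { .num_ports = 0 };

	mdb_attach(&entry); /* port 1 joins the group */
	mdb_attach(&entry); /* port 2 joins the group */
	mdb_detach(&entry); /* port 1 leaves: rule rebuilt, entry kept */
	printf("entry destroyed: %d\n", mdb_detach(&entry)); /* prints 1 */
	return 0;
}

This rebuild-on-change approach is also why an offload failure during attach or detach is only logged rather than propagated: a single MDB entry can be used by multiple ports, so the software state is kept consistent even when the hardware rule could not be recreated.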
Signed-off-by: Vlad Buslov Reviewed-by: Maor Dickman Reviewed-by: Roi Dayan Signed-off-by: Saeed Mahameed --- .../mellanox/mlx5/core/en/rep/bridge.c | 12 + .../ethernet/mellanox/mlx5/core/esw/bridge.c | 75 ++++- .../ethernet/mellanox/mlx5/core/esw/bridge.h | 6 + .../mellanox/mlx5/core/esw/bridge_mcast.c | 295 ++++++++++++++++++ .../mellanox/mlx5/core/esw/bridge_priv.h | 29 ++ 5 files changed, 413 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c index 9701eb3c5f1b..dd0dd3f028a3 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c @@ -220,6 +220,7 @@ mlx5_esw_bridge_port_obj_add(struct net_device *dev, struct netlink_ext_ack *extack = switchdev_notifier_info_to_extack(&port_obj_info->info); const struct switchdev_obj *obj = port_obj_info->obj; const struct switchdev_obj_port_vlan *vlan; + const struct switchdev_obj_port_mdb *mdb; u16 vport_num, esw_owner_vhca_id; int err; @@ -235,6 +236,11 @@ mlx5_esw_bridge_port_obj_add(struct net_device *dev, err = mlx5_esw_bridge_port_vlan_add(vport_num, esw_owner_vhca_id, vlan->vid, vlan->flags, br_offloads, extack); break; + case SWITCHDEV_OBJ_ID_PORT_MDB: + mdb = SWITCHDEV_OBJ_PORT_MDB(obj); + err = mlx5_esw_bridge_port_mdb_add(vport_num, esw_owner_vhca_id, mdb->addr, + mdb->vid, br_offloads, extack); + break; default: return -EOPNOTSUPP; } @@ -248,6 +254,7 @@ mlx5_esw_bridge_port_obj_del(struct net_device *dev, { const struct switchdev_obj *obj = port_obj_info->obj; const struct switchdev_obj_port_vlan *vlan; + const struct switchdev_obj_port_mdb *mdb; u16 vport_num, esw_owner_vhca_id; if (!mlx5_esw_bridge_rep_vport_num_vhca_id_get(dev, br_offloads->esw, &vport_num, @@ -261,6 +268,11 @@ mlx5_esw_bridge_port_obj_del(struct net_device *dev, vlan = SWITCHDEV_OBJ_PORT_VLAN(obj); mlx5_esw_bridge_port_vlan_del(vport_num, esw_owner_vhca_id, vlan->vid, br_offloads); break; + case SWITCHDEV_OBJ_ID_PORT_MDB: + mdb = SWITCHDEV_OBJ_PORT_MDB(obj); + mlx5_esw_bridge_port_mdb_del(vport_num, esw_owner_vhca_id, mdb->addr, mdb->vid, + br_offloads); + break; default: return -EOPNOTSUPP; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c index 52c976135397..be4787539c6c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c @@ -840,6 +840,10 @@ static struct mlx5_esw_bridge *mlx5_esw_bridge_create(int ifindex, if (err) goto err_fdb_ht; + err = mlx5_esw_bridge_mdb_init(bridge); + if (err) + goto err_mdb_ht; + INIT_LIST_HEAD(&bridge->fdb_list); bridge->ifindex = ifindex; bridge->refcnt = 1; @@ -849,6 +853,8 @@ static struct mlx5_esw_bridge *mlx5_esw_bridge_create(int ifindex, return bridge; +err_mdb_ht: + rhashtable_destroy(&bridge->fdb_ht); err_fdb_ht: mlx5_esw_bridge_egress_table_cleanup(bridge); err_egress_tbl: @@ -870,6 +876,7 @@ static void mlx5_esw_bridge_put(struct mlx5_esw_bridge_offloads *br_offloads, mlx5_esw_bridge_egress_table_cleanup(bridge); mlx5_esw_bridge_mcast_disable(bridge); list_del(&bridge->list); + mlx5_esw_bridge_mdb_cleanup(bridge); rhashtable_destroy(&bridge->fdb_ht); kvfree(bridge); @@ -909,7 +916,7 @@ static unsigned long mlx5_esw_bridge_port_key_from_data(u16 vport_num, u16 esw_o return vport_num | (unsigned long)esw_owner_vhca_id << sizeof(vport_num) * BITS_PER_BYTE; } -static unsigned long mlx5_esw_bridge_port_key(struct 
mlx5_esw_bridge_port *port) +unsigned long mlx5_esw_bridge_port_key(struct mlx5_esw_bridge_port *port) { return mlx5_esw_bridge_port_key_from_data(port->vport_num, port->esw_owner_vhca_id); } @@ -1192,7 +1199,8 @@ static void mlx5_esw_bridge_vlan_erase(struct mlx5_esw_bridge_port *port, xa_erase(&port->vlans, vlan->vid); } -static void mlx5_esw_bridge_vlan_flush(struct mlx5_esw_bridge_vlan *vlan, +static void mlx5_esw_bridge_vlan_flush(struct mlx5_esw_bridge_port *port, + struct mlx5_esw_bridge_vlan *vlan, struct mlx5_esw_bridge *bridge) { struct mlx5_eswitch *esw = bridge->br_offloads->esw; @@ -1200,6 +1208,7 @@ static void mlx5_esw_bridge_vlan_flush(struct mlx5_esw_bridge_vlan *vlan, list_for_each_entry_safe(entry, tmp, &vlan->fdb_list, vlan_list) mlx5_esw_bridge_fdb_entry_notify_and_cleanup(entry, bridge); + mlx5_esw_bridge_port_mdb_vlan_flush(port, vlan); if (vlan->mcast_handle) mlx5_esw_bridge_vlan_push_pop_fhs_cleanup(vlan); @@ -1216,7 +1225,7 @@ static void mlx5_esw_bridge_vlan_cleanup(struct mlx5_esw_bridge_port *port, struct mlx5_esw_bridge *bridge) { trace_mlx5_esw_bridge_vlan_cleanup(vlan); - mlx5_esw_bridge_vlan_flush(vlan, bridge); + mlx5_esw_bridge_vlan_flush(port, vlan, bridge); mlx5_esw_bridge_vlan_erase(port, vlan); kvfree(vlan); } @@ -1240,7 +1249,7 @@ static int mlx5_esw_bridge_port_vlans_recreate(struct mlx5_esw_bridge_port *port int err; xa_for_each(&port->vlans, i, vlan) { - mlx5_esw_bridge_vlan_flush(vlan, bridge); + mlx5_esw_bridge_vlan_flush(port, vlan, bridge); err = mlx5_esw_bridge_vlan_push_pop_create(bridge->vlan_proto, vlan->flags, port, vlan, br_offloads->esw); if (err) { @@ -1450,6 +1459,7 @@ int mlx5_esw_bridge_vlan_filtering_set(u16 vport_num, u16 esw_owner_vhca_id, boo return 0; mlx5_esw_bridge_fdb_flush(bridge); + mlx5_esw_bridge_mdb_flush(bridge); if (enable) bridge->flags |= MLX5_ESW_BRIDGE_VLAN_FILTERING_FLAG; else @@ -1476,6 +1486,7 @@ int mlx5_esw_bridge_vlan_proto_set(u16 vport_num, u16 esw_owner_vhca_id, u16 pro } mlx5_esw_bridge_fdb_flush(bridge); + mlx5_esw_bridge_mdb_flush(bridge); bridge->vlan_proto = proto; mlx5_esw_bridge_vlans_recreate(bridge); @@ -1792,6 +1803,62 @@ void mlx5_esw_bridge_update(struct mlx5_esw_bridge_offloads *br_offloads) } } +int mlx5_esw_bridge_port_mdb_add(u16 vport_num, u16 esw_owner_vhca_id, const unsigned char *addr, + u16 vid, struct mlx5_esw_bridge_offloads *br_offloads, + struct netlink_ext_ack *extack) +{ + struct mlx5_esw_bridge_vlan *vlan; + struct mlx5_esw_bridge_port *port; + struct mlx5_esw_bridge *bridge; + int err; + + port = mlx5_esw_bridge_port_lookup(vport_num, esw_owner_vhca_id, br_offloads); + if (!port) { + esw_warn(br_offloads->esw->dev, + "Failed to lookup bridge port to add MDB (MAC=%pM,vport=%u)\n", + addr, vport_num); + NL_SET_ERR_MSG_FMT_MOD(extack, + "Failed to lookup bridge port to add MDB (MAC=%pM,vport=%u)\n", + addr, vport_num); + return -EINVAL; + } + + bridge = port->bridge; + if (bridge->flags & MLX5_ESW_BRIDGE_VLAN_FILTERING_FLAG && vid) { + vlan = mlx5_esw_bridge_vlan_lookup(vid, port); + if (!vlan) { + esw_warn(br_offloads->esw->dev, + "Failed to lookup bridge port vlan metadata to create MDB (MAC=%pM,vid=%u,vport=%u)\n", + addr, vid, vport_num); + NL_SET_ERR_MSG_FMT_MOD(extack, + "Failed to lookup bridge port vlan metadata to create MDB (MAC=%pM,vid=%u,vport=%u)\n", + addr, vid, vport_num); + return -EINVAL; + } + } + + err = mlx5_esw_bridge_port_mdb_attach(port, addr, vid); + if (err) { + NL_SET_ERR_MSG_FMT_MOD(extack, "Failed to add MDB (MAC=%pM,vid=%u,vport=%u)\n", + addr, vid, 
vport_num); + return err; + } + + return 0; +} + +void mlx5_esw_bridge_port_mdb_del(u16 vport_num, u16 esw_owner_vhca_id, const unsigned char *addr, + u16 vid, struct mlx5_esw_bridge_offloads *br_offloads) +{ + struct mlx5_esw_bridge_port *port; + + port = mlx5_esw_bridge_port_lookup(vport_num, esw_owner_vhca_id, br_offloads); + if (!port) + return; + + mlx5_esw_bridge_port_mdb_detach(port, addr, vid); +} + static void mlx5_esw_bridge_flush(struct mlx5_esw_bridge_offloads *br_offloads) { struct mlx5_esw_bridge_port *port; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.h index b18f137173d9..9cab66467289 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.h @@ -79,4 +79,10 @@ int mlx5_esw_bridge_port_vlan_add(u16 vport_num, u16 esw_owner_vhca_id, u16 vid, void mlx5_esw_bridge_port_vlan_del(u16 vport_num, u16 esw_owner_vhca_id, u16 vid, struct mlx5_esw_bridge_offloads *br_offloads); +int mlx5_esw_bridge_port_mdb_add(u16 vport_num, u16 esw_owner_vhca_id, const unsigned char *addr, + u16 vid, struct mlx5_esw_bridge_offloads *br_offloads, + struct netlink_ext_ack *extack); +void mlx5_esw_bridge_port_mdb_del(u16 vport_num, u16 esw_owner_vhca_id, const unsigned char *addr, + u16 vid, struct mlx5_esw_bridge_offloads *br_offloads); + #endif /* __MLX5_ESW_BRIDGE_H__ */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c index 99e2f9fc11a2..d17fe6d374b5 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c @@ -6,6 +6,300 @@ #include "eswitch.h" #include "bridge_priv.h" +static const struct rhashtable_params mdb_ht_params = { + .key_offset = offsetof(struct mlx5_esw_bridge_mdb_entry, key), + .key_len = sizeof(struct mlx5_esw_bridge_mdb_key), + .head_offset = offsetof(struct mlx5_esw_bridge_mdb_entry, ht_node), + .automatic_shrinking = true, +}; + +int mlx5_esw_bridge_mdb_init(struct mlx5_esw_bridge *bridge) +{ + INIT_LIST_HEAD(&bridge->mdb_list); + return rhashtable_init(&bridge->mdb_ht, &mdb_ht_params); +} + +void mlx5_esw_bridge_mdb_cleanup(struct mlx5_esw_bridge *bridge) +{ + rhashtable_destroy(&bridge->mdb_ht); +} + +static struct mlx5_esw_bridge_port * +mlx5_esw_bridge_mdb_port_lookup(struct mlx5_esw_bridge_port *port, + struct mlx5_esw_bridge_mdb_entry *entry) +{ + return xa_load(&entry->ports, mlx5_esw_bridge_port_key(port)); +} + +static int mlx5_esw_bridge_mdb_port_insert(struct mlx5_esw_bridge_port *port, + struct mlx5_esw_bridge_mdb_entry *entry) +{ + int err = xa_insert(&entry->ports, mlx5_esw_bridge_port_key(port), port, GFP_KERNEL); + + if (!err) + entry->num_ports++; + return err; +} + +static void mlx5_esw_bridge_mdb_port_remove(struct mlx5_esw_bridge_port *port, + struct mlx5_esw_bridge_mdb_entry *entry) +{ + xa_erase(&entry->ports, mlx5_esw_bridge_port_key(port)); + entry->num_ports--; +} + +static struct mlx5_flow_handle * +mlx5_esw_bridge_mdb_flow_create(u16 esw_owner_vhca_id, struct mlx5_esw_bridge_mdb_entry *entry, + struct mlx5_esw_bridge *bridge) +{ + struct mlx5_flow_act flow_act = { + .action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST, + .flags = FLOW_ACT_NO_APPEND | FLOW_ACT_IGNORE_FLOW_LEVEL, + }; + int num_dests = entry->num_ports, i = 0; + struct mlx5_flow_destination *dests; + struct mlx5_esw_bridge_port *port; + struct mlx5_flow_spec *rule_spec; + struct mlx5_flow_handle 
*handle; + u8 *dmac_v, *dmac_c; + unsigned long idx; + + rule_spec = kvzalloc(sizeof(*rule_spec), GFP_KERNEL); + if (!rule_spec) + return ERR_PTR(-ENOMEM); + + dests = kvcalloc(num_dests, sizeof(*dests), GFP_KERNEL); + if (!dests) { + kvfree(rule_spec); + return ERR_PTR(-ENOMEM); + } + + xa_for_each(&entry->ports, idx, port) { + dests[i].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE; + dests[i].ft = port->mcast.ft; + i++; + } + + rule_spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS; + dmac_v = MLX5_ADDR_OF(fte_match_param, rule_spec->match_value, outer_headers.dmac_47_16); + ether_addr_copy(dmac_v, entry->key.addr); + dmac_c = MLX5_ADDR_OF(fte_match_param, rule_spec->match_criteria, outer_headers.dmac_47_16); + eth_broadcast_addr(dmac_c); + + if (entry->key.vid) { + if (bridge->vlan_proto == ETH_P_8021Q) { + MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria, + outer_headers.cvlan_tag); + MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_value, + outer_headers.cvlan_tag); + } else if (bridge->vlan_proto == ETH_P_8021AD) { + MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria, + outer_headers.svlan_tag); + MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_value, + outer_headers.svlan_tag); + } + MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria, + outer_headers.first_vid); + MLX5_SET(fte_match_param, rule_spec->match_value, outer_headers.first_vid, + entry->key.vid); + } + + handle = mlx5_add_flow_rules(bridge->egress_ft, rule_spec, &flow_act, dests, num_dests); + + kvfree(dests); + kvfree(rule_spec); + return handle; +} + +static int +mlx5_esw_bridge_port_mdb_offload(struct mlx5_esw_bridge_port *port, + struct mlx5_esw_bridge_mdb_entry *entry) +{ + struct mlx5_flow_handle *handle; + + handle = mlx5_esw_bridge_mdb_flow_create(port->esw_owner_vhca_id, entry, port->bridge); + if (entry->egress_handle) { + mlx5_del_flow_rules(entry->egress_handle); + entry->egress_handle = NULL; + } + if (IS_ERR(handle)) + return PTR_ERR(handle); + + entry->egress_handle = handle; + return 0; +} + +static struct mlx5_esw_bridge_mdb_entry * +mlx5_esw_bridge_mdb_lookup(struct mlx5_esw_bridge *bridge, + const unsigned char *addr, u16 vid) +{ + struct mlx5_esw_bridge_mdb_key key = {}; + + ether_addr_copy(key.addr, addr); + key.vid = vid; + return rhashtable_lookup_fast(&bridge->mdb_ht, &key, mdb_ht_params); +} + +static struct mlx5_esw_bridge_mdb_entry * +mlx5_esw_bridge_port_mdb_entry_init(struct mlx5_esw_bridge_port *port, + const unsigned char *addr, u16 vid) +{ + struct mlx5_esw_bridge *bridge = port->bridge; + struct mlx5_esw_bridge_mdb_entry *entry; + int err; + + entry = kvzalloc(sizeof(*entry), GFP_KERNEL); + if (!entry) + return ERR_PTR(-ENOMEM); + + ether_addr_copy(entry->key.addr, addr); + entry->key.vid = vid; + xa_init(&entry->ports); + err = rhashtable_insert_fast(&bridge->mdb_ht, &entry->ht_node, mdb_ht_params); + if (err) + goto err_ht_insert; + + list_add(&entry->list, &bridge->mdb_list); + + return entry; + +err_ht_insert: + xa_destroy(&entry->ports); + kvfree(entry); + return ERR_PTR(err); +} + +static void mlx5_esw_bridge_port_mdb_entry_cleanup(struct mlx5_esw_bridge *bridge, + struct mlx5_esw_bridge_mdb_entry *entry) +{ + if (entry->egress_handle) + mlx5_del_flow_rules(entry->egress_handle); + list_del(&entry->list); + rhashtable_remove_fast(&bridge->mdb_ht, &entry->ht_node, mdb_ht_params); + xa_destroy(&entry->ports); + kvfree(entry); +} + +int mlx5_esw_bridge_port_mdb_attach(struct mlx5_esw_bridge_port *port, const unsigned char *addr, + u16 vid) +{ + 
struct mlx5_esw_bridge *bridge = port->bridge; + struct mlx5_esw_bridge_mdb_entry *entry; + int err; + + if (!(bridge->flags & MLX5_ESW_BRIDGE_MCAST_FLAG)) + return -EOPNOTSUPP; + + entry = mlx5_esw_bridge_mdb_lookup(bridge, addr, vid); + if (entry) { + if (mlx5_esw_bridge_mdb_port_lookup(port, entry)) { + esw_warn(bridge->br_offloads->esw->dev, "MDB attach entry is already attached to port (MAC=%pM,vid=%u,vport=%u)\n", + addr, vid, port->vport_num); + return 0; + } + } else { + entry = mlx5_esw_bridge_port_mdb_entry_init(port, addr, vid); + if (IS_ERR(entry)) { + err = PTR_ERR(entry); + esw_warn(bridge->br_offloads->esw->dev, "MDB attach failed to init entry (MAC=%pM,vid=%u,vport=%u,err=%d)\n", + addr, vid, port->vport_num, err); + return err; + } + } + + err = mlx5_esw_bridge_mdb_port_insert(port, entry); + if (err) { + if (!entry->num_ports) + mlx5_esw_bridge_port_mdb_entry_cleanup(bridge, entry); /* new mdb entry */ + esw_warn(bridge->br_offloads->esw->dev, + "MDB attach failed to insert port (MAC=%pM,vid=%u,vport=%u,err=%d)\n", + addr, vid, port->vport_num, err); + return err; + } + + err = mlx5_esw_bridge_port_mdb_offload(port, entry); + if (err) + /* Single mdb can be used by multiple ports, so just log the + * error and continue. + */ + esw_warn(bridge->br_offloads->esw->dev, "MDB attach failed to offload (MAC=%pM,vid=%u,vport=%u,err=%d)\n", + addr, vid, port->vport_num, err); + return 0; +} + +static void mlx5_esw_bridge_port_mdb_entry_detach(struct mlx5_esw_bridge_port *port, + struct mlx5_esw_bridge_mdb_entry *entry) +{ + struct mlx5_esw_bridge *bridge = port->bridge; + int err; + + mlx5_esw_bridge_mdb_port_remove(port, entry); + if (!entry->num_ports) { + mlx5_esw_bridge_port_mdb_entry_cleanup(bridge, entry); + return; + } + + err = mlx5_esw_bridge_port_mdb_offload(port, entry); + if (err) + /* Single mdb can be used by multiple ports, so just log the + * error and continue. 
+ */ + esw_warn(bridge->br_offloads->esw->dev, "MDB detach failed to offload (MAC=%pM,vid=%u,vport=%u)\n", + entry->key.addr, entry->key.vid, port->vport_num); +} + +void mlx5_esw_bridge_port_mdb_detach(struct mlx5_esw_bridge_port *port, const unsigned char *addr, + u16 vid) +{ + struct mlx5_esw_bridge *bridge = port->bridge; + struct mlx5_esw_bridge_mdb_entry *entry; + + entry = mlx5_esw_bridge_mdb_lookup(bridge, addr, vid); + if (!entry) { + esw_debug(bridge->br_offloads->esw->dev, + "MDB detach entry not found (MAC=%pM,vid=%u,vport=%u)\n", + addr, vid, port->vport_num); + return; + } + + if (!mlx5_esw_bridge_mdb_port_lookup(port, entry)) { + esw_debug(bridge->br_offloads->esw->dev, + "MDB detach entry not attached to the port (MAC=%pM,vid=%u,vport=%u)\n", + addr, vid, port->vport_num); + return; + } + + mlx5_esw_bridge_port_mdb_entry_detach(port, entry); +} + +void mlx5_esw_bridge_port_mdb_vlan_flush(struct mlx5_esw_bridge_port *port, + struct mlx5_esw_bridge_vlan *vlan) +{ + struct mlx5_esw_bridge *bridge = port->bridge; + struct mlx5_esw_bridge_mdb_entry *entry, *tmp; + + list_for_each_entry_safe(entry, tmp, &bridge->mdb_list, list) + if (entry->key.vid == vlan->vid && mlx5_esw_bridge_mdb_port_lookup(port, entry)) + mlx5_esw_bridge_port_mdb_entry_detach(port, entry); +} + +static void mlx5_esw_bridge_port_mdb_flush(struct mlx5_esw_bridge_port *port) +{ + struct mlx5_esw_bridge *bridge = port->bridge; + struct mlx5_esw_bridge_mdb_entry *entry, *tmp; + + list_for_each_entry_safe(entry, tmp, &bridge->mdb_list, list) + if (mlx5_esw_bridge_mdb_port_lookup(port, entry)) + mlx5_esw_bridge_port_mdb_entry_detach(port, entry); +} + +void mlx5_esw_bridge_mdb_flush(struct mlx5_esw_bridge *bridge) +{ + struct mlx5_esw_bridge_mdb_entry *entry, *tmp; + + list_for_each_entry_safe(entry, tmp, &bridge->mdb_list, list) + mlx5_esw_bridge_port_mdb_entry_cleanup(bridge, entry); +} static int mlx5_esw_bridge_port_mcast_fts_init(struct mlx5_esw_bridge_port *port, struct mlx5_esw_bridge *bridge) { @@ -457,6 +751,7 @@ int mlx5_esw_bridge_port_mcast_init(struct mlx5_esw_bridge_port *port) void mlx5_esw_bridge_port_mcast_cleanup(struct mlx5_esw_bridge_port *port) { + mlx5_esw_bridge_port_mdb_flush(port); mlx5_esw_bridge_port_mcast_fhs_cleanup(port); mlx5_esw_bridge_port_mcast_fgs_cleanup(port); mlx5_esw_bridge_port_mcast_fts_cleanup(port); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h index 36ff32001ce8..849028f94be2 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h @@ -125,6 +125,11 @@ struct mlx5_esw_bridge_fdb_key { u16 vid; }; +struct mlx5_esw_bridge_mdb_key { + unsigned char addr[ETH_ALEN]; + u16 vid; +}; + enum { MLX5_ESW_BRIDGE_FLAG_ADDED_BY_USER = BIT(0), MLX5_ESW_BRIDGE_FLAG_PEER = BIT(1), @@ -151,6 +156,16 @@ struct mlx5_esw_bridge_fdb_entry { struct mlx5_flow_handle *filter_handle; }; +struct mlx5_esw_bridge_mdb_entry { + struct mlx5_esw_bridge_mdb_key key; + struct rhash_head ht_node; + struct list_head list; + struct xarray ports; + int num_ports; + + struct mlx5_flow_handle *egress_handle; +}; + struct mlx5_esw_bridge_vlan { u16 vid; u16 flags; @@ -188,6 +203,9 @@ struct mlx5_esw_bridge { struct list_head fdb_list; struct rhashtable fdb_ht; + struct list_head mdb_list; + struct rhashtable mdb_ht; + struct mlx5_flow_table *egress_ft; struct mlx5_flow_group *egress_vlan_fg; struct mlx5_flow_group *egress_qinq_fg; @@ -202,6 +220,7 @@ 
struct mlx5_esw_bridge { struct mlx5_flow_table *mlx5_esw_bridge_table_create(int max_fte, u32 level, struct mlx5_eswitch *esw); +unsigned long mlx5_esw_bridge_port_key(struct mlx5_esw_bridge_port *port); int mlx5_esw_bridge_port_mcast_init(struct mlx5_esw_bridge_port *port); void mlx5_esw_bridge_port_mcast_cleanup(struct mlx5_esw_bridge_port *port); @@ -212,4 +231,14 @@ void mlx5_esw_bridge_vlan_mcast_cleanup(struct mlx5_esw_bridge_vlan *vlan); int mlx5_esw_bridge_mcast_enable(struct mlx5_esw_bridge *bridge); void mlx5_esw_bridge_mcast_disable(struct mlx5_esw_bridge *bridge); +int mlx5_esw_bridge_mdb_init(struct mlx5_esw_bridge *bridge); +void mlx5_esw_bridge_mdb_cleanup(struct mlx5_esw_bridge *bridge); +int mlx5_esw_bridge_port_mdb_attach(struct mlx5_esw_bridge_port *port, const unsigned char *addr, + u16 vid); +void mlx5_esw_bridge_port_mdb_detach(struct mlx5_esw_bridge_port *port, const unsigned char *addr, + u16 vid); +void mlx5_esw_bridge_port_mdb_vlan_flush(struct mlx5_esw_bridge_port *port, + struct mlx5_esw_bridge_vlan *vlan); +void mlx5_esw_bridge_mdb_flush(struct mlx5_esw_bridge *bridge); + #endif /* _MLX5_ESW_BRIDGE_PRIVATE_ */ From patchwork Wed Apr 12 04:07:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13208424 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 32D54C77B73 for ; Wed, 12 Apr 2023 04:08:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229671AbjDLEI3 (ORCPT ); Wed, 12 Apr 2023 00:08:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43956 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229701AbjDLEIS (ORCPT ); Wed, 12 Apr 2023 00:08:18 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 74A0C526A for ; Tue, 11 Apr 2023 21:08:09 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 012E262DA9 for ; Wed, 12 Apr 2023 04:08:09 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 53A94C433EF; Wed, 12 Apr 2023 04:08:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1681272488; bh=032JmDWv8d5jlUbjFKOcXwfkSvYVfLttYNSAmhacHPE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ekj+Xv9HgRWpX5j5QEcXc3uMCeVySgpLuZRH2elmqxByjVa/Ht2MnAaWMBo1Unhxp WOv+NFxbBJTI+tnZp3Pu18UjX1i+61bZ9jhHQ0frnOZS5K5lrMAF2znGJJkUiBGJON CYx+r5E3oj0OciYI2f2ogI0hr81l3+GQfWh7+1GkQlYTuBeRbnHYrBpJ8WOZT6D7MC Ew7CvK/6xRlkhVpYyU2RSihE2uZE6TgJL4qL2o0DIdPEcfTFMbsIbfsqA5cBxbFKga jluI/YJfNPlZbjE9oiYdyPo0BRWG4tOP2dC+sJ2WOu4GLQVWTx7ZIJATXq0w81k1yF HQVF8xstgqyjA== From: Saeed Mahameed To: "David S. 
Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Vlad Buslov , Maor Dickman , Roi Dayan Subject: [net-next 09/15] net/mlx5: Bridge, add tracepoints for multicast Date: Tue, 11 Apr 2023 21:07:46 -0700 Message-Id: <20230412040752.14220-10-saeed@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230412040752.14220-1-saeed@kernel.org> References: <20230412040752.14220-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Vlad Buslov Pass target struct net_device to mdb attach/detach handler in order to expose the port name to the new tracepoints. Implemented following tracepoints: - Attach mdb to port. - Detach mdb from port. Usage example: ># cd /sys/kernel/debug/tracing ># echo mlx5:mlx5_esw_bridge_port_mdb_attach >> set_event ># cat trace ... kworker/0:0-19071 [000] ..... 259004.253848: mlx5_esw_bridge_port_mdb_attach: net_device=enp8s0f0_0 addr=33:33:ff:00:00:01 vid=0 num_ports=1 offloaded=1 Signed-off-by: Vlad Buslov Reviewed-by: Maor Dickman Reviewed-by: Roi Dayan Signed-off-by: Saeed Mahameed --- .../mellanox/mlx5/core/en/rep/bridge.c | 4 +-- .../ethernet/mellanox/mlx5/core/esw/bridge.c | 14 ++++---- .../ethernet/mellanox/mlx5/core/esw/bridge.h | 10 +++--- .../mellanox/mlx5/core/esw/bridge_mcast.c | 12 ++++--- .../mellanox/mlx5/core/esw/bridge_priv.h | 8 ++--- .../mlx5/core/esw/diag/bridge_tracepoint.h | 35 +++++++++++++++++++ 6 files changed, 63 insertions(+), 20 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c index dd0dd3f028a3..fd191925ab4b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c @@ -238,7 +238,7 @@ mlx5_esw_bridge_port_obj_add(struct net_device *dev, break; case SWITCHDEV_OBJ_ID_PORT_MDB: mdb = SWITCHDEV_OBJ_PORT_MDB(obj); - err = mlx5_esw_bridge_port_mdb_add(vport_num, esw_owner_vhca_id, mdb->addr, + err = mlx5_esw_bridge_port_mdb_add(dev, vport_num, esw_owner_vhca_id, mdb->addr, mdb->vid, br_offloads, extack); break; default: @@ -270,7 +270,7 @@ mlx5_esw_bridge_port_obj_del(struct net_device *dev, break; case SWITCHDEV_OBJ_ID_PORT_MDB: mdb = SWITCHDEV_OBJ_PORT_MDB(obj); - mlx5_esw_bridge_port_mdb_del(vport_num, esw_owner_vhca_id, mdb->addr, mdb->vid, + mlx5_esw_bridge_port_mdb_del(dev, vport_num, esw_owner_vhca_id, mdb->addr, mdb->vid, br_offloads); break; default: diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c index be4787539c6c..1ba03e219111 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c @@ -1803,8 +1803,9 @@ void mlx5_esw_bridge_update(struct mlx5_esw_bridge_offloads *br_offloads) } } -int mlx5_esw_bridge_port_mdb_add(u16 vport_num, u16 esw_owner_vhca_id, const unsigned char *addr, - u16 vid, struct mlx5_esw_bridge_offloads *br_offloads, +int mlx5_esw_bridge_port_mdb_add(struct net_device *dev, u16 vport_num, u16 esw_owner_vhca_id, + const unsigned char *addr, u16 vid, + struct mlx5_esw_bridge_offloads *br_offloads, struct netlink_ext_ack *extack) { struct mlx5_esw_bridge_vlan *vlan; @@ -1837,7 +1838,7 @@ int mlx5_esw_bridge_port_mdb_add(u16 vport_num, u16 esw_owner_vhca_id, const uns } } - err = mlx5_esw_bridge_port_mdb_attach(port, addr, vid); + err = 
mlx5_esw_bridge_port_mdb_attach(dev, port, addr, vid); if (err) { NL_SET_ERR_MSG_FMT_MOD(extack, "Failed to add MDB (MAC=%pM,vid=%u,vport=%u)\n", addr, vid, vport_num); @@ -1847,8 +1848,9 @@ int mlx5_esw_bridge_port_mdb_add(u16 vport_num, u16 esw_owner_vhca_id, const uns return 0; } -void mlx5_esw_bridge_port_mdb_del(u16 vport_num, u16 esw_owner_vhca_id, const unsigned char *addr, - u16 vid, struct mlx5_esw_bridge_offloads *br_offloads) +void mlx5_esw_bridge_port_mdb_del(struct net_device *dev, u16 vport_num, u16 esw_owner_vhca_id, + const unsigned char *addr, u16 vid, + struct mlx5_esw_bridge_offloads *br_offloads) { struct mlx5_esw_bridge_port *port; @@ -1856,7 +1858,7 @@ void mlx5_esw_bridge_port_mdb_del(u16 vport_num, u16 esw_owner_vhca_id, const un if (!port) return; - mlx5_esw_bridge_port_mdb_detach(port, addr, vid); + mlx5_esw_bridge_port_mdb_detach(dev, port, addr, vid); } static void mlx5_esw_bridge_flush(struct mlx5_esw_bridge_offloads *br_offloads) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.h index 9cab66467289..a9dd18c73d6a 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.h @@ -79,10 +79,12 @@ int mlx5_esw_bridge_port_vlan_add(u16 vport_num, u16 esw_owner_vhca_id, u16 vid, void mlx5_esw_bridge_port_vlan_del(u16 vport_num, u16 esw_owner_vhca_id, u16 vid, struct mlx5_esw_bridge_offloads *br_offloads); -int mlx5_esw_bridge_port_mdb_add(u16 vport_num, u16 esw_owner_vhca_id, const unsigned char *addr, - u16 vid, struct mlx5_esw_bridge_offloads *br_offloads, +int mlx5_esw_bridge_port_mdb_add(struct net_device *dev, u16 vport_num, u16 esw_owner_vhca_id, + const unsigned char *addr, u16 vid, + struct mlx5_esw_bridge_offloads *br_offloads, struct netlink_ext_ack *extack); -void mlx5_esw_bridge_port_mdb_del(u16 vport_num, u16 esw_owner_vhca_id, const unsigned char *addr, - u16 vid, struct mlx5_esw_bridge_offloads *br_offloads); +void mlx5_esw_bridge_port_mdb_del(struct net_device *dev, u16 vport_num, u16 esw_owner_vhca_id, + const unsigned char *addr, u16 vid, + struct mlx5_esw_bridge_offloads *br_offloads); #endif /* __MLX5_ESW_BRIDGE_H__ */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c index d17fe6d374b5..2eae594a5e80 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c @@ -5,6 +5,7 @@ #include "bridge.h" #include "eswitch.h" #include "bridge_priv.h" +#include "diag/bridge_tracepoint.h" static const struct rhashtable_params mdb_ht_params = { .key_offset = offsetof(struct mlx5_esw_bridge_mdb_entry, key), @@ -180,8 +181,8 @@ static void mlx5_esw_bridge_port_mdb_entry_cleanup(struct mlx5_esw_bridge *bridg kvfree(entry); } -int mlx5_esw_bridge_port_mdb_attach(struct mlx5_esw_bridge_port *port, const unsigned char *addr, - u16 vid) +int mlx5_esw_bridge_port_mdb_attach(struct net_device *dev, struct mlx5_esw_bridge_port *port, + const unsigned char *addr, u16 vid) { struct mlx5_esw_bridge *bridge = port->bridge; struct mlx5_esw_bridge_mdb_entry *entry; @@ -224,6 +225,8 @@ int mlx5_esw_bridge_port_mdb_attach(struct mlx5_esw_bridge_port *port, const uns */ esw_warn(bridge->br_offloads->esw->dev, "MDB attach failed to offload (MAC=%pM,vid=%u,vport=%u,err=%d)\n", addr, vid, port->vport_num, err); + + trace_mlx5_esw_bridge_port_mdb_attach(dev, entry); return 0; } @@ -248,8 +251,8 
@@ static void mlx5_esw_bridge_port_mdb_entry_detach(struct mlx5_esw_bridge_port *p entry->key.addr, entry->key.vid, port->vport_num); } -void mlx5_esw_bridge_port_mdb_detach(struct mlx5_esw_bridge_port *port, const unsigned char *addr, - u16 vid) +void mlx5_esw_bridge_port_mdb_detach(struct net_device *dev, struct mlx5_esw_bridge_port *port, + const unsigned char *addr, u16 vid) { struct mlx5_esw_bridge *bridge = port->bridge; struct mlx5_esw_bridge_mdb_entry *entry; @@ -269,6 +272,7 @@ void mlx5_esw_bridge_port_mdb_detach(struct mlx5_esw_bridge_port *port, const un return; } + trace_mlx5_esw_bridge_port_mdb_detach(dev, entry); mlx5_esw_bridge_port_mdb_entry_detach(port, entry); } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h index 849028f94be2..c9595801bdb4 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h @@ -233,10 +233,10 @@ void mlx5_esw_bridge_mcast_disable(struct mlx5_esw_bridge *bridge); int mlx5_esw_bridge_mdb_init(struct mlx5_esw_bridge *bridge); void mlx5_esw_bridge_mdb_cleanup(struct mlx5_esw_bridge *bridge); -int mlx5_esw_bridge_port_mdb_attach(struct mlx5_esw_bridge_port *port, const unsigned char *addr, - u16 vid); -void mlx5_esw_bridge_port_mdb_detach(struct mlx5_esw_bridge_port *port, const unsigned char *addr, - u16 vid); +int mlx5_esw_bridge_port_mdb_attach(struct net_device *dev, struct mlx5_esw_bridge_port *port, + const unsigned char *addr, u16 vid); +void mlx5_esw_bridge_port_mdb_detach(struct net_device *dev, struct mlx5_esw_bridge_port *port, + const unsigned char *addr, u16 vid); void mlx5_esw_bridge_port_mdb_vlan_flush(struct mlx5_esw_bridge_port *port, struct mlx5_esw_bridge_vlan *vlan); void mlx5_esw_bridge_mdb_flush(struct mlx5_esw_bridge *bridge); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/bridge_tracepoint.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/bridge_tracepoint.h index 51ac24e6ec3c..1808da214094 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/bridge_tracepoint.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/diag/bridge_tracepoint.h @@ -110,6 +110,41 @@ DEFINE_EVENT(mlx5_esw_bridge_port_template, TP_ARGS(port) ); +DECLARE_EVENT_CLASS(mlx5_esw_bridge_mdb_port_change_template, + TP_PROTO(const struct net_device *dev, + const struct mlx5_esw_bridge_mdb_entry *mdb), + TP_ARGS(dev, mdb), + TP_STRUCT__entry( + __array(char, dev_name, IFNAMSIZ) + __array(unsigned char, addr, ETH_ALEN) + __field(u16, vid) + __field(int, num_ports) + __field(bool, offloaded)), + TP_fast_assign( + strscpy(__entry->dev_name, netdev_name(dev), IFNAMSIZ); + memcpy(__entry->addr, mdb->key.addr, ETH_ALEN); + __entry->vid = mdb->key.vid; + __entry->num_ports = mdb->num_ports; + __entry->offloaded = mdb->egress_handle;), + TP_printk("net_device=%s addr=%pM vid=%u num_ports=%d offloaded=%d", + __entry->dev_name, + __entry->addr, + __entry->vid, + __entry->num_ports, + __entry->offloaded)); + +DEFINE_EVENT(mlx5_esw_bridge_mdb_port_change_template, + mlx5_esw_bridge_port_mdb_attach, + TP_PROTO(const struct net_device *dev, + const struct mlx5_esw_bridge_mdb_entry *mdb), + TP_ARGS(dev, mdb)); + +DEFINE_EVENT(mlx5_esw_bridge_mdb_port_change_template, + mlx5_esw_bridge_port_mdb_detach, + TP_PROTO(const struct net_device *dev, + const struct mlx5_esw_bridge_mdb_entry *mdb), + TP_ARGS(dev, mdb)); + #endif /* This part must be outside protection */ From patchwork Wed Apr 12 04:07:47 
2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13208429 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 24791C77B6E for ; Wed, 12 Apr 2023 04:08:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229758AbjDLEIh (ORCPT ); Wed, 12 Apr 2023 00:08:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43952 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229704AbjDLEIS (ORCPT ); Wed, 12 Apr 2023 00:08:18 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F2D714696 for ; Tue, 11 Apr 2023 21:08:09 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id D4BBA62DA5 for ; Wed, 12 Apr 2023 04:08:09 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 34D64C4339B; Wed, 12 Apr 2023 04:08:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1681272489; bh=RNVh7v1CPtfo+tEEk56vxjRRrlZquvkdT0kmb1NPMwg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fjAsqEbzqgLqtmdpJsGxq6DpyJEKvxfNFY44DcOy0gP314aUZnFBWOb9Dedr6y+aC wc6bddZ9xKK2na7YB7Uf0ZY64hP2xifzyBkdb0NusxOLDbHGzAXpDDU6R7kY7PAnhj lIgWEah5pQG5gXYuC28bTJcpDgm3CXB+e0C7II52LZD3TWtSSjFlC+s9UZip96V3OX 1d7gwfV47fG7xJz8aVgmdr0DTTTW3j/WyWmjSUbWRdw43JZS+ScFcpv+DGsEEcdIIH B8vs/0dfqfP9Zatw2jUvhEFtxIFjEII8cNHpQ2SQ4n34YpVEA/PhmR4ZQ+RR1mWUf/ dTB/vlcjiV1FA== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Parav Pandit , Bodong Wang Subject: [net-next 10/15] net/mlx5: Create a new profile for SFs Date: Tue, 11 Apr 2023 21:07:47 -0700 Message-Id: <20230412040752.14220-11-saeed@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230412040752.14220-1-saeed@kernel.org> References: <20230412040752.14220-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Parav Pandit Create a new profile for SFs in order to disable the command cache. Each function's command cache consumes ~500KB of memory; when using a large number of SFs, this saving is notable on memory-constrained systems. Use a new profile to provide for future differences between SFs and PFs. The mr_cache is not used for non-PF functions, so it is excluded from the new profile.
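To make the shape of the change concrete: the number of command caches becomes a per-profile field rather than a compile-time constant, so an SF profile can set it to zero and skip cache setup entirely. Below is a minimal userspace sketch of that pattern; all names and sizes here are illustrative stand-ins, not the driver's actual structures:

#include <stdio.h>

#define NUM_COMMAND_CACHES 5	/* default for PF-style profiles */

struct cmd_cache {
	unsigned int max_inbox_size;
	unsigned int num_entries;
};

struct profile {
	const char *name;
	unsigned int num_cmd_caches;	/* 0 disables the command cache */
	struct cmd_cache cache[NUM_COMMAND_CACHES];
};

/* Initialize only as many caches as the profile asks for; with
 * num_cmd_caches == 0 (the SF case) the loop body never runs and
 * the cache memory is never populated. */
static void create_msg_cache(struct profile *prof)
{
	unsigned int k;

	for (k = 0; k < prof->num_cmd_caches; k++) {
		prof->cache[k].max_inbox_size = 16u << k;
		prof->cache[k].num_entries = 8;
	}
	printf("%s: %u command cache(s) initialized\n",
	       prof->name, prof->num_cmd_caches);
}

int main(void)
{
	struct profile pf = { .name = "default profile",
			      .num_cmd_caches = NUM_COMMAND_CACHES };
	struct profile sf = { .name = "SF profile",
			      .num_cmd_caches = 0 };

	create_msg_cache(&pf);
	create_msg_cache(&sf);
	return 0;
}

The per-profile field also gives a natural place to hang further PF/SF differences later, which matches the stated motivation for introducing a dedicated SF profile.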
Signed-off-by: Parav Pandit Reviewed-by: Bodong Wang Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/cmd.c | 6 +++--- drivers/net/ethernet/mellanox/mlx5/core/main.c | 9 +++++++++ drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h | 1 + drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c | 2 +- include/linux/mlx5/driver.h | 1 + 5 files changed, 15 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c index b00e33ed05e9..d53de39539a8 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c @@ -1802,7 +1802,7 @@ static struct mlx5_cmd_msg *alloc_msg(struct mlx5_core_dev *dev, int in_size, if (in_size <= 16) goto cache_miss; - for (i = 0; i < MLX5_NUM_COMMAND_CACHES; i++) { + for (i = 0; i < dev->profile.num_cmd_caches; i++) { ch = &cmd->cache[i]; if (in_size > ch->max_inbox_size) continue; @@ -2097,7 +2097,7 @@ static void destroy_msg_cache(struct mlx5_core_dev *dev) struct mlx5_cmd_msg *n; int i; - for (i = 0; i < MLX5_NUM_COMMAND_CACHES; i++) { + for (i = 0; i < dev->profile.num_cmd_caches; i++) { ch = &dev->cmd.cache[i]; list_for_each_entry_safe(msg, n, &ch->head, list) { list_del(&msg->list); @@ -2127,7 +2127,7 @@ static void create_msg_cache(struct mlx5_core_dev *dev) int k; /* Initialize and fill the caches with initial entries */ - for (k = 0; k < MLX5_NUM_COMMAND_CACHES; k++) { + for (k = 0; k < dev->profile.num_cmd_caches; k++) { ch = &cmd->cache[k]; spin_lock_init(&ch->lock); INIT_LIST_HEAD(&ch->head); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c index f95df73d1089..a95d1218def9 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c @@ -100,15 +100,19 @@ enum { static struct mlx5_profile profile[] = { [0] = { .mask = 0, + .num_cmd_caches = MLX5_NUM_COMMAND_CACHES, }, [1] = { .mask = MLX5_PROF_MASK_QP_SIZE, .log_max_qp = 12, + .num_cmd_caches = MLX5_NUM_COMMAND_CACHES, + }, [2] = { .mask = MLX5_PROF_MASK_QP_SIZE | MLX5_PROF_MASK_MR_CACHE, .log_max_qp = LOG_MAX_SUPPORTED_QPS, + .num_cmd_caches = MLX5_NUM_COMMAND_CACHES, .mr_cache[0] = { .size = 500, .limit = 250 @@ -174,6 +178,11 @@ static struct mlx5_profile profile[] = { .limit = 4 }, }, + [3] = { + .mask = MLX5_PROF_MASK_QP_SIZE, + .log_max_qp = LOG_MAX_SUPPORTED_QPS, + .num_cmd_caches = 0, + }, }; static int wait_fw_init(struct mlx5_core_dev *dev, u32 max_wait_mili, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h index be0785f83083..5eaab99678ee 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h @@ -142,6 +142,7 @@ enum mlx5_semaphore_space_address { }; #define MLX5_DEFAULT_PROF 2 +#define MLX5_SF_PROF 3 static inline int mlx5_flexible_inlen(struct mlx5_core_dev *dev, size_t fixed, size_t item_size, size_t num_items, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c b/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c index a7377619ba6f..e2f26d0bc615 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c @@ -28,7 +28,7 @@ static int mlx5_sf_dev_probe(struct auxiliary_device *adev, const struct auxilia mdev->priv.adev_idx = adev->id; sf_dev->mdev = mdev; - err = mlx5_mdev_init(mdev, MLX5_DEFAULT_PROF); + err 
= mlx5_mdev_init(mdev, MLX5_SF_PROF); if (err) { mlx5_core_warn(mdev, "mlx5_mdev_init on err=%d\n", err); goto mdev_err; diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h index f243bd10a5e1..135a3c8d8237 100644 --- a/include/linux/mlx5/driver.h +++ b/include/linux/mlx5/driver.h @@ -751,6 +751,7 @@ enum { struct mlx5_profile { u64 mask; u8 log_max_qp; + u8 num_cmd_caches; struct { int size; int limit; From patchwork Wed Apr 12 04:07:48 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13208426 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9B01CC77B73 for ; Wed, 12 Apr 2023 04:08:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229752AbjDLEId (ORCPT ); Wed, 12 Apr 2023 00:08:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44096 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229651AbjDLEIZ (ORCPT ); Wed, 12 Apr 2023 00:08:25 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D44854ECD for ; Tue, 11 Apr 2023 21:08:10 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id B556262DAE for ; Wed, 12 Apr 2023 04:08:10 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 14609C433EF; Wed, 12 Apr 2023 04:08:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1681272490; bh=vcijwLJLYxksmAzd2HY6o0VW0kdGSFz/N1Mh45n+HNo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=a5+EZf/xSl9h47H+35JwsfoAIVXksLfHWp/We0YiOZYmtj9VDLMVTLqEXyXf0EYtF OG3c9tAYlXcrvxbVTuscKQ25F8e2aDHZnl8dZSpMtExyWU0tDkqDOe2lQfJWjEVwd7 YPpMq+F35ucp4M5ZM9p8PpvADlRSweN0FMGXgEUfpo/AbO1tf1a+iOVzUQD4KSasES MeavoVbog3bHgu7jKCfMJFjOhnXV8OaCpEk8IxNiWh8TD+3fcLozQ1cHGu18CuYpxR QiYM3lCdjDuscCyLpvtpvUwwt0dWKt7Ohwjq8bFsM7If4ssAZ2Ppx01rlDxSqA4QED hfSjTsmloutWA== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Yevgeny Kliteynik , Alex Vesker Subject: [net-next 11/15] net/mlx5: DR, Set counter ID on the last STE for STEv1 TX Date: Tue, 11 Apr 2023 21:07:48 -0700 Message-Id: <20230412040752.14220-12-saeed@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230412040752.14220-1-saeed@kernel.org> References: <20230412040752.14220-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Yevgeny Kliteynik In STEv1 counter action can be set either by filling counter ID on STE, in which case it is executed before other actions on this STE, or as a single action, in which case it is executed in accordance with the actions order. FW steering on STEv1 devices implements counter as counter ID on STE, and this counter is set on the last STE. 
Fix SMFS to be consistent with this behaviour: move the TX counter to the last STE. This way the counter will include all actions of the previous STEs that might have changed the packet header length, e.g. encap, vlan push, etc. Signed-off-by: Yevgeny Kliteynik Reviewed-by: Alex Vesker Signed-off-by: Saeed Mahameed --- .../net/ethernet/mellanox/mlx5/core/steering/dr_ste_v1.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v1.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v1.c index 084145f18084..27cc6931bbde 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v1.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v1.c @@ -604,9 +604,6 @@ void dr_ste_v1_set_actions_tx(struct mlx5dr_domain *dmn, allow_modify_hdr = false; } - if (action_type_set[DR_ACTION_TYP_CTR]) - dr_ste_v1_set_counter_id(last_ste, attr->ctr_id); - if (action_type_set[DR_ACTION_TYP_MODIFY_HDR]) { if (!allow_modify_hdr || action_sz < DR_STE_ACTION_DOUBLE_SZ) { dr_ste_v1_arr_init_next_match(&last_ste, added_stes, @@ -724,6 +721,10 @@ void dr_ste_v1_set_actions_tx(struct mlx5dr_domain *dmn, attr->range.max); } + /* set counter ID on the last STE to adhere to DMFS behavior */ + if (action_type_set[DR_ACTION_TYP_CTR]) + dr_ste_v1_set_counter_id(last_ste, attr->ctr_id); + dr_ste_v1_set_hit_gvmi(last_ste, attr->hit_gvmi); dr_ste_v1_set_hit_addr(last_ste, attr->final_icm_addr, 1); } From patchwork Wed Apr 12 04:07:49 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13208427 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C9E6EC77B6E for ; Wed, 12 Apr 2023 04:08:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229694AbjDLEIe (ORCPT ); Wed, 12 Apr 2023 00:08:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43946 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229678AbjDLEIZ (ORCPT ); Wed, 12 Apr 2023 00:08:25 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 03BAC526F for ; Tue, 11 Apr 2023 21:08:11 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id A711762DAF for ; Wed, 12 Apr 2023 04:08:11 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id F170FC4339B; Wed, 12 Apr 2023 04:08:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1681272491; bh=ur2fieG9igI7SIYfbcenD6pQvds9yIlt5XkEZoVkq60=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=RzTqGwMiGTyDL2xHTVtyWUlPnOdF+Xwr9xPAi63wwR4OhLaH7FJohYeh6kYL2kJ3V JPeFDFrHgCQMXbKxjTEBwxkqA/cDwCFnfCMc9oVZ0MKnEb9x60Eaay6y4YgxzNFDbp jUJ651nOJjzdDfO5fRxKzO4EVWOTJ8PTqn8LFk86G4HMG3R8fLbwR2M/jNMSugWMUM f7ZLNHqynP2T75nyWfGBw5rguM5YL3ophcB211brnGPZYRpTL1csC5XBSUBvwk6t+r MqmHNsSkUvzmq5Ge4bXu0nEU7aUWLW0ebODH/wZzQ/EC0U02HDr7VoSKs+RLX8lKxR XdY/bm6vZUUOw== From: Saeed Mahameed To: "David S.
Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Yevgeny Kliteynik , Muhammad Sammar , Alex Vesker Subject: [net-next 12/15] net/mlx5: Add mlx5_ifc bits for modify header argument Date: Tue, 11 Apr 2023 21:07:49 -0700 Message-Id: <20230412040752.14220-13-saeed@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230412040752.14220-1-saeed@kernel.org> References: <20230412040752.14220-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Yevgeny Kliteynik Add enum value for modify-header argument object and mlx5_bits for the related capabilities. Signed-off-by: Muhammad Sammar Signed-off-by: Yevgeny Kliteynik Reviewed-by: Alex Vesker Signed-off-by: Saeed Mahameed --- include/linux/mlx5/mlx5_ifc.h | 28 +++++++++++++++++++++++++++- 1 file changed, 27 insertions(+), 1 deletion(-) diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h index 02c628f4fe26..6c84bf6eec85 100644 --- a/include/linux/mlx5/mlx5_ifc.h +++ b/include/linux/mlx5/mlx5_ifc.h @@ -78,12 +78,15 @@ enum { enum { MLX5_OBJ_TYPE_SW_ICM = 0x0008, + MLX5_OBJ_TYPE_HEADER_MODIFY_ARGUMENT = 0x23, }; enum { MLX5_GENERAL_OBJ_TYPES_CAP_SW_ICM = (1ULL << MLX5_OBJ_TYPE_SW_ICM), MLX5_GENERAL_OBJ_TYPES_CAP_GENEVE_TLV_OPT = (1ULL << 11), MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_NET_Q = (1ULL << 13), + MLX5_GENERAL_OBJ_TYPES_CAP_HEADER_MODIFY_ARGUMENT = + (1ULL << MLX5_OBJ_TYPE_HEADER_MODIFY_ARGUMENT), MLX5_GENERAL_OBJ_TYPES_CAP_MACSEC_OFFLOAD = (1ULL << 39), }; @@ -321,6 +324,10 @@ enum { MLX5_FT_NIC_TX_RDMA_2_NIC_TX = BIT(1), }; +enum { + MLX5_CMD_OP_MOD_UPDATE_HEADER_MODIFY_ARGUMENT = 0x1, +}; + struct mlx5_ifc_flow_table_fields_supported_bits { u8 outer_dmac[0x1]; u8 outer_smac[0x1]; @@ -1927,7 +1934,14 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 reserved_at_750[0x4]; u8 max_dynamic_vf_msix_table_size[0xc]; - u8 reserved_at_760[0x20]; + u8 reserved_at_760[0x3]; + u8 log_max_num_header_modify_argument[0x5]; + u8 reserved_at_768[0x4]; + u8 log_header_modify_argument_granularity[0x4]; + u8 reserved_at_770[0x3]; + u8 log_header_modify_argument_max_alloc[0x5]; + u8 reserved_at_778[0x8]; + u8 vhca_tunnel_commands[0x40]; u8 match_definer_format_supported[0x40]; }; @@ -6361,6 +6375,18 @@ struct mlx5_ifc_general_obj_out_cmd_hdr_bits { u8 reserved_at_60[0x20]; }; +struct mlx5_ifc_modify_header_arg_bits { + u8 reserved_at_0[0x80]; + + u8 reserved_at_80[0x8]; + u8 access_pd[0x18]; +}; + +struct mlx5_ifc_create_modify_header_arg_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_modify_header_arg_bits arg; +}; + struct mlx5_ifc_create_match_definer_in_bits { struct mlx5_ifc_general_obj_in_cmd_hdr_bits general_obj_in_cmd_hdr; From patchwork Wed Apr 12 04:07:50 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13208428 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C559DC7619A for ; Wed, 12 Apr 2023 04:08:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229759AbjDLEIg (ORCPT ); Wed, 12 Apr 2023 00:08:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43580 "EHLO 
lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229679AbjDLEIZ (ORCPT ); Wed, 12 Apr 2023 00:08:25 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A4A3B49D4 for ; Tue, 11 Apr 2023 21:08:12 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 8645262DA3 for ; Wed, 12 Apr 2023 04:08:12 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id DB6D0C433EF; Wed, 12 Apr 2023 04:08:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1681272492; bh=DX7QCqzGofF++JEIobzcrOcTiKJDOLC7MGXdfraJTrg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=LgI7YScVZGiTeMD1ycpniGxTiiuMUkgn3gjygne9sxqruPM3895ktpF0x/at4l6Z1 YDQ0efwulGM+C5xQ7bi7JUWwjTMjyH0wHOd0h90xgSRteqk2o4ZKthYg7LOkCxD4Gy 3e96kp2K5V4lQpG7p+AqdEHQZ914yEo3bT4Jg5zeQuKOSdz4dBYqh8ZQAciYDOJX1N NN0qHGxItIiKTZLThe/jg/0Ik4Uk988ZAbCOK7jwPs9N3wjOEPME+uioQeqHj8jvAL uU3a1Z/Q2+Imy/htqLHLZYN02BOPg/7rU7krnyfbTZ2HmAabWvOqBvR15dODxx/5bY MEAYgBzeaKSXw== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Yevgeny Kliteynik , Alex Vesker Subject: [net-next 13/15] net/mlx5: Add new WQE for updating flow table Date: Tue, 11 Apr 2023 21:07:50 -0700 Message-Id: <20230412040752.14220-14-saeed@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230412040752.14220-1-saeed@kernel.org> References: <20230412040752.14220-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Yevgeny Kliteynik Add a new WQE type: FLOW_TBL_ACCESS, which will be used for writing modify header arguments. This type has a specific control segment and a special data segment.
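For reference, the two new segments defined in the diff below are 48 and 64 bytes, which are whole multiples of the 16-byte blocks that WQE segments are expected to be built from. A small userspace sketch that mirrors the layout and checks the sizing; the kernel's __be32 is replaced by a plain uint32_t here, so this illustrates the layout only and is not the kernel definition itself:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Userspace mirror of the new control segment (4 + 4 + 40 = 48 bytes). */
struct flow_update_ctrl_seg {
	uint32_t flow_idx_update;
	uint32_t dest_handle;
	uint8_t reserved0[40];
};

/* Userspace mirror of the special data segment (64 bytes). */
struct header_modify_argument_update_seg {
	uint8_t argument_list[64];
};

/* WQE segments are sized in 16-byte units, so both layouts should be
 * 16-byte multiples. */
static_assert(sizeof(struct flow_update_ctrl_seg) % 16 == 0,
	      "ctrl segment size must be a multiple of 16 bytes");
static_assert(sizeof(struct header_modify_argument_update_seg) % 16 == 0,
	      "argument segment size must be a multiple of 16 bytes");

int main(void)
{
	printf("ctrl seg: %zu bytes, argument seg: %zu bytes\n",
	       sizeof(struct flow_update_ctrl_seg),
	       sizeof(struct header_modify_argument_update_seg));
	return 0;
}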
Signed-off-by: Yevgeny Kliteynik Reviewed-by: Alex Vesker Signed-off-by: Saeed Mahameed --- include/linux/mlx5/device.h | 2 ++ include/linux/mlx5/qp.h | 10 ++++++++++ 2 files changed, 12 insertions(+) diff --git a/include/linux/mlx5/device.h b/include/linux/mlx5/device.h index af4dd536a52c..e4aa147ab390 100644 --- a/include/linux/mlx5/device.h +++ b/include/linux/mlx5/device.h @@ -442,6 +442,8 @@ enum { MLX5_OPCODE_UMR = 0x25, + MLX5_OPCODE_FLOW_TBL_ACCESS = 0x2c, + MLX5_OPCODE_ACCESS_ASO = 0x2d, }; diff --git a/include/linux/mlx5/qp.h b/include/linux/mlx5/qp.h index df55fbb65717..bd53cf4be7bd 100644 --- a/include/linux/mlx5/qp.h +++ b/include/linux/mlx5/qp.h @@ -499,6 +499,16 @@ struct mlx5_stride_block_ctrl_seg { __be16 num_entries; }; +struct mlx5_wqe_flow_update_ctrl_seg { + __be32 flow_idx_update; + __be32 dest_handle; + u8 reserved0[40]; +}; + +struct mlx5_wqe_header_modify_argument_update_seg { + u8 argument_list[64]; +}; + struct mlx5_core_qp { struct mlx5_core_rsc_common common; /* must be first */ void (*event) (struct mlx5_core_qp *, int); From patchwork Wed Apr 12 04:07:51 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13208430 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 352C7C7619A for ; Wed, 12 Apr 2023 04:08:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229773AbjDLEIt (ORCPT ); Wed, 12 Apr 2023 00:08:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43610 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229720AbjDLEI0 (ORCPT ); Wed, 12 Apr 2023 00:08:26 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 880CA4C1B for ; Tue, 11 Apr 2023 21:08:13 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 695AD62DA5 for ; Wed, 12 Apr 2023 04:08:13 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id BCD39C433D2; Wed, 12 Apr 2023 04:08:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1681272492; bh=ypxZFrcT+gYp7fjTju/kCgC618tzHxZo5mFpe1G1ni0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=WOzEoqWUNhgnwBKqd2XU4RqVXEP4uS9fNOMXi8eYQ189NiYFeXZ/D87+zDolc0oe0 P5xcDcyie1r4ubS4wqLSCc9DTpK0mN21UqCyPvVLi7KoA5MFmkncyhf+ZrGX/rpIos 2cMVXXameAK8riK8i7kr/x3ORbFR/KpR2ADkz8l7636qb7jhEcgY3lfIBm0wr+FSdo 9JZkOnAh2MDcquU2cJ0Tu8iq9GWu5wKpD7YiBTN/T9ta7CWq64oIn9Yvcv4SSHvS/2 Ktb1K77/yylZX1cHI+S8gAD9ppuyUSqC6Np6dgPuiDGMFyygWeHurB/Bx8kshXtWu+ XLaO4wbSOG7gg== From: Saeed Mahameed To: "David S. 
Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Yevgeny Kliteynik , Alex Vesker Subject: [net-next 14/15] net/mlx5: DR, Prepare sending new WQE type Date: Tue, 11 Apr 2023 21:07:51 -0700 Message-Id: <20230412040752.14220-15-saeed@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230412040752.14220-1-saeed@kernel.org> References: <20230412040752.14220-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Yevgeny Kliteynik The send engine should be ready to handle more opcodes in addition to RDMA_WRITE/RDMA_READ. Signed-off-by: Yevgeny Kliteynik Reviewed-by: Alex Vesker Signed-off-by: Saeed Mahameed --- .../mellanox/mlx5/core/steering/dr_send.c | 60 ++++++++++++------- 1 file changed, 39 insertions(+), 21 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c index fd2d31cdbcf9..00bb65613300 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c @@ -18,7 +18,12 @@ struct dr_data_seg { unsigned int send_flags; }; +enum send_info_type { + WRITE_ICM = 0, +}; + struct postsend_info { + enum send_info_type type; struct dr_data_seg write; struct dr_data_seg read; u64 remote_addr; @@ -402,10 +407,12 @@ static void dr_rdma_segments(struct mlx5dr_qp *dr_qp, u64 remote_addr, static void dr_post_send(struct mlx5dr_qp *dr_qp, struct postsend_info *send_info) { - dr_rdma_segments(dr_qp, send_info->remote_addr, send_info->rkey, - &send_info->write, MLX5_OPCODE_RDMA_WRITE, false); - dr_rdma_segments(dr_qp, send_info->remote_addr, send_info->rkey, - &send_info->read, MLX5_OPCODE_RDMA_READ, true); + if (send_info->type == WRITE_ICM) { + dr_rdma_segments(dr_qp, send_info->remote_addr, send_info->rkey, + &send_info->write, MLX5_OPCODE_RDMA_WRITE, false); + dr_rdma_segments(dr_qp, send_info->remote_addr, send_info->rkey, + &send_info->read, MLX5_OPCODE_RDMA_READ, true); + } } /** @@ -476,9 +483,26 @@ static int dr_handle_pending_wc(struct mlx5dr_domain *dmn, return 0; } -static void dr_fill_data_segs(struct mlx5dr_send_ring *send_ring, - struct postsend_info *send_info) +static void dr_fill_write_icm_segs(struct mlx5dr_domain *dmn, + struct mlx5dr_send_ring *send_ring, + struct postsend_info *send_info) { + u32 buff_offset; + + if (send_info->write.length > dmn->info.max_inline_size) { + buff_offset = (send_ring->tx_head & + (dmn->send_ring->signal_th - 1)) * + send_ring->max_post_send_size; + /* Copy to ring mr */ + memcpy(send_ring->buf + buff_offset, + (void *)(uintptr_t)send_info->write.addr, + send_info->write.length); + send_info->write.addr = (uintptr_t)send_ring->mr->dma_addr + buff_offset; + send_info->write.lkey = send_ring->mr->mkey; + + send_ring->tx_head++; + } + send_ring->pending_wqe++; if (send_ring->pending_wqe % send_ring->signal_th == 0) @@ -496,11 +520,18 @@ static void dr_fill_data_segs(struct mlx5dr_send_ring *send_ring, send_info->read.send_flags = 0; } +static void dr_fill_data_segs(struct mlx5dr_domain *dmn, + struct mlx5dr_send_ring *send_ring, + struct postsend_info *send_info) +{ + if (send_info->type == WRITE_ICM) + dr_fill_write_icm_segs(dmn, send_ring, send_info); +} + static int dr_postsend_icm_data(struct mlx5dr_domain *dmn, struct postsend_info *send_info) { struct mlx5dr_send_ring *send_ring = dmn->send_ring; - u32 buff_offset; int ret; if 
(unlikely(dmn->mdev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR || @@ -517,20 +548,7 @@ static int dr_postsend_icm_data(struct mlx5dr_domain *dmn, if (ret) goto out_unlock; - if (send_info->write.length > dmn->info.max_inline_size) { - buff_offset = (send_ring->tx_head & - (dmn->send_ring->signal_th - 1)) * - send_ring->max_post_send_size; - /* Copy to ring mr */ - memcpy(send_ring->buf + buff_offset, - (void *)(uintptr_t)send_info->write.addr, - send_info->write.length); - send_info->write.addr = (uintptr_t)send_ring->mr->dma_addr + buff_offset; - send_info->write.lkey = send_ring->mr->mkey; - } - - send_ring->tx_head++; - dr_fill_data_segs(send_ring, send_info); + dr_fill_data_segs(dmn, send_ring, send_info); dr_post_send(send_ring->qp, send_info); out_unlock: From patchwork Wed Apr 12 04:07:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13208431 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C6D40C77B6E for ; Wed, 12 Apr 2023 04:08:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229799AbjDLEI6 (ORCPT ); Wed, 12 Apr 2023 00:08:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43932 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229736AbjDLEI1 (ORCPT ); Wed, 12 Apr 2023 00:08:27 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 990E659E4 for ; Tue, 11 Apr 2023 21:08:14 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 4776462D9E for ; Wed, 12 Apr 2023 04:08:14 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 988E7C4339B; Wed, 12 Apr 2023 04:08:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1681272493; bh=OAMbT/mTzE44ifStltscH+182C4xaXvhTgWl7RxTg1k=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=tGciMSbZb8/5t2v1eCLAXqJxoVjz8A5+M6wLdkV/5uYN9R4sCTm1hrDV0JV5M7X3U Ej6JTsLZ6snP9aL6qKFqFnTBqgB8jH/xy66lSQZdZ6Vt7ilpRZgVS9Ni/+UgC0jHWM nxxXxJO7yxgEL0rOW1yhJKxUViYecZs4EB4VNJFzCaZgMy231eViUk56uF57z7C2xp dQFhvv1kHvYrKjO1Sf3sHG5S+7NDm2NLarWbgwhg6tYiQvFu2Mg6iI2kJ973KnoFw4 bWdUl5xL1qe91meCv0phGpB/kSha7Exa0CnvnwC/+IrJdYBjK8dzOoeLdBInfgRFGg F1VzzZi4k3Llg== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Yevgeny Kliteynik , Muhammad Sammar , Alex Vesker Subject: [net-next 15/15] net/mlx5: DR, Add modify-header-pattern ICM pool Date: Tue, 11 Apr 2023 21:07:52 -0700 Message-Id: <20230412040752.14220-16-saeed@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230412040752.14220-1-saeed@kernel.org> References: <20230412040752.14220-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Yevgeny Kliteynik There is a new ICM area for that memory, so we need to handle it as we did for the other ICM types.
The patch added that specific pool with its requirements and management. Signed-off-by: Muhammad Sammar Signed-off-by: Yevgeny Kliteynik Reviewed-by: Alex Vesker Signed-off-by: Saeed Mahameed --- .../net/ethernet/mellanox/mlx5/core/Makefile | 2 +- .../mellanox/mlx5/core/steering/dr_cmd.c | 6 +++ .../mellanox/mlx5/core/steering/dr_domain.c | 45 +++++++++++++++++-- .../mellanox/mlx5/core/steering/dr_icm_pool.c | 41 ++++++++++++----- .../mellanox/mlx5/core/steering/dr_ptrn.c | 43 ++++++++++++++++++ .../mellanox/mlx5/core/steering/dr_types.h | 11 +++++ 6 files changed, 132 insertions(+), 16 deletions(-) create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ptrn.c diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Makefile b/drivers/net/ethernet/mellanox/mlx5/core/Makefile index 4a009b720bee..39c2c8dc7e07 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/Makefile +++ b/drivers/net/ethernet/mellanox/mlx5/core/Makefile @@ -112,7 +112,7 @@ mlx5_core-$(CONFIG_MLX5_SW_STEERING) += steering/dr_domain.o steering/dr_table.o steering/dr_ste_v2.o \ steering/dr_cmd.o steering/dr_fw.o \ steering/dr_action.o steering/fs_dr.o \ - steering/dr_definer.o \ + steering/dr_definer.o steering/dr_ptrn.o \ steering/dr_dbg.o lib/smfs.o # # SF device diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c index 07b6a6dcb92f..229f3684100c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c @@ -200,6 +200,12 @@ int mlx5dr_cmd_query_device(struct mlx5_core_dev *mdev, caps->hdr_modify_icm_addr = MLX5_CAP64_DEV_MEM(mdev, header_modify_sw_icm_start_address); + caps->log_modify_pattern_icm_size = + MLX5_CAP_DEV_MEM(mdev, log_header_modify_pattern_sw_icm_size); + + caps->hdr_modify_pattern_icm_addr = + MLX5_CAP64_DEV_MEM(mdev, header_modify_pattern_sw_icm_start_address); + caps->roce_min_src_udp = MLX5_CAP_ROCE(mdev, r_roce_min_src_udp_port); caps->is_ecpf = mlx5_core_is_ecpf_esw_manager(mdev); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c index 5b8bb2ca31e6..7a0381572c4c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c @@ -10,6 +10,33 @@ ((dmn)->info.caps.dmn_type##_sw_owner_v2 && \ (dmn)->info.caps.sw_format_ver <= MLX5_STEERING_FORMAT_CONNECTX_7)) +bool mlx5dr_domain_is_support_ptrn_arg(struct mlx5dr_domain *dmn) +{ + return false; +} + +static int dr_domain_init_modify_header_resources(struct mlx5dr_domain *dmn) +{ + if (!mlx5dr_domain_is_support_ptrn_arg(dmn)) + return 0; + + dmn->ptrn_mgr = mlx5dr_ptrn_mgr_create(dmn); + if (!dmn->ptrn_mgr) { + mlx5dr_err(dmn, "Couldn't create ptrn_mgr\n"); + return -ENOMEM; + } + + return 0; +} + +static void dr_domain_destroy_modify_header_resources(struct mlx5dr_domain *dmn) +{ + if (!mlx5dr_domain_is_support_ptrn_arg(dmn)) + return; + + mlx5dr_ptrn_mgr_destroy(dmn->ptrn_mgr); +} + static void dr_domain_init_csum_recalc_fts(struct mlx5dr_domain *dmn) { /* Per vport cached FW FT for checksum recalculation, this @@ -149,14 +176,22 @@ static int dr_domain_init_resources(struct mlx5dr_domain *dmn) goto clean_uar; } + ret = dr_domain_init_modify_header_resources(dmn); + if (ret) { + mlx5dr_err(dmn, "Couldn't create modify-header-resources\n"); + goto clean_mem_resources; + } + ret = mlx5dr_send_ring_alloc(dmn); if (ret) { 
mlx5dr_err(dmn, "Couldn't create send-ring\n"); - goto clean_mem_resources; + goto clean_modify_hdr; } return 0; +clean_modify_hdr: + dr_domain_destroy_modify_header_resources(dmn); clean_mem_resources: dr_domain_uninit_mem_resources(dmn); clean_uar: @@ -170,6 +205,7 @@ static int dr_domain_init_resources(struct mlx5dr_domain *dmn) static void dr_domain_uninit_resources(struct mlx5dr_domain *dmn) { mlx5dr_send_ring_free(dmn, dmn->send_ring); + dr_domain_destroy_modify_header_resources(dmn); dr_domain_uninit_mem_resources(dmn); mlx5_put_uars_page(dmn->mdev, dmn->uar); mlx5_core_dealloc_pd(dmn->mdev, dmn->pdn); @@ -215,7 +251,7 @@ static int dr_domain_query_vport(struct mlx5dr_domain *dmn, return 0; } -static int dr_domain_query_esw_mngr(struct mlx5dr_domain *dmn) +static int dr_domain_query_esw_mgr(struct mlx5dr_domain *dmn) { return dr_domain_query_vport(dmn, 0, false, &dmn->info.caps.vports.esw_manager_caps); @@ -321,7 +357,7 @@ static int dr_domain_query_fdb_caps(struct mlx5_core_dev *mdev, * vports (vport 0, VFs and SFs) will be queried dynamically. */ - ret = dr_domain_query_esw_mngr(dmn); + ret = dr_domain_query_esw_mgr(dmn); if (ret) { mlx5dr_err(dmn, "Failed to query eswitch manager vport caps (err: %d)", ret); goto free_vports_caps_xa; @@ -435,6 +471,9 @@ mlx5dr_domain_create(struct mlx5_core_dev *mdev, enum mlx5dr_domain_type type) dmn->info.max_log_action_icm_sz = DR_CHUNK_SIZE_4K; dmn->info.max_log_sw_icm_sz = min_t(u32, DR_CHUNK_SIZE_1024K, dmn->info.caps.log_icm_size); + dmn->info.max_log_modify_hdr_pattern_icm_sz = + min_t(u32, DR_CHUNK_SIZE_4K, + dmn->info.caps.log_modify_pattern_icm_size); if (!dmn->info.supp_sw_steering) { mlx5dr_err(dmn, "SW steering is not supported\n"); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c index 3eb6719bc8eb..04fc170a6c16 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c @@ -107,9 +107,9 @@ static struct mlx5dr_icm_mr * dr_icm_pool_mr_create(struct mlx5dr_icm_pool *pool) { struct mlx5_core_dev *mdev = pool->dmn->mdev; - enum mlx5_sw_icm_type dm_type; + enum mlx5_sw_icm_type dm_type = 0; struct mlx5dr_icm_mr *icm_mr; - size_t log_align_base; + size_t log_align_base = 0; int err; icm_mr = kvzalloc(sizeof(*icm_mr), GFP_KERNEL); @@ -121,14 +121,25 @@ dr_icm_pool_mr_create(struct mlx5dr_icm_pool *pool) icm_mr->dm.length = mlx5dr_icm_pool_chunk_size_to_byte(pool->max_log_chunk_sz, pool->icm_type); - if (pool->icm_type == DR_ICM_TYPE_STE) { + switch (pool->icm_type) { + case DR_ICM_TYPE_STE: dm_type = MLX5_SW_ICM_TYPE_STEERING; log_align_base = ilog2(icm_mr->dm.length); - } else { + break; + case DR_ICM_TYPE_MODIFY_ACTION: dm_type = MLX5_SW_ICM_TYPE_HEADER_MODIFY; /* Align base is 64B */ log_align_base = ilog2(DR_ICM_MODIFY_HDR_ALIGN_BASE); + break; + case DR_ICM_TYPE_MODIFY_HDR_PTRN: + dm_type = MLX5_SW_ICM_TYPE_HEADER_MODIFY_PATTERN; + /* Align base is 64B */ + log_align_base = ilog2(DR_ICM_MODIFY_HDR_ALIGN_BASE); + break; + default: + WARN_ON(pool->icm_type); } + icm_mr->dm.type = dm_type; err = mlx5_dm_sw_icm_alloc(mdev, icm_mr->dm.type, icm_mr->dm.length, @@ -493,27 +504,33 @@ struct mlx5dr_icm_pool *mlx5dr_icm_pool_create(struct mlx5dr_domain *dmn, enum mlx5dr_icm_type icm_type) { u32 num_of_chunks, entry_size, max_hot_size; - enum mlx5dr_icm_chunk_size max_log_chunk_sz; struct mlx5dr_icm_pool *pool; - if (icm_type == DR_ICM_TYPE_STE) - max_log_chunk_sz = 
dmn->info.max_log_sw_icm_sz; - else - max_log_chunk_sz = dmn->info.max_log_action_icm_sz; - pool = kvzalloc(sizeof(*pool), GFP_KERNEL); if (!pool) return NULL; pool->dmn = dmn; pool->icm_type = icm_type; - pool->max_log_chunk_sz = max_log_chunk_sz; pool->chunks_kmem_cache = dmn->chunks_kmem_cache; INIT_LIST_HEAD(&pool->buddy_mem_list); - mutex_init(&pool->mutex); + switch (icm_type) { + case DR_ICM_TYPE_STE: + pool->max_log_chunk_sz = dmn->info.max_log_sw_icm_sz; + break; + case DR_ICM_TYPE_MODIFY_ACTION: + pool->max_log_chunk_sz = dmn->info.max_log_action_icm_sz; + break; + case DR_ICM_TYPE_MODIFY_HDR_PTRN: + pool->max_log_chunk_sz = dmn->info.max_log_modify_hdr_pattern_icm_sz; + break; + default: + WARN_ON(icm_type); + } + entry_size = mlx5dr_icm_pool_dm_type_to_entry_size(pool->icm_type); max_hot_size = mlx5dr_icm_pool_chunk_size_to_byte(pool->max_log_chunk_sz, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ptrn.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ptrn.c new file mode 100644 index 000000000000..698e79d278bf --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ptrn.c @@ -0,0 +1,43 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +// Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved. + +#include "dr_types.h" + +struct mlx5dr_ptrn_mgr { + struct mlx5dr_domain *dmn; + struct mlx5dr_icm_pool *ptrn_icm_pool; +}; + +struct mlx5dr_ptrn_mgr *mlx5dr_ptrn_mgr_create(struct mlx5dr_domain *dmn) +{ + struct mlx5dr_ptrn_mgr *mgr; + + if (!mlx5dr_domain_is_support_ptrn_arg(dmn)) + return NULL; + + mgr = kzalloc(sizeof(*mgr), GFP_KERNEL); + if (!mgr) + return NULL; + + mgr->dmn = dmn; + mgr->ptrn_icm_pool = mlx5dr_icm_pool_create(dmn, DR_ICM_TYPE_MODIFY_HDR_PTRN); + if (!mgr->ptrn_icm_pool) { + mlx5dr_err(dmn, "Couldn't get modify-header-pattern memory\n"); + goto free_mgr; + } + + return mgr; + +free_mgr: + kfree(mgr); + return NULL; +} + +void mlx5dr_ptrn_mgr_destroy(struct mlx5dr_ptrn_mgr *mgr) +{ + if (!mgr) + return; + + mlx5dr_icm_pool_destroy(mgr->ptrn_icm_pool); + kfree(mgr); +} diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h index 2b769dcbd453..5b9faa714f42 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h @@ -26,6 +26,8 @@ #define mlx5dr_info(dmn, arg...) mlx5_core_info((dmn)->mdev, ##arg) #define mlx5dr_dbg(dmn, arg...) 
mlx5_core_dbg((dmn)->mdev, ##arg) +struct mlx5dr_ptrn_mgr; + static inline bool dr_is_flex_parser_0_id(u8 parser_id) { return parser_id <= DR_STE_MAX_FLEX_0_ID; @@ -66,6 +68,7 @@ enum mlx5dr_icm_chunk_size { enum mlx5dr_icm_type { DR_ICM_TYPE_STE, DR_ICM_TYPE_MODIFY_ACTION, + DR_ICM_TYPE_MODIFY_HDR_PTRN, }; static inline enum mlx5dr_icm_chunk_size @@ -861,6 +864,8 @@ struct mlx5dr_cmd_caps { u64 esw_tx_drop_address; u32 log_icm_size; u64 hdr_modify_icm_addr; + u32 log_modify_pattern_icm_size; + u64 hdr_modify_pattern_icm_addr; u32 flex_protocols; u8 flex_parser_id_icmp_dw0; u8 flex_parser_id_icmp_dw1; @@ -910,6 +915,7 @@ struct mlx5dr_domain_info { u32 max_send_wr; u32 max_log_sw_icm_sz; u32 max_log_action_icm_sz; + u32 max_log_modify_hdr_pattern_icm_sz; struct mlx5dr_domain_rx_tx rx; struct mlx5dr_domain_rx_tx tx; struct mlx5dr_cmd_caps caps; @@ -928,6 +934,7 @@ struct mlx5dr_domain { struct mlx5dr_send_info_pool *send_info_pool_tx; struct kmem_cache *chunks_kmem_cache; struct kmem_cache *htbls_kmem_cache; + struct mlx5dr_ptrn_mgr *ptrn_mgr; struct mlx5dr_send_ring *send_ring; struct mlx5dr_domain_info info; struct xarray csum_fts_xa; @@ -1526,4 +1533,8 @@ static inline bool mlx5dr_supp_match_ranges(struct mlx5_core_dev *dev) (1ULL << MLX5_IFC_DEFINER_FORMAT_ID_SELECT)); } +bool mlx5dr_domain_is_support_ptrn_arg(struct mlx5dr_domain *dmn); +struct mlx5dr_ptrn_mgr *mlx5dr_ptrn_mgr_create(struct mlx5dr_domain *dmn); +void mlx5dr_ptrn_mgr_destroy(struct mlx5dr_ptrn_mgr *mgr); + #endif /* _DR_TYPES_H_ */
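The mlx5dr_ptrn_mgr create/destroy pair added above follows a common lifecycle pattern: creation is gated on a support check (returned as false in this patch, so the pool is not yet used at runtime), and destroy tolerates a NULL manager so teardown paths can call it unconditionally. A minimal userspace sketch of that lifecycle, with illustrative stand-in names rather than the mlx5dr_* symbols themselves:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct icm_pool { int placeholder; };

struct ptrn_mgr {
	struct icm_pool *pool;
};

/* Stand-in for the capability check; this patch wires the real check
 * to false until the feature is fully enabled. */
static bool domain_supports_ptrn_arg(void)
{
	return false;
}

static struct ptrn_mgr *ptrn_mgr_create(void)
{
	struct ptrn_mgr *mgr;

	if (!domain_supports_ptrn_arg())
		return NULL;	/* unsupported: no manager is created */

	mgr = calloc(1, sizeof(*mgr));
	if (!mgr)
		return NULL;

	mgr->pool = calloc(1, sizeof(*mgr->pool));
	if (!mgr->pool) {
		free(mgr);	/* unwind partial construction */
		return NULL;
	}
	return mgr;
}

/* NULL-safe, so callers can invoke it unconditionally on teardown. */
static void ptrn_mgr_destroy(struct ptrn_mgr *mgr)
{
	if (!mgr)
		return;
	free(mgr->pool);
	free(mgr);
}

int main(void)
{
	struct ptrn_mgr *mgr = ptrn_mgr_create();

	printf("pattern manager %s\n", mgr ? "created" : "not supported");
	ptrn_mgr_destroy(mgr);
	return 0;
}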