From patchwork Fri Dec 2 20:10:22 2022
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13063180
From: Leon Romanovsky
To: Steffen Klassert
Cc: Leon Romanovsky, "David S. Miller", Eric Dumazet, Herbert Xu, Jakub Kicinski, netdev@vger.kernel.org, Bharat Bhushan, Saeed Mahameed
Subject: [PATCH xfrm-next 01/16] net/mlx5: Return ready to use ASO WQE
Date: Fri, 2 Dec 2022 22:10:22 +0200
Message-Id: <5bbd3960d71aa6c63398393561dfffd67ce43f14.1670011671.git.leonro@nvidia.com>

From: Leon Romanovsky

There is no need to hide the returned ASO WQE type behind a void pointer; return the real type instead. While at it, zero that memory, so the ASO WQE is ready to use immediately.
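For illustration, a simplified caller-side sketch of how the reworked accessor is meant to be used (not taken from the driver; identifiers as used elsewhere in this series):

	struct mlx5_aso_wqe *wqe;

	/* The accessor now returns a typed, already-zeroed WQE ... */
	wqe = mlx5_aso_get_wqe(aso);
	mlx5_aso_build_wqe(aso, ds_cnt, wqe, obj_id, MLX5_ACCESS_ASO_OPC_MOD_FLOW_METER);
	/* ... so callers only set the fields they need, with no extra memset(). */
	wqe->aso_ctrl.data_mask_mode = MLX5_ASO_DATA_MASK_MODE_BYTEWISE_64BYTE << 6;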
Reviewed-by: Saeed Mahameed Signed-off-by: Leon Romanovsky --- drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.c | 1 - drivers/net/ethernet/mellanox/mlx5/core/lib/aso.c | 7 +++++-- drivers/net/ethernet/mellanox/mlx5/core/lib/aso.h | 2 +- 3 files changed, 6 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.c index be74e1403328..25cd449e8aad 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.c @@ -162,7 +162,6 @@ mlx5e_tc_meter_modify(struct mlx5_core_dev *mdev, MLX5_ACCESS_ASO_OPC_MOD_FLOW_METER); aso_ctrl = &aso_wqe->aso_ctrl; - memset(aso_ctrl, 0, sizeof(*aso_ctrl)); aso_ctrl->data_mask_mode = MLX5_ASO_DATA_MASK_MODE_BYTEWISE_64BYTE << 6; aso_ctrl->condition_1_0_operand = MLX5_ASO_ALWAYS_TRUE | MLX5_ASO_ALWAYS_TRUE << 4; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.c index 0f9e4f01c85a..5a80fb7dbbca 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.c @@ -353,12 +353,15 @@ void mlx5_aso_build_wqe(struct mlx5_aso *aso, u8 ds_cnt, cseg->general_id = cpu_to_be32(obj_id); } -void *mlx5_aso_get_wqe(struct mlx5_aso *aso) +struct mlx5_aso_wqe *mlx5_aso_get_wqe(struct mlx5_aso *aso) { + struct mlx5_aso_wqe *wqe; u16 pi; pi = mlx5_wq_cyc_ctr2ix(&aso->wq, aso->pc); - return mlx5_wq_cyc_get_wqe(&aso->wq, pi); + wqe = mlx5_wq_cyc_get_wqe(&aso->wq, pi); + memset(wqe, 0, sizeof(*wqe)); + return wqe; } void mlx5_aso_post_wqe(struct mlx5_aso *aso, bool with_data, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.h index 2d40dcf9d42e..4312614bf3bc 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.h @@ -77,7 +77,7 @@ enum { struct mlx5_aso; -void *mlx5_aso_get_wqe(struct mlx5_aso *aso); +struct mlx5_aso_wqe *mlx5_aso_get_wqe(struct mlx5_aso *aso); void mlx5_aso_build_wqe(struct mlx5_aso *aso, u8 ds_cnt, struct mlx5_aso_wqe *aso_wqe, u32 obj_id, u32 opc_mode); From patchwork Fri Dec 2 20:10:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13063178 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E75A9C47088 for ; Fri, 2 Dec 2022 20:10:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234461AbiLBUKu (ORCPT ); Fri, 2 Dec 2022 15:10:50 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45182 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229583AbiLBUKt (ORCPT ); Fri, 2 Dec 2022 15:10:49 -0500 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9C1D1BE10A for ; Fri, 2 Dec 2022 12:10:48 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 37F486221E for ; Fri, 2 Dec 2022 20:10:48 +0000 (UTC) Received: by 
smtp.kernel.org (Postfix) with ESMTPSA id 1728DC433D6; Fri, 2 Dec 2022 20:10:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1670011847; bh=/UcOl2HaYIM5kMOlEOQsCFE5cfN9tpY0BXCGOKTNLIs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=bztO9UWwNgkUIKxq5uOcYMDPchxZzesKqdpWVsIW/iUq/hQYdsGAJKB7KVhfUn2Td v/h6eZbLD02xRh7PhHZ6sQ3OoDW2xRb0K2RUQlIha2Oo9MYESulH4Ejkxu5V3QQBlO 25gyE6QGt96IJz8bTSb8i+itegMRqO1DdVgNupEF5RHPUJrDFhbInF8LPexuklNLZF DvWFqRA2Xa0HZyketiOYPTEek1pxv83DH3XtD3HPCLVQ+R9l2hlv9+ui2HiGmO1tFI PMuhWd4p0t73BTC/heV3bMV42HYSFwLmYRc02JwN6z5s0cNwT/9GAAMrZN7n5EhRUE Gsougrrirj1yA== From: Leon Romanovsky To: Steffen Klassert Cc: Leon Romanovsky , "David S. Miller" , Eric Dumazet , Herbert Xu , Jakub Kicinski , netdev@vger.kernel.org, Bharat Bhushan , Raed Salem , Saeed Mahameed Subject: [PATCH xfrm-next 02/16] net/mlx5: Add HW definitions for IPsec packet offload Date: Fri, 2 Dec 2022 22:10:23 +0200 Message-Id: X-Mailer: git-send-email 2.38.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Leon Romanovsky Add all needed bits to support IPsec packet offload mode. Reviewed-by: Raed Salem Reviewed-by: Saeed Mahameed Signed-off-by: Leon Romanovsky --- .../net/ethernet/mellanox/mlx5/core/lib/aso.h | 1 + include/linux/mlx5/mlx5_ifc.h | 53 +++++++++++++++++-- 2 files changed, 51 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.h index 4312614bf3bc..c8fc3c838642 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.h @@ -71,6 +71,7 @@ enum { }; enum { + MLX5_ACCESS_ASO_OPC_MOD_IPSEC = 0x0, MLX5_ACCESS_ASO_OPC_MOD_FLOW_METER = 0x2, MLX5_ACCESS_ASO_OPC_MOD_MACSEC = 0x5, }; diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h index 5a4e914e2a6f..300b56ea5ff4 100644 --- a/include/linux/mlx5/mlx5_ifc.h +++ b/include/linux/mlx5/mlx5_ifc.h @@ -445,7 +445,10 @@ struct mlx5_ifc_flow_table_prop_layout_bits { u8 max_modify_header_actions[0x8]; u8 max_ft_level[0x8]; - u8 reserved_at_40[0x6]; + u8 reformat_add_esp_trasport[0x1]; + u8 reserved_at_41[0x2]; + u8 reformat_del_esp_trasport[0x1]; + u8 reserved_at_44[0x2]; u8 execute_aso[0x1]; u8 reserved_at_47[0x19]; @@ -638,8 +641,10 @@ struct mlx5_ifc_fte_match_set_misc2_bits { u8 reserved_at_1a0[0x8]; u8 macsec_syndrome[0x8]; + u8 ipsec_syndrome[0x8]; + u8 reserved_at_1b8[0x8]; - u8 reserved_at_1b0[0x50]; + u8 reserved_at_1c0[0x40]; }; struct mlx5_ifc_fte_match_set_misc3_bits { @@ -6384,6 +6389,9 @@ enum mlx5_reformat_ctx_type { MLX5_REFORMAT_TYPE_L2_TO_L2_TUNNEL = 0x2, MLX5_REFORMAT_TYPE_L3_TUNNEL_TO_L2 = 0x3, MLX5_REFORMAT_TYPE_L2_TO_L3_TUNNEL = 0x4, + MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_OVER_IPV4 = 0x5, + MLX5_REFORMAT_TYPE_DEL_ESP_TRANSPORT = 0x8, + MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_OVER_IPV6 = 0xb, MLX5_REFORMAT_TYPE_INSERT_HDR = 0xf, MLX5_REFORMAT_TYPE_REMOVE_HDR = 0x10, MLX5_REFORMAT_TYPE_ADD_MACSEC = 0x11, @@ -11563,6 +11571,41 @@ enum { MLX5_IPSEC_OBJECT_ICV_LEN_16B, }; +enum { + MLX5_IPSEC_ASO_REG_C_0_1 = 0x0, + MLX5_IPSEC_ASO_REG_C_2_3 = 0x1, + MLX5_IPSEC_ASO_REG_C_4_5 = 0x2, + MLX5_IPSEC_ASO_REG_C_6_7 = 0x3, +}; + +enum { + MLX5_IPSEC_ASO_MODE = 0x0, + MLX5_IPSEC_ASO_REPLAY_PROTECTION = 0x1, + MLX5_IPSEC_ASO_INC_SN = 0x2, +}; + +struct mlx5_ifc_ipsec_aso_bits { + u8 valid[0x1]; + u8 reserved_at_201[0x1]; + u8 
mode[0x2]; + u8 window_sz[0x2]; + u8 soft_lft_arm[0x1]; + u8 hard_lft_arm[0x1]; + u8 remove_flow_enable[0x1]; + u8 esn_event_arm[0x1]; + u8 reserved_at_20a[0x16]; + + u8 remove_flow_pkt_cnt[0x20]; + + u8 remove_flow_soft_lft[0x20]; + + u8 reserved_at_260[0x80]; + + u8 mode_parameter[0x20]; + + u8 replay_protection_window[0x100]; +}; + struct mlx5_ifc_ipsec_obj_bits { u8 modify_field_select[0x40]; u8 full_offload[0x1]; @@ -11584,7 +11627,11 @@ struct mlx5_ifc_ipsec_obj_bits { u8 implicit_iv[0x40]; - u8 reserved_at_100[0x700]; + u8 reserved_at_100[0x8]; + u8 ipsec_aso_access_pd[0x18]; + u8 reserved_at_120[0xe0]; + + struct mlx5_ifc_ipsec_aso_bits ipsec_aso; }; struct mlx5_ifc_create_ipsec_obj_in_bits { From patchwork Fri Dec 2 20:10:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13063179 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DB0C0C4332F for ; Fri, 2 Dec 2022 20:10:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234659AbiLBUK4 (ORCPT ); Fri, 2 Dec 2022 15:10:56 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45324 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234462AbiLBUKz (ORCPT ); Fri, 2 Dec 2022 15:10:55 -0500 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A6A3FBE126 for ; Fri, 2 Dec 2022 12:10:54 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 44AB6B8228E for ; Fri, 2 Dec 2022 20:10:53 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 59BBEC433D6; Fri, 2 Dec 2022 20:10:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1670011851; bh=y9rGafyKvPlvJeO/9XrNoLzi0uHvOe1ytZLFBH/Ok+s=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=lhaGEJm6345bL7oSTLJWhW6nKAzis3si+oTVr7FqU7zbKM/QOFjLb8HaRgZjjD9XL D5z+vZa2Cq+bIKC6Up0X/VLP4zJmaB97CqBUEnXY99jXz2mf8Ge1i7po6bDXDV5Bjz 8DWQXTY8p7K57CMwPRGZaq0FXNQ0wr66OEsTGczhM6hVIWNLIaOIu15cx+CPwZpc3l dwCJEg+mo5VPiCbLYu/8cjrdDdGHpIVrg7hkTeN+Nm4oFKMsidfRrHazkZvQhZyFZm SlcqIH5vq4pQwt0kgsxxf8CBafF7Jiv33Vgrx822sUZCvCDmC1sEzRlczB72KFXvqP 1bOIHJUuO5DLw== From: Leon Romanovsky To: Steffen Klassert Cc: Leon Romanovsky , "David S. Miller" , Eric Dumazet , Herbert Xu , Jakub Kicinski , netdev@vger.kernel.org, Bharat Bhushan , Raed Salem , Saeed Mahameed Subject: [PATCH xfrm-next 03/16] net/mlx5e: Advertise IPsec packet offload support Date: Fri, 2 Dec 2022 22:10:24 +0200 Message-Id: <71077abb6b10fb5ec6f8caec56fa8181db093a82.1670011671.git.leonro@nvidia.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Leon Romanovsky Add needed capabilities check to determine if device supports IPsec packet offload mode. 
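For illustration, a hypothetical consumer of the new capability bit (not code from this patch): the rest of the series can gate packet-offload-only paths on it, roughly like:

	u32 caps = mlx5_ipsec_device_caps(mdev);

	if (caps & MLX5_IPSEC_CAP_PACKET_OFFLOAD) {
		/* device can run the full IPsec datapath (packet offload) */
	} else if (caps & MLX5_IPSEC_CAP_CRYPTO) {
		/* crypto-only offload; the stack still handles encap/decap and policy */
	}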
Reviewed-by: Raed Salem Reviewed-by: Saeed Mahameed Signed-off-by: Leon Romanovsky --- drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h | 1 + .../ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c | 6 ++++++ 2 files changed, 7 insertions(+) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h index 4c47347d0ee2..fa052a89c4dd 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h @@ -88,6 +88,7 @@ struct mlx5_accel_esp_xfrm_attrs { enum mlx5_ipsec_cap { MLX5_IPSEC_CAP_CRYPTO = 1 << 0, MLX5_IPSEC_CAP_ESN = 1 << 1, + MLX5_IPSEC_CAP_PACKET_OFFLOAD = 1 << 2, }; struct mlx5e_priv; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c index 792724ce7336..3d5a1f875398 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c @@ -31,6 +31,12 @@ u32 mlx5_ipsec_device_caps(struct mlx5_core_dev *mdev) MLX5_CAP_ETH(mdev, insert_trailer) && MLX5_CAP_ETH(mdev, swp)) caps |= MLX5_IPSEC_CAP_CRYPTO; + if (MLX5_CAP_IPSEC(mdev, ipsec_full_offload) && + MLX5_CAP_FLOWTABLE_NIC_TX(mdev, reformat_add_esp_trasport) && + MLX5_CAP_FLOWTABLE_NIC_RX(mdev, reformat_del_esp_trasport) && + MLX5_CAP_FLOWTABLE_NIC_RX(mdev, decap)) + caps |= MLX5_IPSEC_CAP_PACKET_OFFLOAD; + if (!caps) return 0; From patchwork Fri Dec 2 20:10:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13063183 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E8E29C4332F for ; Fri, 2 Dec 2022 20:11:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234848AbiLBULW (ORCPT ); Fri, 2 Dec 2022 15:11:22 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45652 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234847AbiLBULR (ORCPT ); Fri, 2 Dec 2022 15:11:17 -0500 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0643DF4E95 for ; Fri, 2 Dec 2022 12:11:11 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id A80E0B8228D for ; Fri, 2 Dec 2022 20:11:09 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id E1576C433C1; Fri, 2 Dec 2022 20:11:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1670011868; bh=DkDh/Nj5EH6cjDbVROQy87TUgU5jqCpfnQV39Ds1a4A=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ZfkRQuVU2152QqOyLYudwB5dx4vqh4pcpnuX5Qrc0YkmhRc41FUfN8SB8+iE5TOV2 2edcOctZsoc15+QtdsXyahWaMqdhMLPKrj1+3ORLGOAI4nLMRV1jKxTPr2D+7zXE9k U6z7tiT7n3QF/3Vd6WllkTtoOauZY0mMZwIBe+NMw6uMaz9pTugxZEOEr102xIF0L2 uUfdEecIg8aM5WJkmCCjOtZ4Ywr0K4sYO5Uw7Atb/mIvhrlcnt2DnNtxFcXkcNTIUA AUvgd+GCd316wzEzC4DC3flcPS7JNLjSeyEUDSj9OqGzsi9exONr//wP6TcTa2pWUo ri6MrOizy0Fng== From: Leon Romanovsky To: Steffen Klassert Cc: Leon 
Romanovsky , "David S. Miller" , Eric Dumazet , Herbert Xu , Jakub Kicinski , netdev@vger.kernel.org, Bharat Bhushan , Raed Salem , Saeed Mahameed Subject: [PATCH xfrm-next 04/16] net/mlx5e: Store replay window in XFRM attributes Date: Fri, 2 Dec 2022 22:10:25 +0200 Message-Id: X-Mailer: git-send-email 2.38.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Leon Romanovsky As a preparation for future extension of IPsec hardware object to allow configuration of packet offload mode, extend the XFRM validator to check replay window values. Reviewed-by: Raed Salem Reviewed-by: Saeed Mahameed Signed-off-by: Leon Romanovsky --- .../net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c | 12 ++++++++++++ .../net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h | 1 + 2 files changed, 13 insertions(+) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c index e6411533f911..734b486db5d6 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c @@ -166,6 +166,7 @@ mlx5e_ipsec_build_accel_xfrm_attrs(struct mlx5e_ipsec_sa_entry *sa_entry, attrs->esn = sa_entry->esn_state.esn; if (sa_entry->esn_state.overlap) attrs->flags |= MLX5_ACCEL_ESP_FLAGS_ESN_STATE_OVERLAP; + attrs->replay_window = x->replay_esn->replay_window; } /* action */ @@ -257,6 +258,17 @@ static inline int mlx5e_xfrm_validate_state(struct xfrm_state *x) netdev_info(netdev, "Unsupported xfrm offload type\n"); return -EINVAL; } + if (x->xso.type == XFRM_DEV_OFFLOAD_PACKET) { + if (x->replay_esn && x->replay_esn->replay_window != 32 && + x->replay_esn->replay_window != 64 && + x->replay_esn->replay_window != 128 && + x->replay_esn->replay_window != 256) { + netdev_info(netdev, + "Unsupported replay window size %u\n", + x->replay_esn->replay_window); + return -EINVAL; + } + } return 0; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h index fa052a89c4dd..6fe55675bee9 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h @@ -83,6 +83,7 @@ struct mlx5_accel_esp_xfrm_attrs { } daddr; u8 is_ipv6; + u32 replay_window; }; enum mlx5_ipsec_cap { From patchwork Fri Dec 2 20:10:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13063181 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BA309C4332F for ; Fri, 2 Dec 2022 20:11:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234410AbiLBULJ (ORCPT ); Fri, 2 Dec 2022 15:11:09 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45324 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234685AbiLBULF (ORCPT ); Fri, 2 Dec 2022 15:11:05 -0500 Received: from sin.source.kernel.org (sin.source.kernel.org [IPv6:2604:1380:40e1:4800::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BAD5EBE126 for ; Fri, 2 Dec 2022 12:11:03 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) 
(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sin.source.kernel.org (Postfix) with ESMTPS id C4635CE1FA5 for ; Fri, 2 Dec 2022 20:11:01 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8507AC433C1; Fri, 2 Dec 2022 20:10:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1670011860; bh=xzxiK7zwX7e0BR4TIgxLs6ousyvgr9Nx+R7Pa40ZvcY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=tSvuUO3ny6c74AH9qVqmRgTSDT3VwgafloYfxYMlvpv93j1ej/BDHRpyAGWsSL/Ty FtynOBeUvQ0ZIyl7eiGUpiGwTX4gUGG3+1KQxDxpUpRp+Soc0km99RuNyfM4FfqBie VjimuOb3QmcQWok+KAw/3PwEzkQnKGkAMvS1RGYXAzPquan3Q3IfVt9tDlPRRoUFVc qdWVCvrW6a1biPP/Bh1cgQgNBJ0U/XoYwlL8YahYG/BbYdoha2oeOKP37uBnhApvCn SXGGL7Ul9ky9W5RtQC/4m+jVNhSJ3gO0J37OOw67baaiBwbwmvA9NwtShajtcI/dSI l32pZPN/6qdhg== From: Leon Romanovsky To: Steffen Klassert Cc: Leon Romanovsky , "David S. Miller" , Eric Dumazet , Herbert Xu , Jakub Kicinski , netdev@vger.kernel.org, Bharat Bhushan , Raed Salem , Saeed Mahameed Subject: [PATCH xfrm-next 05/16] net/mlx5e: Remove extra layers of defines Date: Fri, 2 Dec 2022 22:10:26 +0200 Message-Id: <21697336425b13c8c5765d392b4400270859a4b9.1670011671.git.leonro@nvidia.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Leon Romanovsky Instead of performing redefinition of XFRM core defines to same values but with MLX5_* prefix, cache the input values as is by making sure that the proper storage objects are used. Reviewed-by: Raed Salem Reviewed-by: Saeed Mahameed Signed-off-by: Leon Romanovsky --- .../mellanox/mlx5/core/en_accel/ipsec.c | 17 ++++----------- .../mellanox/mlx5/core/en_accel/ipsec.h | 18 ++++------------ .../mellanox/mlx5/core/en_accel/ipsec_fs.c | 21 ++++++++++--------- .../mlx5/core/en_accel/ipsec_offload.c | 10 ++++----- 4 files changed, 23 insertions(+), 43 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c index 734b486db5d6..14ed72b26bbe 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c @@ -162,29 +162,20 @@ mlx5e_ipsec_build_accel_xfrm_attrs(struct mlx5e_ipsec_sa_entry *sa_entry, /* esn */ if (sa_entry->esn_state.trigger) { - attrs->flags |= MLX5_ACCEL_ESP_FLAGS_ESN_TRIGGERED; + attrs->esn_trigger = true; attrs->esn = sa_entry->esn_state.esn; - if (sa_entry->esn_state.overlap) - attrs->flags |= MLX5_ACCEL_ESP_FLAGS_ESN_STATE_OVERLAP; + attrs->esn_overlap = sa_entry->esn_state.overlap; attrs->replay_window = x->replay_esn->replay_window; } - /* action */ - attrs->action = (x->xso.dir == XFRM_DEV_OFFLOAD_OUT) ? - MLX5_ACCEL_ESP_ACTION_ENCRYPT : - MLX5_ACCEL_ESP_ACTION_DECRYPT; - /* flags */ - attrs->flags |= (x->props.mode == XFRM_MODE_TRANSPORT) ? 
- MLX5_ACCEL_ESP_FLAGS_TRANSPORT : - MLX5_ACCEL_ESP_FLAGS_TUNNEL; - + attrs->dir = x->xso.dir; /* spi */ attrs->spi = be32_to_cpu(x->id.spi); /* source , destination ips */ memcpy(&attrs->saddr, x->props.saddr.a6, sizeof(attrs->saddr)); memcpy(&attrs->daddr, x->id.daddr.a6, sizeof(attrs->daddr)); - attrs->is_ipv6 = (x->props.family != AF_INET); + attrs->family = x->props.family; } static inline int mlx5e_xfrm_validate_state(struct xfrm_state *x) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h index 6fe55675bee9..05790b7d062f 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h @@ -43,18 +43,6 @@ #define MLX5E_IPSEC_SADB_RX_BITS 10 #define MLX5E_IPSEC_ESN_SCOPE_MID 0x80000000L -enum mlx5_accel_esp_flags { - MLX5_ACCEL_ESP_FLAGS_TUNNEL = 0, /* Default */ - MLX5_ACCEL_ESP_FLAGS_TRANSPORT = 1UL << 0, - MLX5_ACCEL_ESP_FLAGS_ESN_TRIGGERED = 1UL << 1, - MLX5_ACCEL_ESP_FLAGS_ESN_STATE_OVERLAP = 1UL << 2, -}; - -enum mlx5_accel_esp_action { - MLX5_ACCEL_ESP_ACTION_DECRYPT, - MLX5_ACCEL_ESP_ACTION_ENCRYPT, -}; - struct aes_gcm_keymat { u64 seq_iv; @@ -66,7 +54,6 @@ struct aes_gcm_keymat { }; struct mlx5_accel_esp_xfrm_attrs { - enum mlx5_accel_esp_action action; u32 esn; u32 spi; u32 flags; @@ -82,7 +69,10 @@ struct mlx5_accel_esp_xfrm_attrs { __be32 a6[4]; } daddr; - u8 is_ipv6; + u8 dir : 2; + u8 esn_overlap : 1; + u8 esn_trigger : 1; + u8 family; u32 replay_window; }; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c index b859e4a4c744..886263228b5d 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c @@ -341,7 +341,7 @@ static void setup_fte_common(struct mlx5_accel_esp_xfrm_attrs *attrs, struct mlx5_flow_spec *spec, struct mlx5_flow_act *flow_act) { - u8 ip_version = attrs->is_ipv6 ? 6 : 4; + u8 ip_version = (attrs->family == AF_INET) ? 4 : 6; spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS | MLX5_MATCH_MISC_PARAMETERS; @@ -411,7 +411,7 @@ static int rx_add_rule(struct mlx5e_priv *priv, int err = 0; accel_esp = priv->ipsec->rx_fs; - type = attrs->is_ipv6 ? ACCEL_FS_ESP6 : ACCEL_FS_ESP4; + type = (attrs->family == AF_INET) ? 
ACCEL_FS_ESP4 : ACCEL_FS_ESP6; fs_prot = &accel_esp->fs_prot[type]; err = rx_ft_get(priv, type); @@ -453,8 +453,8 @@ static int rx_add_rule(struct mlx5e_priv *priv, rule = mlx5_add_flow_rules(fs_prot->ft, spec, &flow_act, &dest, 1); if (IS_ERR(rule)) { err = PTR_ERR(rule); - netdev_err(priv->netdev, "fail to add ipsec rule attrs->action=0x%x, err=%d\n", - attrs->action, err); + netdev_err(priv->netdev, "fail to add RX ipsec rule err=%d\n", + err); goto out_err; } @@ -505,8 +505,8 @@ static int tx_add_rule(struct mlx5e_priv *priv, rule = mlx5_add_flow_rules(priv->ipsec->tx_fs->ft, spec, &flow_act, NULL, 0); if (IS_ERR(rule)) { err = PTR_ERR(rule); - netdev_err(priv->netdev, "fail to add ipsec rule attrs->action=0x%x, err=%d\n", - sa_entry->attrs.action, err); + netdev_err(priv->netdev, "fail to add TX ipsec rule err=%d\n", + err); goto out; } @@ -522,7 +522,7 @@ static int tx_add_rule(struct mlx5e_priv *priv, int mlx5e_accel_ipsec_fs_add_rule(struct mlx5e_priv *priv, struct mlx5e_ipsec_sa_entry *sa_entry) { - if (sa_entry->attrs.action == MLX5_ACCEL_ESP_ACTION_ENCRYPT) + if (sa_entry->attrs.dir == XFRM_DEV_OFFLOAD_OUT) return tx_add_rule(priv, sa_entry); return rx_add_rule(priv, sa_entry); @@ -533,17 +533,18 @@ void mlx5e_accel_ipsec_fs_del_rule(struct mlx5e_priv *priv, { struct mlx5e_ipsec_rule *ipsec_rule = &sa_entry->ipsec_rule; struct mlx5_core_dev *mdev = mlx5e_ipsec_sa2dev(sa_entry); + enum accel_fs_esp_type type; mlx5_del_flow_rules(ipsec_rule->rule); - if (sa_entry->attrs.action == MLX5_ACCEL_ESP_ACTION_ENCRYPT) { + if (sa_entry->attrs.dir == XFRM_DEV_OFFLOAD_OUT) { tx_ft_put(priv); return; } mlx5_modify_header_dealloc(mdev, ipsec_rule->set_modify_hdr); - rx_ft_put(priv, - sa_entry->attrs.is_ipv6 ? ACCEL_FS_ESP6 : ACCEL_FS_ESP4); + type = (sa_entry->attrs.family == AF_INET) ? 
ACCEL_FS_ESP4 : ACCEL_FS_ESP6; + rx_ft_put(priv, type); } void mlx5e_accel_ipsec_fs_cleanup(struct mlx5e_ipsec *ipsec) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c index 3d5a1f875398..3f2aeb07ea84 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c @@ -72,11 +72,10 @@ static int mlx5_create_ipsec_obj(struct mlx5e_ipsec_sa_entry *sa_entry) salt_iv_p = MLX5_ADDR_OF(ipsec_obj, obj, implicit_iv); memcpy(salt_iv_p, &aes_gcm->seq_iv, sizeof(aes_gcm->seq_iv)); /* esn */ - if (attrs->flags & MLX5_ACCEL_ESP_FLAGS_ESN_TRIGGERED) { + if (attrs->esn_trigger) { MLX5_SET(ipsec_obj, obj, esn_en, 1); MLX5_SET(ipsec_obj, obj, esn_msb, attrs->esn); - if (attrs->flags & MLX5_ACCEL_ESP_FLAGS_ESN_STATE_OVERLAP) - MLX5_SET(ipsec_obj, obj, esn_overlap, 1); + MLX5_SET(ipsec_obj, obj, esn_overlap, attrs->esn_overlap); } MLX5_SET(ipsec_obj, obj, dekn, sa_entry->enc_key_id); @@ -158,7 +157,7 @@ static int mlx5_modify_ipsec_obj(struct mlx5e_ipsec_sa_entry *sa_entry, void *obj; int err; - if (!(attrs->flags & MLX5_ACCEL_ESP_FLAGS_ESN_TRIGGERED)) + if (!attrs->esn_trigger) return 0; general_obj_types = MLX5_CAP_GEN_64(mdev, general_obj_types); @@ -189,8 +188,7 @@ static int mlx5_modify_ipsec_obj(struct mlx5e_ipsec_sa_entry *sa_entry, MLX5_MODIFY_IPSEC_BITMASK_ESN_OVERLAP | MLX5_MODIFY_IPSEC_BITMASK_ESN_MSB); MLX5_SET(ipsec_obj, obj, esn_msb, attrs->esn); - if (attrs->flags & MLX5_ACCEL_ESP_FLAGS_ESN_STATE_OVERLAP) - MLX5_SET(ipsec_obj, obj, esn_overlap, 1); + MLX5_SET(ipsec_obj, obj, esn_overlap, attrs->esn_overlap); /* general object fields set */ MLX5_SET(general_obj_in_cmd_hdr, in, opcode, MLX5_CMD_OP_MODIFY_GENERAL_OBJECT); From patchwork Fri Dec 2 20:10:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13063182 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 34028C47088 for ; Fri, 2 Dec 2022 20:11:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234838AbiLBULT (ORCPT ); Fri, 2 Dec 2022 15:11:19 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45638 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234812AbiLBULQ (ORCPT ); Fri, 2 Dec 2022 15:11:16 -0500 Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D90BDF140A for ; Fri, 2 Dec 2022 12:11:07 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sin.source.kernel.org (Postfix) with ESMTPS id 2201CCE1FA5 for ; Fri, 2 Dec 2022 20:11:06 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id CE513C433C1; Fri, 2 Dec 2022 20:11:03 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1670011864; bh=FuDQCNEjCV89VCeI5HJPE0tt64WwyDwKNaEvp+Din6E=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=R7+iGXgoI3VlKKITWOQ9/K2sVKNJk2V/2iDWk1XsqjZFKeE7zdWvT5arjtr7Q4ZX2 
fI7qAz76FJVTj6MB12lnvtjjSo7jXwHTL8Zv2T1D0tQ+fpjvgdoxXQf3v6XMeDMrh4 fXWHb+wn9g/pfcgODmU/Q3gA8oav9Esd6YFyWCcqlb6dhDiGp4J749r5X9L3oOSmGp pe1vlC452GRq5st6cp9f8anobTHIlVvyjDah4MBZBQZ+CmIjzjgwidPBorgvGlE1E5 Z0ZZB5hPKYNDOPwuCjzFq3djhCF24zM9BFXO1H8dsFuUPgc2Jbd9FlpoIEqyAxVpiw jXhCj4uignpOw== From: Leon Romanovsky To: Steffen Klassert Cc: Leon Romanovsky , "David S. Miller" , Eric Dumazet , Herbert Xu , Jakub Kicinski , netdev@vger.kernel.org, Bharat Bhushan , Raed Salem , Saeed Mahameed Subject: [PATCH xfrm-next 06/16] net/mlx5e: Create symmetric IPsec RX and TX flow steering structs Date: Fri, 2 Dec 2022 22:10:27 +0200 Message-Id: X-Mailer: git-send-email 2.38.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Leon Romanovsky Remove AF family obfuscation by creating symmetric structs for RX and TX IPsec flow steering chains. This simplifies to us low level IPsec FS creation logic without need to dig into multiple levels of structs. Reviewed-by: Raed Salem Reviewed-by: Saeed Mahameed Signed-off-by: Leon Romanovsky --- .../mellanox/mlx5/core/en_accel/ipsec.h | 7 +- .../mellanox/mlx5/core/en_accel/ipsec_fs.c | 277 ++++++++---------- 2 files changed, 130 insertions(+), 154 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h index 05790b7d062f..6b961ff08ed7 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h @@ -94,7 +94,7 @@ struct mlx5e_ipsec_sw_stats { atomic64_t ipsec_tx_drop_trailer; }; -struct mlx5e_accel_fs_esp; +struct mlx5e_ipsec_rx; struct mlx5e_ipsec_tx; struct mlx5e_ipsec { @@ -103,8 +103,9 @@ struct mlx5e_ipsec { spinlock_t sadb_rx_lock; /* Protects sadb_rx */ struct mlx5e_ipsec_sw_stats sw_stats; struct workqueue_struct *wq; - struct mlx5e_accel_fs_esp *rx_fs; - struct mlx5e_ipsec_tx *tx_fs; + struct mlx5e_ipsec_rx *rx_ipv4; + struct mlx5e_ipsec_rx *rx_ipv6; + struct mlx5e_ipsec_tx *tx; }; struct mlx5e_ipsec_esn_state { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c index 886263228b5d..a8cf3f8d0515 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c @@ -9,49 +9,40 @@ #define NUM_IPSEC_FTE BIT(15) -enum accel_fs_esp_type { - ACCEL_FS_ESP4, - ACCEL_FS_ESP6, - ACCEL_FS_ESP_NUM_TYPES, -}; - struct mlx5e_ipsec_rx_err { struct mlx5_flow_table *ft; struct mlx5_flow_handle *rule; struct mlx5_modify_hdr *copy_modify_hdr; }; -struct mlx5e_accel_fs_esp_prot { - struct mlx5_flow_table *ft; +struct mlx5e_ipsec_ft { + struct mutex mutex; /* Protect changes to this struct */ + struct mlx5_flow_table *sa; + u32 refcnt; +}; + +struct mlx5e_ipsec_rx { + struct mlx5e_ipsec_ft ft; struct mlx5_flow_group *miss_group; struct mlx5_flow_handle *miss_rule; struct mlx5_flow_destination default_dest; struct mlx5e_ipsec_rx_err rx_err; - u32 refcnt; - struct mutex prot_mutex; /* protect ESP4/ESP6 protocol */ -}; - -struct mlx5e_accel_fs_esp { - struct mlx5e_accel_fs_esp_prot fs_prot[ACCEL_FS_ESP_NUM_TYPES]; }; struct mlx5e_ipsec_tx { + struct mlx5e_ipsec_ft ft; struct mlx5_flow_namespace *ns; - struct mlx5_flow_table *ft; - struct mutex mutex; /* Protect IPsec TX steering */ - u32 refcnt; }; /* IPsec RX flow steering */ -static enum mlx5_traffic_types fs_esp2tt(enum 
accel_fs_esp_type i) +static enum mlx5_traffic_types family2tt(u32 family) { - if (i == ACCEL_FS_ESP4) + if (family == AF_INET) return MLX5_TT_IPV4_IPSEC_ESP; return MLX5_TT_IPV6_IPSEC_ESP; } -static int rx_err_add_rule(struct mlx5e_priv *priv, - struct mlx5e_accel_fs_esp_prot *fs_prot, +static int rx_err_add_rule(struct mlx5e_priv *priv, struct mlx5e_ipsec_rx *rx, struct mlx5e_ipsec_rx_err *rx_err) { u8 action[MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)] = {}; @@ -89,7 +80,7 @@ static int rx_err_add_rule(struct mlx5e_priv *priv, MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; flow_act.modify_hdr = modify_hdr; fte = mlx5_add_flow_rules(rx_err->ft, spec, &flow_act, - &fs_prot->default_dest, 1); + &rx->default_dest, 1); if (IS_ERR(fte)) { err = PTR_ERR(fte); netdev_err(priv->netdev, "fail to add ipsec rx err copy rule err=%d\n", err); @@ -108,11 +99,10 @@ static int rx_err_add_rule(struct mlx5e_priv *priv, return err; } -static int rx_fs_create(struct mlx5e_priv *priv, - struct mlx5e_accel_fs_esp_prot *fs_prot) +static int rx_fs_create(struct mlx5e_priv *priv, struct mlx5e_ipsec_rx *rx) { int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in); - struct mlx5_flow_table *ft = fs_prot->ft; + struct mlx5_flow_table *ft = rx->ft.sa; struct mlx5_flow_group *miss_group; struct mlx5_flow_handle *miss_rule; MLX5_DECLARE_FLOW_ACT(flow_act); @@ -136,56 +126,45 @@ static int rx_fs_create(struct mlx5e_priv *priv, netdev_err(priv->netdev, "fail to create ipsec rx miss_group err=%d\n", err); goto out; } - fs_prot->miss_group = miss_group; + rx->miss_group = miss_group; /* Create miss rule */ - miss_rule = mlx5_add_flow_rules(ft, spec, &flow_act, &fs_prot->default_dest, 1); + miss_rule = + mlx5_add_flow_rules(ft, spec, &flow_act, &rx->default_dest, 1); if (IS_ERR(miss_rule)) { - mlx5_destroy_flow_group(fs_prot->miss_group); + mlx5_destroy_flow_group(rx->miss_group); err = PTR_ERR(miss_rule); netdev_err(priv->netdev, "fail to create ipsec rx miss_rule err=%d\n", err); goto out; } - fs_prot->miss_rule = miss_rule; + rx->miss_rule = miss_rule; out: kvfree(flow_group_in); kvfree(spec); return err; } -static void rx_destroy(struct mlx5e_priv *priv, enum accel_fs_esp_type type) +static void rx_destroy(struct mlx5e_priv *priv, struct mlx5e_ipsec_rx *rx) { - struct mlx5e_accel_fs_esp_prot *fs_prot; - struct mlx5e_accel_fs_esp *accel_esp; - - accel_esp = priv->ipsec->rx_fs; + mlx5_del_flow_rules(rx->miss_rule); + mlx5_destroy_flow_group(rx->miss_group); + mlx5_destroy_flow_table(rx->ft.sa); - /* The netdev unreg already happened, so all offloaded rule are already removed */ - fs_prot = &accel_esp->fs_prot[type]; - - mlx5_del_flow_rules(fs_prot->miss_rule); - mlx5_destroy_flow_group(fs_prot->miss_group); - mlx5_destroy_flow_table(fs_prot->ft); - - mlx5_del_flow_rules(fs_prot->rx_err.rule); - mlx5_modify_header_dealloc(priv->mdev, fs_prot->rx_err.copy_modify_hdr); - mlx5_destroy_flow_table(fs_prot->rx_err.ft); + mlx5_del_flow_rules(rx->rx_err.rule); + mlx5_modify_header_dealloc(priv->mdev, rx->rx_err.copy_modify_hdr); + mlx5_destroy_flow_table(rx->rx_err.ft); } -static int rx_create(struct mlx5e_priv *priv, enum accel_fs_esp_type type) +static int rx_create(struct mlx5e_priv *priv, struct mlx5e_ipsec_rx *rx, + u32 family) { struct mlx5_flow_namespace *ns = mlx5e_fs_get_ns(priv->fs, false); struct mlx5_ttc_table *ttc = mlx5e_fs_get_ttc(priv->fs, false); struct mlx5_flow_table_attr ft_attr = {}; - struct mlx5e_accel_fs_esp_prot *fs_prot; - struct mlx5e_accel_fs_esp *accel_esp; struct mlx5_flow_table *ft; int err; - accel_esp = 
priv->ipsec->rx_fs; - fs_prot = &accel_esp->fs_prot[type]; - fs_prot->default_dest = - mlx5_ttc_get_default_dest(ttc, fs_esp2tt(type)); + rx->default_dest = mlx5_ttc_get_default_dest(ttc, family2tt(family)); ft_attr.max_fte = 1; ft_attr.autogroup.max_num_groups = 1; @@ -195,8 +174,8 @@ static int rx_create(struct mlx5e_priv *priv, enum accel_fs_esp_type type) if (IS_ERR(ft)) return PTR_ERR(ft); - fs_prot->rx_err.ft = ft; - err = rx_err_add_rule(priv, fs_prot, &fs_prot->rx_err); + rx->rx_err.ft = ft; + err = rx_err_add_rule(priv, rx, &rx->rx_err); if (err) goto err_add; @@ -211,76 +190,82 @@ static int rx_create(struct mlx5e_priv *priv, enum accel_fs_esp_type type) err = PTR_ERR(ft); goto err_fs_ft; } - fs_prot->ft = ft; + rx->ft.sa = ft; - err = rx_fs_create(priv, fs_prot); + err = rx_fs_create(priv, rx); if (err) goto err_fs; return 0; err_fs: - mlx5_destroy_flow_table(fs_prot->ft); + mlx5_destroy_flow_table(rx->ft.sa); err_fs_ft: - mlx5_del_flow_rules(fs_prot->rx_err.rule); - mlx5_modify_header_dealloc(priv->mdev, fs_prot->rx_err.copy_modify_hdr); + mlx5_del_flow_rules(rx->rx_err.rule); + mlx5_modify_header_dealloc(priv->mdev, rx->rx_err.copy_modify_hdr); err_add: - mlx5_destroy_flow_table(fs_prot->rx_err.ft); + mlx5_destroy_flow_table(rx->rx_err.ft); return err; } -static int rx_ft_get(struct mlx5e_priv *priv, enum accel_fs_esp_type type) +static struct mlx5e_ipsec_rx *rx_ft_get(struct mlx5e_priv *priv, u32 family) { struct mlx5_ttc_table *ttc = mlx5e_fs_get_ttc(priv->fs, false); - struct mlx5e_accel_fs_esp_prot *fs_prot; struct mlx5_flow_destination dest = {}; - struct mlx5e_accel_fs_esp *accel_esp; + struct mlx5e_ipsec_rx *rx; int err = 0; - accel_esp = priv->ipsec->rx_fs; - fs_prot = &accel_esp->fs_prot[type]; - mutex_lock(&fs_prot->prot_mutex); - if (fs_prot->refcnt) + if (family == AF_INET) + rx = priv->ipsec->rx_ipv4; + else + rx = priv->ipsec->rx_ipv6; + + mutex_lock(&rx->ft.mutex); + if (rx->ft.refcnt) goto skip; /* create FT */ - err = rx_create(priv, type); + err = rx_create(priv, rx, family); if (err) goto out; /* connect */ dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE; - dest.ft = fs_prot->ft; - mlx5_ttc_fwd_dest(ttc, fs_esp2tt(type), &dest); + dest.ft = rx->ft.sa; + mlx5_ttc_fwd_dest(ttc, family2tt(family), &dest); skip: - fs_prot->refcnt++; + rx->ft.refcnt++; out: - mutex_unlock(&fs_prot->prot_mutex); - return err; + mutex_unlock(&rx->ft.mutex); + if (err) + return ERR_PTR(err); + return rx; } -static void rx_ft_put(struct mlx5e_priv *priv, enum accel_fs_esp_type type) +static void rx_ft_put(struct mlx5e_priv *priv, u32 family) { struct mlx5_ttc_table *ttc = mlx5e_fs_get_ttc(priv->fs, false); - struct mlx5e_accel_fs_esp_prot *fs_prot; - struct mlx5e_accel_fs_esp *accel_esp; - - accel_esp = priv->ipsec->rx_fs; - fs_prot = &accel_esp->fs_prot[type]; - mutex_lock(&fs_prot->prot_mutex); - fs_prot->refcnt--; - if (fs_prot->refcnt) + struct mlx5e_ipsec_rx *rx; + + if (family == AF_INET) + rx = priv->ipsec->rx_ipv4; + else + rx = priv->ipsec->rx_ipv6; + + mutex_lock(&rx->ft.mutex); + rx->ft.refcnt--; + if (rx->ft.refcnt) goto out; /* disconnect */ - mlx5_ttc_fwd_default_dest(ttc, fs_esp2tt(type)); + mlx5_ttc_fwd_default_dest(ttc, family2tt(family)); /* remove FT */ - rx_destroy(priv, type); + rx_destroy(priv, rx); out: - mutex_unlock(&fs_prot->prot_mutex); + mutex_unlock(&rx->ft.mutex); } /* IPsec TX flow steering */ @@ -293,47 +278,49 @@ static int tx_create(struct mlx5e_priv *priv) ft_attr.max_fte = NUM_IPSEC_FTE; ft_attr.autogroup.max_num_groups = 1; - ft = 
mlx5_create_auto_grouped_flow_table(ipsec->tx_fs->ns, &ft_attr); + ft = mlx5_create_auto_grouped_flow_table(ipsec->tx->ns, &ft_attr); if (IS_ERR(ft)) { err = PTR_ERR(ft); netdev_err(priv->netdev, "fail to create ipsec tx ft err=%d\n", err); return err; } - ipsec->tx_fs->ft = ft; + ipsec->tx->ft.sa = ft; return 0; } -static int tx_ft_get(struct mlx5e_priv *priv) +static struct mlx5e_ipsec_tx *tx_ft_get(struct mlx5e_priv *priv) { - struct mlx5e_ipsec_tx *tx_fs = priv->ipsec->tx_fs; + struct mlx5e_ipsec_tx *tx = priv->ipsec->tx; int err = 0; - mutex_lock(&tx_fs->mutex); - if (tx_fs->refcnt) + mutex_lock(&tx->ft.mutex); + if (tx->ft.refcnt) goto skip; err = tx_create(priv); if (err) goto out; skip: - tx_fs->refcnt++; + tx->ft.refcnt++; out: - mutex_unlock(&tx_fs->mutex); - return err; + mutex_unlock(&tx->ft.mutex); + if (err) + return ERR_PTR(err); + return tx; } static void tx_ft_put(struct mlx5e_priv *priv) { - struct mlx5e_ipsec_tx *tx_fs = priv->ipsec->tx_fs; + struct mlx5e_ipsec_tx *tx = priv->ipsec->tx; - mutex_lock(&tx_fs->mutex); - tx_fs->refcnt--; - if (tx_fs->refcnt) + mutex_lock(&tx->ft.mutex); + tx->ft.refcnt--; + if (tx->ft.refcnt) goto out; - mlx5_destroy_flow_table(tx_fs->ft); + mlx5_destroy_flow_table(tx->ft.sa); out: - mutex_unlock(&tx_fs->mutex); + mutex_unlock(&tx->ft.mutex); } static void setup_fte_common(struct mlx5_accel_esp_xfrm_attrs *attrs, @@ -401,22 +388,16 @@ static int rx_add_rule(struct mlx5e_priv *priv, struct mlx5_accel_esp_xfrm_attrs *attrs = &sa_entry->attrs; u32 ipsec_obj_id = sa_entry->ipsec_obj_id; struct mlx5_modify_hdr *modify_hdr = NULL; - struct mlx5e_accel_fs_esp_prot *fs_prot; struct mlx5_flow_destination dest = {}; - struct mlx5e_accel_fs_esp *accel_esp; struct mlx5_flow_act flow_act = {}; struct mlx5_flow_handle *rule; - enum accel_fs_esp_type type; struct mlx5_flow_spec *spec; + struct mlx5e_ipsec_rx *rx; int err = 0; - accel_esp = priv->ipsec->rx_fs; - type = (attrs->family == AF_INET) ? 
ACCEL_FS_ESP4 : ACCEL_FS_ESP6; - fs_prot = &accel_esp->fs_prot[type]; - - err = rx_ft_get(priv, type); - if (err) - return err; + rx = rx_ft_get(priv, attrs->family); + if (IS_ERR(rx)) + return PTR_ERR(rx); spec = kvzalloc(sizeof(*spec), GFP_KERNEL); if (!spec) { @@ -449,8 +430,8 @@ static int rx_add_rule(struct mlx5e_priv *priv, MLX5_FLOW_CONTEXT_ACTION_MOD_HDR; dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE; flow_act.modify_hdr = modify_hdr; - dest.ft = fs_prot->rx_err.ft; - rule = mlx5_add_flow_rules(fs_prot->ft, spec, &flow_act, &dest, 1); + dest.ft = rx->rx_err.ft; + rule = mlx5_add_flow_rules(rx->ft.sa, spec, &flow_act, &dest, 1); if (IS_ERR(rule)) { err = PTR_ERR(rule); netdev_err(priv->netdev, "fail to add RX ipsec rule err=%d\n", @@ -465,7 +446,7 @@ static int rx_add_rule(struct mlx5e_priv *priv, out_err: if (modify_hdr) mlx5_modify_header_dealloc(priv->mdev, modify_hdr); - rx_ft_put(priv, type); + rx_ft_put(priv, attrs->family); out: kvfree(spec); @@ -478,11 +459,12 @@ static int tx_add_rule(struct mlx5e_priv *priv, struct mlx5_flow_act flow_act = {}; struct mlx5_flow_handle *rule; struct mlx5_flow_spec *spec; + struct mlx5e_ipsec_tx *tx; int err = 0; - err = tx_ft_get(priv); - if (err) - return err; + tx = tx_ft_get(priv); + if (IS_ERR(tx)) + return PTR_ERR(tx); spec = kvzalloc(sizeof(*spec), GFP_KERNEL); if (!spec) { @@ -502,7 +484,7 @@ static int tx_add_rule(struct mlx5e_priv *priv, flow_act.action = MLX5_FLOW_CONTEXT_ACTION_ALLOW | MLX5_FLOW_CONTEXT_ACTION_CRYPTO_ENCRYPT; - rule = mlx5_add_flow_rules(priv->ipsec->tx_fs->ft, spec, &flow_act, NULL, 0); + rule = mlx5_add_flow_rules(tx->ft.sa, spec, &flow_act, NULL, 0); if (IS_ERR(rule)) { err = PTR_ERR(rule); netdev_err(priv->netdev, "fail to add TX ipsec rule err=%d\n", @@ -533,7 +515,6 @@ void mlx5e_accel_ipsec_fs_del_rule(struct mlx5e_priv *priv, { struct mlx5e_ipsec_rule *ipsec_rule = &sa_entry->ipsec_rule; struct mlx5_core_dev *mdev = mlx5e_ipsec_sa2dev(sa_entry); - enum accel_fs_esp_type type; mlx5_del_flow_rules(ipsec_rule->rule); @@ -543,38 +524,30 @@ void mlx5e_accel_ipsec_fs_del_rule(struct mlx5e_priv *priv, } mlx5_modify_header_dealloc(mdev, ipsec_rule->set_modify_hdr); - type = (sa_entry->attrs.family == AF_INET) ? 
ACCEL_FS_ESP4 : ACCEL_FS_ESP6; - rx_ft_put(priv, type); + rx_ft_put(priv, sa_entry->attrs.family); } void mlx5e_accel_ipsec_fs_cleanup(struct mlx5e_ipsec *ipsec) { - struct mlx5e_accel_fs_esp_prot *fs_prot; - struct mlx5e_accel_fs_esp *accel_esp; - enum accel_fs_esp_type i; - - if (!ipsec->rx_fs) + if (!ipsec->tx) return; - mutex_destroy(&ipsec->tx_fs->mutex); - WARN_ON(ipsec->tx_fs->refcnt); - kfree(ipsec->tx_fs); + mutex_destroy(&ipsec->tx->ft.mutex); + WARN_ON(ipsec->tx->ft.refcnt); + kfree(ipsec->tx); - accel_esp = ipsec->rx_fs; - for (i = 0; i < ACCEL_FS_ESP_NUM_TYPES; i++) { - fs_prot = &accel_esp->fs_prot[i]; - mutex_destroy(&fs_prot->prot_mutex); - WARN_ON(fs_prot->refcnt); - } - kfree(ipsec->rx_fs); + mutex_destroy(&ipsec->rx_ipv4->ft.mutex); + WARN_ON(ipsec->rx_ipv4->ft.refcnt); + kfree(ipsec->rx_ipv4); + + mutex_destroy(&ipsec->rx_ipv6->ft.mutex); + WARN_ON(ipsec->rx_ipv6->ft.refcnt); + kfree(ipsec->rx_ipv6); } int mlx5e_accel_ipsec_fs_init(struct mlx5e_ipsec *ipsec) { - struct mlx5e_accel_fs_esp_prot *fs_prot; - struct mlx5e_accel_fs_esp *accel_esp; struct mlx5_flow_namespace *ns; - enum accel_fs_esp_type i; int err = -ENOMEM; ns = mlx5_get_flow_namespace(ipsec->mdev, @@ -582,26 +555,28 @@ int mlx5e_accel_ipsec_fs_init(struct mlx5e_ipsec *ipsec) if (!ns) return -EOPNOTSUPP; - ipsec->tx_fs = kzalloc(sizeof(*ipsec->tx_fs), GFP_KERNEL); - if (!ipsec->tx_fs) + ipsec->tx = kzalloc(sizeof(*ipsec->tx), GFP_KERNEL); + if (!ipsec->tx) return -ENOMEM; - ipsec->rx_fs = kzalloc(sizeof(*ipsec->rx_fs), GFP_KERNEL); - if (!ipsec->rx_fs) - goto err_rx; + ipsec->rx_ipv4 = kzalloc(sizeof(*ipsec->rx_ipv4), GFP_KERNEL); + if (!ipsec->rx_ipv4) + goto err_rx_ipv4; - mutex_init(&ipsec->tx_fs->mutex); - ipsec->tx_fs->ns = ns; + ipsec->rx_ipv6 = kzalloc(sizeof(*ipsec->rx_ipv6), GFP_KERNEL); + if (!ipsec->rx_ipv6) + goto err_rx_ipv6; - accel_esp = ipsec->rx_fs; - for (i = 0; i < ACCEL_FS_ESP_NUM_TYPES; i++) { - fs_prot = &accel_esp->fs_prot[i]; - mutex_init(&fs_prot->prot_mutex); - } + mutex_init(&ipsec->tx->ft.mutex); + mutex_init(&ipsec->rx_ipv4->ft.mutex); + mutex_init(&ipsec->rx_ipv6->ft.mutex); + ipsec->tx->ns = ns; return 0; -err_rx: - kfree(ipsec->tx_fs); +err_rx_ipv6: + kfree(ipsec->rx_ipv4); +err_rx_ipv4: + kfree(ipsec->tx); return err; } From patchwork Fri Dec 2 20:10:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13063186 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0F505C4332F for ; Fri, 2 Dec 2022 20:11:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234906AbiLBULt (ORCPT ); Fri, 2 Dec 2022 15:11:49 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45636 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234850AbiLBULc (ORCPT ); Fri, 2 Dec 2022 15:11:32 -0500 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7F8BBF4EB1 for ; Fri, 2 Dec 2022 12:11:23 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 
3069EB82277 for ; Fri, 2 Dec 2022 20:11:22 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 253BAC433D6; Fri, 2 Dec 2022 20:11:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1670011880; bh=8udgB2mfn5rau7TWNtRD1cvLa8A3Yi97kdq+uvT/lag=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=H0Ys7y0I11M0eHp4YWrDsbSXuDYqLjI7Ug/DqE8xEI1pAb0v69sduirfJYWka5Lzi AplCFM9Fm26UZxIOjhjgUnr7kYYT3bauJLcujfAZAFddIJUSJH3Z6+xnDfgc9NnukA eJxmwXyXG4lius1OIaQIa2WAZuIpdw8fjLB1vSpzcKrxnTFzBaAX5HVdNiXBbqBrdB GOI3jz5OERMPB5K0IZGWW8acahUIhyXNEIedoBCjCGK8JTApSVNEi9/WwUW7+yi9iB PWTS5fwBYll3SjZVZn9o2Rapa/GszLCif14oS44lkcrZGgpF2+jzoSdxcIgbJDh+oF GWNeJOS+FLDXQ== From: Leon Romanovsky To: Steffen Klassert Cc: Leon Romanovsky , "David S. Miller" , Eric Dumazet , Herbert Xu , Jakub Kicinski , netdev@vger.kernel.org, Bharat Bhushan , Saeed Mahameed Subject: [PATCH xfrm-next 07/16] net/mlx5e: Use mlx5 print routines for low level IPsec code Date: Fri, 2 Dec 2022 22:10:28 +0200 Message-Id: <20559c87a20899c42cadd16f9385992d0686fd3c.1670011671.git.leonro@nvidia.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Leon Romanovsky Low level mlx5 code needs to use mlx5_core print routines and not netdev ones, as the failures are relevant to the HW itself and not to its netdev. This change allows us to remove access to mlx5 priv structure, which holds high level driver data that isn't needed for mlx5 IPsec code. Reviewed-by: Saeed Mahameed Signed-off-by: Leon Romanovsky --- .../mellanox/mlx5/core/en_accel/ipsec_fs.c | 26 ++++++++++--------- 1 file changed, 14 insertions(+), 12 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c index a8cf3f8d0515..08feff765032 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c @@ -70,8 +70,8 @@ static int rx_err_add_rule(struct mlx5e_priv *priv, struct mlx5e_ipsec_rx *rx, if (IS_ERR(modify_hdr)) { err = PTR_ERR(modify_hdr); - netdev_err(priv->netdev, - "fail to alloc ipsec copy modify_header_id err=%d\n", err); + mlx5_core_err(mdev, + "fail to alloc ipsec copy modify_header_id err=%d\n", err); goto out_spec; } @@ -83,7 +83,7 @@ static int rx_err_add_rule(struct mlx5e_priv *priv, struct mlx5e_ipsec_rx *rx, &rx->default_dest, 1); if (IS_ERR(fte)) { err = PTR_ERR(fte); - netdev_err(priv->netdev, "fail to add ipsec rx err copy rule err=%d\n", err); + mlx5_core_err(mdev, "fail to add ipsec rx err copy rule err=%d\n", err); goto out; } @@ -103,6 +103,7 @@ static int rx_fs_create(struct mlx5e_priv *priv, struct mlx5e_ipsec_rx *rx) { int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in); struct mlx5_flow_table *ft = rx->ft.sa; + struct mlx5_core_dev *mdev = priv->mdev; struct mlx5_flow_group *miss_group; struct mlx5_flow_handle *miss_rule; MLX5_DECLARE_FLOW_ACT(flow_act); @@ -123,7 +124,7 @@ static int rx_fs_create(struct mlx5e_priv *priv, struct mlx5e_ipsec_rx *rx) miss_group = mlx5_create_flow_group(ft, flow_group_in); if (IS_ERR(miss_group)) { err = PTR_ERR(miss_group); - netdev_err(priv->netdev, "fail to create ipsec rx miss_group err=%d\n", err); + mlx5_core_err(mdev, "fail to create ipsec rx miss_group err=%d\n", err); goto out; } rx->miss_group = miss_group; @@ -134,7 +135,7 @@ static int rx_fs_create(struct mlx5e_priv 
*priv, struct mlx5e_ipsec_rx *rx) if (IS_ERR(miss_rule)) { mlx5_destroy_flow_group(rx->miss_group); err = PTR_ERR(miss_rule); - netdev_err(priv->netdev, "fail to create ipsec rx miss_rule err=%d\n", err); + mlx5_core_err(mdev, "fail to create ipsec rx miss_rule err=%d\n", err); goto out; } rx->miss_rule = miss_rule; @@ -273,6 +274,7 @@ static int tx_create(struct mlx5e_priv *priv) { struct mlx5_flow_table_attr ft_attr = {}; struct mlx5e_ipsec *ipsec = priv->ipsec; + struct mlx5_core_dev *mdev = priv->mdev; struct mlx5_flow_table *ft; int err; @@ -281,7 +283,7 @@ static int tx_create(struct mlx5e_priv *priv) ft = mlx5_create_auto_grouped_flow_table(ipsec->tx->ns, &ft_attr); if (IS_ERR(ft)) { err = PTR_ERR(ft); - netdev_err(priv->netdev, "fail to create ipsec tx ft err=%d\n", err); + mlx5_core_err(mdev, "fail to create ipsec tx ft err=%d\n", err); return err; } ipsec->tx->ft.sa = ft; @@ -386,6 +388,7 @@ static int rx_add_rule(struct mlx5e_priv *priv, u8 action[MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)] = {}; struct mlx5e_ipsec_rule *ipsec_rule = &sa_entry->ipsec_rule; struct mlx5_accel_esp_xfrm_attrs *attrs = &sa_entry->attrs; + struct mlx5_core_dev *mdev = mlx5e_ipsec_sa2dev(sa_entry); u32 ipsec_obj_id = sa_entry->ipsec_obj_id; struct mlx5_modify_hdr *modify_hdr = NULL; struct mlx5_flow_destination dest = {}; @@ -419,8 +422,8 @@ static int rx_add_rule(struct mlx5e_priv *priv, 1, action); if (IS_ERR(modify_hdr)) { err = PTR_ERR(modify_hdr); - netdev_err(priv->netdev, - "fail to alloc ipsec set modify_header_id err=%d\n", err); + mlx5_core_err(mdev, + "fail to alloc ipsec set modify_header_id err=%d\n", err); modify_hdr = NULL; goto out_err; } @@ -434,8 +437,7 @@ static int rx_add_rule(struct mlx5e_priv *priv, rule = mlx5_add_flow_rules(rx->ft.sa, spec, &flow_act, &dest, 1); if (IS_ERR(rule)) { err = PTR_ERR(rule); - netdev_err(priv->netdev, "fail to add RX ipsec rule err=%d\n", - err); + mlx5_core_err(mdev, "fail to add RX ipsec rule err=%d\n", err); goto out_err; } @@ -456,6 +458,7 @@ static int rx_add_rule(struct mlx5e_priv *priv, static int tx_add_rule(struct mlx5e_priv *priv, struct mlx5e_ipsec_sa_entry *sa_entry) { + struct mlx5_core_dev *mdev = mlx5e_ipsec_sa2dev(sa_entry); struct mlx5_flow_act flow_act = {}; struct mlx5_flow_handle *rule; struct mlx5_flow_spec *spec; @@ -487,8 +490,7 @@ static int tx_add_rule(struct mlx5e_priv *priv, rule = mlx5_add_flow_rules(tx->ft.sa, spec, &flow_act, NULL, 0); if (IS_ERR(rule)) { err = PTR_ERR(rule); - netdev_err(priv->netdev, "fail to add TX ipsec rule err=%d\n", - err); + mlx5_core_err(mdev, "fail to add TX ipsec rule err=%d\n", err); goto out; } From patchwork Fri Dec 2 20:10:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13063184 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 28D86C4332F for ; Fri, 2 Dec 2022 20:11:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234852AbiLBULd (ORCPT ); Fri, 2 Dec 2022 15:11:33 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45680 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234685AbiLBULS (ORCPT ); Fri, 2 Dec 2022 15:11:18 -0500 Received: from dfw.source.kernel.org 
(dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 73A63F4EA3 for ; Fri, 2 Dec 2022 12:11:13 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id F23626221E for ; Fri, 2 Dec 2022 20:11:12 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id CE8DEC433C1; Fri, 2 Dec 2022 20:11:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1670011872; bh=cew3o5183PzKIFxHFUpGU3XDci8yb74G8KgqViaYHaA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=XHGPQ497BF1FQH9TW/JXkQU3VUi2Yp2QK8IBuAJrI23HBN2j9HVE7pErRiVw13L8s UheE75RG4ZiW+EWlhuhn4MK8lOROVnAXZi1Ppe4MFrbXlDbx88xMDlS+JRK/ZCHN4H pWIneKgqB0GMqH+RFJkcpXW0drL0CUS5EGVMwuJnGZRairBDRNHllcw8EjI+UrfoF4 smfBcWwJY9RIW5fDOGXjOBcq39MbiKSAAYj0X4QucSxP8yxURlxfGzoqienPvILBid 8lrGR5LexL3Ao61wLIWhKgYSox7txK2bZ9v+qIJpbZW/ZlH0nCiz35SkYaA4ctW0rf dgT5SZ7THv3Xw== From: Leon Romanovsky To: Steffen Klassert Cc: Leon Romanovsky , "David S. Miller" , Eric Dumazet , Herbert Xu , Jakub Kicinski , netdev@vger.kernel.org, Bharat Bhushan , Raed Salem , Saeed Mahameed Subject: [PATCH xfrm-next 08/16] net/mlx5e: Remove accesses to priv for low level IPsec FS code Date: Fri, 2 Dec 2022 22:10:29 +0200 Message-Id: X-Mailer: git-send-email 2.38.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Leon Romanovsky mlx5 priv structure is driver main structure that holds high level data. That information is not needed for IPsec flow steering logic and the pointer to mlx5e_priv was not supposed to be passed in the first place. This change "cleans" the logic to rely on internal to IPsec structures without touching global mlx5e_priv. 
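For illustration, a simplified summary of the calling convention after this change (based on the diff below, comments added):

	/* Callers no longer pass mlx5e_priv to the flow-steering helpers: */
	err = mlx5e_accel_ipsec_fs_add_rule(sa_entry);	/* was (priv, sa_entry) */
	mlx5e_accel_ipsec_fs_del_rule(sa_entry);	/* was (priv, sa_entry) */

	/* Inside ipsec_fs.c the device handle is derived from the SA entry itself
	 * via mlx5e_ipsec_sa2dev(sa_entry), and the flow-steering context is
	 * cached on the ipsec object at init time (ipsec->fs = priv->fs), so the
	 * low level code never touches mlx5e_priv.
	 */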
Reviewed-by: Raed Salem Reviewed-by: Saeed Mahameed Signed-off-by: Leon Romanovsky --- .../mellanox/mlx5/core/en_accel/ipsec.c | 8 +- .../mellanox/mlx5/core/en_accel/ipsec.h | 7 +- .../mellanox/mlx5/core/en_accel/ipsec_fs.c | 100 +++++++++--------- 3 files changed, 56 insertions(+), 59 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c index 14ed72b26bbe..f518322c1ac1 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c @@ -306,7 +306,7 @@ static int mlx5e_xfrm_add_state(struct xfrm_state *x) if (err) goto err_xfrm; - err = mlx5e_accel_ipsec_fs_add_rule(priv, sa_entry); + err = mlx5e_accel_ipsec_fs_add_rule(sa_entry); if (err) goto err_hw_ctx; @@ -324,7 +324,7 @@ static int mlx5e_xfrm_add_state(struct xfrm_state *x) goto out; err_add_rule: - mlx5e_accel_ipsec_fs_del_rule(priv, sa_entry); + mlx5e_accel_ipsec_fs_del_rule(sa_entry); err_hw_ctx: mlx5_ipsec_free_sa_ctx(sa_entry); err_xfrm: @@ -344,10 +344,9 @@ static void mlx5e_xfrm_del_state(struct xfrm_state *x) static void mlx5e_xfrm_free_state(struct xfrm_state *x) { struct mlx5e_ipsec_sa_entry *sa_entry = to_ipsec_sa_entry(x); - struct mlx5e_priv *priv = netdev_priv(x->xso.dev); cancel_work_sync(&sa_entry->modify_work.work); - mlx5e_accel_ipsec_fs_del_rule(priv, sa_entry); + mlx5e_accel_ipsec_fs_del_rule(sa_entry); mlx5_ipsec_free_sa_ctx(sa_entry); kfree(sa_entry); } @@ -378,6 +377,7 @@ void mlx5e_ipsec_init(struct mlx5e_priv *priv) if (ret) goto err_fs_init; + ipsec->fs = priv->fs; priv->ipsec = ipsec; netdev_dbg(priv->netdev, "IPSec attached to netdevice\n"); return; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h index 6b961ff08ed7..db0ccf2a797a 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h @@ -103,6 +103,7 @@ struct mlx5e_ipsec { spinlock_t sadb_rx_lock; /* Protects sadb_rx */ struct mlx5e_ipsec_sw_stats sw_stats; struct workqueue_struct *wq; + struct mlx5e_flow_steering *fs; struct mlx5e_ipsec_rx *rx_ipv4; struct mlx5e_ipsec_rx *rx_ipv6; struct mlx5e_ipsec_tx *tx; @@ -148,10 +149,8 @@ struct xfrm_state *mlx5e_ipsec_sadb_rx_lookup(struct mlx5e_ipsec *dev, void mlx5e_accel_ipsec_fs_cleanup(struct mlx5e_ipsec *ipsec); int mlx5e_accel_ipsec_fs_init(struct mlx5e_ipsec *ipsec); -int mlx5e_accel_ipsec_fs_add_rule(struct mlx5e_priv *priv, - struct mlx5e_ipsec_sa_entry *sa_entry); -void mlx5e_accel_ipsec_fs_del_rule(struct mlx5e_priv *priv, - struct mlx5e_ipsec_sa_entry *sa_entry); +int mlx5e_accel_ipsec_fs_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry); +void mlx5e_accel_ipsec_fs_del_rule(struct mlx5e_ipsec_sa_entry *sa_entry); int mlx5_ipsec_create_sa_ctx(struct mlx5e_ipsec_sa_entry *sa_entry); void mlx5_ipsec_free_sa_ctx(struct mlx5e_ipsec_sa_entry *sa_entry); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c index 08feff765032..8e87d8d02511 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c @@ -42,11 +42,11 @@ static enum mlx5_traffic_types family2tt(u32 family) return MLX5_TT_IPV6_IPSEC_ESP; } -static int rx_err_add_rule(struct mlx5e_priv *priv, struct mlx5e_ipsec_rx *rx, +static int rx_err_add_rule(struct mlx5_core_dev *mdev, + struct 
mlx5e_ipsec_rx *rx, struct mlx5e_ipsec_rx_err *rx_err) { u8 action[MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)] = {}; - struct mlx5_core_dev *mdev = priv->mdev; struct mlx5_flow_act flow_act = {}; struct mlx5_modify_hdr *modify_hdr; struct mlx5_flow_handle *fte; @@ -99,11 +99,10 @@ static int rx_err_add_rule(struct mlx5e_priv *priv, struct mlx5e_ipsec_rx *rx, return err; } -static int rx_fs_create(struct mlx5e_priv *priv, struct mlx5e_ipsec_rx *rx) +static int rx_fs_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_rx *rx) { int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in); struct mlx5_flow_table *ft = rx->ft.sa; - struct mlx5_core_dev *mdev = priv->mdev; struct mlx5_flow_group *miss_group; struct mlx5_flow_handle *miss_rule; MLX5_DECLARE_FLOW_ACT(flow_act); @@ -145,22 +144,22 @@ static int rx_fs_create(struct mlx5e_priv *priv, struct mlx5e_ipsec_rx *rx) return err; } -static void rx_destroy(struct mlx5e_priv *priv, struct mlx5e_ipsec_rx *rx) +static void rx_destroy(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_rx *rx) { mlx5_del_flow_rules(rx->miss_rule); mlx5_destroy_flow_group(rx->miss_group); mlx5_destroy_flow_table(rx->ft.sa); mlx5_del_flow_rules(rx->rx_err.rule); - mlx5_modify_header_dealloc(priv->mdev, rx->rx_err.copy_modify_hdr); + mlx5_modify_header_dealloc(mdev, rx->rx_err.copy_modify_hdr); mlx5_destroy_flow_table(rx->rx_err.ft); } -static int rx_create(struct mlx5e_priv *priv, struct mlx5e_ipsec_rx *rx, - u32 family) +static int rx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec, + struct mlx5e_ipsec_rx *rx, u32 family) { - struct mlx5_flow_namespace *ns = mlx5e_fs_get_ns(priv->fs, false); - struct mlx5_ttc_table *ttc = mlx5e_fs_get_ttc(priv->fs, false); + struct mlx5_flow_namespace *ns = mlx5e_fs_get_ns(ipsec->fs, false); + struct mlx5_ttc_table *ttc = mlx5e_fs_get_ttc(ipsec->fs, false); struct mlx5_flow_table_attr ft_attr = {}; struct mlx5_flow_table *ft; int err; @@ -176,7 +175,7 @@ static int rx_create(struct mlx5e_priv *priv, struct mlx5e_ipsec_rx *rx, return PTR_ERR(ft); rx->rx_err.ft = ft; - err = rx_err_add_rule(priv, rx, &rx->rx_err); + err = rx_err_add_rule(mdev, rx, &rx->rx_err); if (err) goto err_add; @@ -193,7 +192,7 @@ static int rx_create(struct mlx5e_priv *priv, struct mlx5e_ipsec_rx *rx, } rx->ft.sa = ft; - err = rx_fs_create(priv, rx); + err = rx_fs_create(mdev, rx); if (err) goto err_fs; @@ -203,30 +202,31 @@ static int rx_create(struct mlx5e_priv *priv, struct mlx5e_ipsec_rx *rx, mlx5_destroy_flow_table(rx->ft.sa); err_fs_ft: mlx5_del_flow_rules(rx->rx_err.rule); - mlx5_modify_header_dealloc(priv->mdev, rx->rx_err.copy_modify_hdr); + mlx5_modify_header_dealloc(mdev, rx->rx_err.copy_modify_hdr); err_add: mlx5_destroy_flow_table(rx->rx_err.ft); return err; } -static struct mlx5e_ipsec_rx *rx_ft_get(struct mlx5e_priv *priv, u32 family) +static struct mlx5e_ipsec_rx *rx_ft_get(struct mlx5_core_dev *mdev, + struct mlx5e_ipsec *ipsec, u32 family) { - struct mlx5_ttc_table *ttc = mlx5e_fs_get_ttc(priv->fs, false); + struct mlx5_ttc_table *ttc = mlx5e_fs_get_ttc(ipsec->fs, false); struct mlx5_flow_destination dest = {}; struct mlx5e_ipsec_rx *rx; int err = 0; if (family == AF_INET) - rx = priv->ipsec->rx_ipv4; + rx = ipsec->rx_ipv4; else - rx = priv->ipsec->rx_ipv6; + rx = ipsec->rx_ipv6; mutex_lock(&rx->ft.mutex); if (rx->ft.refcnt) goto skip; /* create FT */ - err = rx_create(priv, rx, family); + err = rx_create(mdev, ipsec, rx, family); if (err) goto out; @@ -244,15 +244,16 @@ static struct mlx5e_ipsec_rx *rx_ft_get(struct mlx5e_priv *priv, 
u32 family) return rx; } -static void rx_ft_put(struct mlx5e_priv *priv, u32 family) +static void rx_ft_put(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec, + u32 family) { - struct mlx5_ttc_table *ttc = mlx5e_fs_get_ttc(priv->fs, false); + struct mlx5_ttc_table *ttc = mlx5e_fs_get_ttc(ipsec->fs, false); struct mlx5e_ipsec_rx *rx; if (family == AF_INET) - rx = priv->ipsec->rx_ipv4; + rx = ipsec->rx_ipv4; else - rx = priv->ipsec->rx_ipv6; + rx = ipsec->rx_ipv6; mutex_lock(&rx->ft.mutex); rx->ft.refcnt--; @@ -263,43 +264,42 @@ static void rx_ft_put(struct mlx5e_priv *priv, u32 family) mlx5_ttc_fwd_default_dest(ttc, family2tt(family)); /* remove FT */ - rx_destroy(priv, rx); + rx_destroy(mdev, rx); out: mutex_unlock(&rx->ft.mutex); } /* IPsec TX flow steering */ -static int tx_create(struct mlx5e_priv *priv) +static int tx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_tx *tx) { struct mlx5_flow_table_attr ft_attr = {}; - struct mlx5e_ipsec *ipsec = priv->ipsec; - struct mlx5_core_dev *mdev = priv->mdev; struct mlx5_flow_table *ft; int err; ft_attr.max_fte = NUM_IPSEC_FTE; ft_attr.autogroup.max_num_groups = 1; - ft = mlx5_create_auto_grouped_flow_table(ipsec->tx->ns, &ft_attr); + ft = mlx5_create_auto_grouped_flow_table(tx->ns, &ft_attr); if (IS_ERR(ft)) { err = PTR_ERR(ft); mlx5_core_err(mdev, "fail to create ipsec tx ft err=%d\n", err); return err; } - ipsec->tx->ft.sa = ft; + tx->ft.sa = ft; return 0; } -static struct mlx5e_ipsec_tx *tx_ft_get(struct mlx5e_priv *priv) +static struct mlx5e_ipsec_tx *tx_ft_get(struct mlx5_core_dev *mdev, + struct mlx5e_ipsec *ipsec) { - struct mlx5e_ipsec_tx *tx = priv->ipsec->tx; + struct mlx5e_ipsec_tx *tx = ipsec->tx; int err = 0; mutex_lock(&tx->ft.mutex); if (tx->ft.refcnt) goto skip; - err = tx_create(priv); + err = tx_create(mdev, tx); if (err) goto out; skip: @@ -311,9 +311,9 @@ static struct mlx5e_ipsec_tx *tx_ft_get(struct mlx5e_priv *priv) return tx; } -static void tx_ft_put(struct mlx5e_priv *priv) +static void tx_ft_put(struct mlx5e_ipsec *ipsec) { - struct mlx5e_ipsec_tx *tx = priv->ipsec->tx; + struct mlx5e_ipsec_tx *tx = ipsec->tx; mutex_lock(&tx->ft.mutex); tx->ft.refcnt--; @@ -382,13 +382,13 @@ static void setup_fte_common(struct mlx5_accel_esp_xfrm_attrs *attrs, flow_act->flags |= FLOW_ACT_NO_APPEND; } -static int rx_add_rule(struct mlx5e_priv *priv, - struct mlx5e_ipsec_sa_entry *sa_entry) +static int rx_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry) { u8 action[MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)] = {}; struct mlx5e_ipsec_rule *ipsec_rule = &sa_entry->ipsec_rule; struct mlx5_accel_esp_xfrm_attrs *attrs = &sa_entry->attrs; struct mlx5_core_dev *mdev = mlx5e_ipsec_sa2dev(sa_entry); + struct mlx5e_ipsec *ipsec = sa_entry->ipsec; u32 ipsec_obj_id = sa_entry->ipsec_obj_id; struct mlx5_modify_hdr *modify_hdr = NULL; struct mlx5_flow_destination dest = {}; @@ -398,7 +398,7 @@ static int rx_add_rule(struct mlx5e_priv *priv, struct mlx5e_ipsec_rx *rx; int err = 0; - rx = rx_ft_get(priv, attrs->family); + rx = rx_ft_get(mdev, ipsec, attrs->family); if (IS_ERR(rx)) return PTR_ERR(rx); @@ -418,7 +418,7 @@ static int rx_add_rule(struct mlx5e_priv *priv, MLX5_SET(set_action_in, action, offset, 0); MLX5_SET(set_action_in, action, length, 32); - modify_hdr = mlx5_modify_header_alloc(priv->mdev, MLX5_FLOW_NAMESPACE_KERNEL, + modify_hdr = mlx5_modify_header_alloc(mdev, MLX5_FLOW_NAMESPACE_KERNEL, 1, action); if (IS_ERR(modify_hdr)) { err = PTR_ERR(modify_hdr); @@ -447,25 +447,25 @@ static int rx_add_rule(struct mlx5e_priv *priv, 
out_err: if (modify_hdr) - mlx5_modify_header_dealloc(priv->mdev, modify_hdr); - rx_ft_put(priv, attrs->family); + mlx5_modify_header_dealloc(mdev, modify_hdr); + rx_ft_put(mdev, ipsec, attrs->family); out: kvfree(spec); return err; } -static int tx_add_rule(struct mlx5e_priv *priv, - struct mlx5e_ipsec_sa_entry *sa_entry) +static int tx_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry) { struct mlx5_core_dev *mdev = mlx5e_ipsec_sa2dev(sa_entry); + struct mlx5e_ipsec *ipsec = sa_entry->ipsec; struct mlx5_flow_act flow_act = {}; struct mlx5_flow_handle *rule; struct mlx5_flow_spec *spec; struct mlx5e_ipsec_tx *tx; int err = 0; - tx = tx_ft_get(priv); + tx = tx_ft_get(mdev, ipsec); if (IS_ERR(tx)) return PTR_ERR(tx); @@ -499,21 +499,19 @@ static int tx_add_rule(struct mlx5e_priv *priv, out: kvfree(spec); if (err) - tx_ft_put(priv); + tx_ft_put(ipsec); return err; } -int mlx5e_accel_ipsec_fs_add_rule(struct mlx5e_priv *priv, - struct mlx5e_ipsec_sa_entry *sa_entry) +int mlx5e_accel_ipsec_fs_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry) { if (sa_entry->attrs.dir == XFRM_DEV_OFFLOAD_OUT) - return tx_add_rule(priv, sa_entry); + return tx_add_rule(sa_entry); - return rx_add_rule(priv, sa_entry); + return rx_add_rule(sa_entry); } -void mlx5e_accel_ipsec_fs_del_rule(struct mlx5e_priv *priv, - struct mlx5e_ipsec_sa_entry *sa_entry) +void mlx5e_accel_ipsec_fs_del_rule(struct mlx5e_ipsec_sa_entry *sa_entry) { struct mlx5e_ipsec_rule *ipsec_rule = &sa_entry->ipsec_rule; struct mlx5_core_dev *mdev = mlx5e_ipsec_sa2dev(sa_entry); @@ -521,12 +519,12 @@ void mlx5e_accel_ipsec_fs_del_rule(struct mlx5e_priv *priv, mlx5_del_flow_rules(ipsec_rule->rule); if (sa_entry->attrs.dir == XFRM_DEV_OFFLOAD_OUT) { - tx_ft_put(priv); + tx_ft_put(sa_entry->ipsec); return; } mlx5_modify_header_dealloc(mdev, ipsec_rule->set_modify_hdr); - rx_ft_put(priv, sa_entry->attrs.family); + rx_ft_put(mdev, sa_entry->ipsec, sa_entry->attrs.family); } void mlx5e_accel_ipsec_fs_cleanup(struct mlx5e_ipsec *ipsec) From patchwork Fri Dec 2 20:10:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13063185 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 08371C47088 for ; Fri, 2 Dec 2022 20:11:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234898AbiLBULl (ORCPT ); Fri, 2 Dec 2022 15:11:41 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45716 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234812AbiLBULU (ORCPT ); Fri, 2 Dec 2022 15:11:20 -0500 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1B15BF3FB3 for ; Fri, 2 Dec 2022 12:11:19 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id D0673B8226A for ; Fri, 2 Dec 2022 20:11:17 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 13EC1C433C1; Fri, 2 Dec 2022 20:11:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1670011876; 
bh=rcS7N/tWTblgt66UxBdSTbRpNJEaHe46At2ngywxkIE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=j4PKE+3uUT98TL1RjxiP/RAB1hhgmUov0NiueodO0plWA06zF2tKoDsmBNHnc4Wnu 5c/PhQxfHvbzmeW3IGcjgBL9FKvh/98whwFtMxEN4heIN94+WTfSj0aO3Lwo3lUv+F LkdWlResnt6qvn8w89jj7YOQBBjXd9M5g+/mA8nzEIJK5FMHVUsOAOcKyg0QGqD7P0 +K1+zcEXLolkkBs9Pt7pMPrD4f/zga14s/nPnNPDG+JYEekNLa804QZlsWNwdNQurF vtRwD47A9RrSvOIGUuKnl0dTawCKIcm+1B+dRMsfR6XBnezca7ns00Pn/keFemUmv/ y/jMIlVQwOPmQ== From: Leon Romanovsky To: Steffen Klassert Cc: Leon Romanovsky , "David S. Miller" , Eric Dumazet , Herbert Xu , Jakub Kicinski , netdev@vger.kernel.org, Bharat Bhushan , Raed Salem Subject: [PATCH xfrm-next 09/16] net/mlx5e: Create Advanced Steering Operation object for IPsec Date: Fri, 2 Dec 2022 22:10:30 +0200 Message-Id: <610c0789dee22247c2601d6245e2162ec5b87050.1670011671.git.leonro@nvidia.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Leon Romanovsky Setup the ASO (Advanced Steering Operation) object that is needed for IPsec to interact with SW stack about various fast changing events: replay window, lifetime limits, e.t.c Reviewed-by: Raed Salem Signed-off-by: Leon Romanovsky --- drivers/net/ethernet/mellanox/mlx5/core/en.h | 1 + .../mellanox/mlx5/core/en_accel/ipsec.c | 12 +++++ .../mellanox/mlx5/core/en_accel/ipsec.h | 13 +++++ .../mlx5/core/en_accel/ipsec_offload.c | 54 +++++++++++++++++++ 4 files changed, 80 insertions(+) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h index 65790ff58a74..2d77fb8a8a01 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h @@ -1245,4 +1245,5 @@ int mlx5e_set_vf_rate(struct net_device *dev, int vf, int min_tx_rate, int max_t int mlx5e_get_vf_config(struct net_device *dev, int vf, struct ifla_vf_info *ivi); int mlx5e_get_vf_stats(struct net_device *dev, int vf, struct ifla_vf_stats *vf_stats); #endif +int mlx5e_create_mkey(struct mlx5_core_dev *mdev, u32 pdn, u32 *mkey); #endif /* __MLX5_EN_H__ */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c index f518322c1ac1..d2c814e7af97 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c @@ -373,6 +373,13 @@ void mlx5e_ipsec_init(struct mlx5e_priv *priv) if (!ipsec->wq) goto err_wq; + if (mlx5_ipsec_device_caps(priv->mdev) & + MLX5_IPSEC_CAP_PACKET_OFFLOAD) { + ret = mlx5e_ipsec_aso_init(ipsec); + if (ret) + goto err_aso; + } + ret = mlx5e_accel_ipsec_fs_init(ipsec); if (ret) goto err_fs_init; @@ -383,6 +390,9 @@ void mlx5e_ipsec_init(struct mlx5e_priv *priv) return; err_fs_init: + if (mlx5_ipsec_device_caps(priv->mdev) & MLX5_IPSEC_CAP_PACKET_OFFLOAD) + mlx5e_ipsec_aso_cleanup(ipsec); +err_aso: destroy_workqueue(ipsec->wq); err_wq: kfree(ipsec); @@ -398,6 +408,8 @@ void mlx5e_ipsec_cleanup(struct mlx5e_priv *priv) return; mlx5e_accel_ipsec_fs_cleanup(ipsec); + if (mlx5_ipsec_device_caps(priv->mdev) & MLX5_IPSEC_CAP_PACKET_OFFLOAD) + mlx5e_ipsec_aso_cleanup(ipsec); destroy_workqueue(ipsec->wq); kfree(ipsec); priv->ipsec = NULL; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h index db0ccf2a797a..8e2f88f269ac 100644 --- 
a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h @@ -39,6 +39,7 @@ #include #include #include +#include "lib/aso.h" #define MLX5E_IPSEC_SADB_RX_BITS 10 #define MLX5E_IPSEC_ESN_SCOPE_MID 0x80000000L @@ -97,6 +98,14 @@ struct mlx5e_ipsec_sw_stats { struct mlx5e_ipsec_rx; struct mlx5e_ipsec_tx; +struct mlx5e_ipsec_aso { + u8 ctx[MLX5_ST_SZ_BYTES(ipsec_aso)]; + dma_addr_t dma_addr; + struct mlx5_aso *aso; + u32 pdn; + u32 mkey; +}; + struct mlx5e_ipsec { struct mlx5_core_dev *mdev; DECLARE_HASHTABLE(sadb_rx, MLX5E_IPSEC_SADB_RX_BITS); @@ -107,6 +116,7 @@ struct mlx5e_ipsec { struct mlx5e_ipsec_rx *rx_ipv4; struct mlx5e_ipsec_rx *rx_ipv6; struct mlx5e_ipsec_tx *tx; + struct mlx5e_ipsec_aso *aso; }; struct mlx5e_ipsec_esn_state { @@ -160,6 +170,9 @@ u32 mlx5_ipsec_device_caps(struct mlx5_core_dev *mdev); void mlx5_accel_esp_modify_xfrm(struct mlx5e_ipsec_sa_entry *sa_entry, const struct mlx5_accel_esp_xfrm_attrs *attrs); +int mlx5e_ipsec_aso_init(struct mlx5e_ipsec *ipsec); +void mlx5e_ipsec_aso_cleanup(struct mlx5e_ipsec *ipsec); + static inline struct mlx5_core_dev * mlx5e_ipsec_sa2dev(struct mlx5e_ipsec_sa_entry *sa_entry) { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c index 3f2aeb07ea84..7fef5de55229 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c @@ -2,6 +2,7 @@ /* Copyright (c) 2017, Mellanox Technologies inc. All rights reserved. */ #include "mlx5_core.h" +#include "en.h" #include "ipsec.h" #include "lib/mlx5.h" @@ -207,3 +208,56 @@ void mlx5_accel_esp_modify_xfrm(struct mlx5e_ipsec_sa_entry *sa_entry, memcpy(&sa_entry->attrs, attrs, sizeof(sa_entry->attrs)); } + +int mlx5e_ipsec_aso_init(struct mlx5e_ipsec *ipsec) +{ + struct mlx5_core_dev *mdev = ipsec->mdev; + struct mlx5e_ipsec_aso *aso; + struct mlx5e_hw_objs *res; + struct device *pdev; + int err; + + aso = kzalloc(sizeof(*ipsec->aso), GFP_KERNEL); + if (!aso) + return -ENOMEM; + + res = &mdev->mlx5e_res.hw_objs; + + pdev = mlx5_core_dma_dev(mdev); + aso->dma_addr = dma_map_single(pdev, aso->ctx, sizeof(aso->ctx), + DMA_BIDIRECTIONAL); + err = dma_mapping_error(pdev, aso->dma_addr); + if (err) + goto err_dma; + + aso->aso = mlx5_aso_create(mdev, res->pdn); + if (IS_ERR(aso->aso)) { + err = PTR_ERR(aso->aso); + goto err_aso_create; + } + + ipsec->aso = aso; + return 0; + +err_aso_create: + dma_unmap_single(pdev, aso->dma_addr, sizeof(aso->ctx), + DMA_BIDIRECTIONAL); +err_dma: + kfree(aso); + return err; +} + +void mlx5e_ipsec_aso_cleanup(struct mlx5e_ipsec *ipsec) +{ + struct mlx5_core_dev *mdev = ipsec->mdev; + struct mlx5e_ipsec_aso *aso; + struct device *pdev; + + aso = ipsec->aso; + pdev = mlx5_core_dma_dev(mdev); + + mlx5_aso_destroy(aso->aso); + dma_unmap_single(pdev, aso->dma_addr, sizeof(aso->ctx), + DMA_BIDIRECTIONAL); + kfree(aso); +} From patchwork Fri Dec 2 20:10:31 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13063190 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 879D3C4332F for ; Fri, 2 Dec 2022 20:12:13 +0000 (UTC) Received: 
(majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234944AbiLBUMM (ORCPT ); Fri, 2 Dec 2022 15:12:12 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45952 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234892AbiLBULl (ORCPT ); Fri, 2 Dec 2022 15:11:41 -0500 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 32F3BF1426 for ; Fri, 2 Dec 2022 12:11:39 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id C57C7622CB for ; Fri, 2 Dec 2022 20:11:38 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id AC953C433D6; Fri, 2 Dec 2022 20:11:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1670011898; bh=O7vsvAOyCGRPlXB6A6nGeUCHGA1+K3UDQX1qblRMGC0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Rzr3KTa734y05C6GDCdpi0hRlebpE42UXY3gIxdkNUwq6T1bNiuKwqBBU0jIGxSP/ WTuMWfWj25vsvwqnVNu6rWSZzcCDcb7y96Rw26JbG9gfrMsOfvqUtz2/G7KMvQzQcH 6EDJG7//xPi+otT2V0tPCH4th8+ZjSKsUfzg/QQOwcrFVqrfd8cmS4KcMXCEoK3FM7 XS0+iB6qQ/UPeCj4si7D7lURqpCsQ0Fl2Mvrw/1QjTFe90YnRJseUMeXyZHtflCyDL Q5dcMRC021er7RzNg0VXibg9ksn7hGpMrxGj734v94zfWnlHsFOMapNE2k3cPeFJQk bTuiw/4ys+5Dw== From: Leon Romanovsky To: Steffen Klassert Cc: Leon Romanovsky , "David S. Miller" , Eric Dumazet , Herbert Xu , Jakub Kicinski , netdev@vger.kernel.org, Bharat Bhushan , Raed Salem , Saeed Mahameed Subject: [PATCH xfrm-next 10/16] net/mlx5e: Create hardware IPsec packet offload objects Date: Fri, 2 Dec 2022 22:10:31 +0200 Message-Id: X-Mailer: git-send-email 2.38.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Leon Romanovsky Create initial hardware IPsec packet offload object and connect it to advanced steering operation (ASO) context and queue, so the data path can communicate with the stack. 
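Concretely, the creation path only programs the ASO section of the IPsec general object when the xfrm state asks for packet offload. A reduced sketch of that part is below; the field names are taken from the diff that follows, and this is not the complete object setup:

	void *aso_ctx = MLX5_ADDR_OF(ipsec_obj, obj, ipsec_aso);

	/* Connect the object to the ASO context: PD, return register,
	 * and the TX sequence-number increment mode.
	 */
	MLX5_SET(ipsec_obj, obj, ipsec_aso_access_pd, pdn);
	MLX5_SET(ipsec_obj, obj, full_offload, 1);
	MLX5_SET(ipsec_obj, obj, aso_return_reg, MLX5_IPSEC_ASO_REG_C_4_5);
	MLX5_SET(ipsec_aso, aso_ctx, valid, 1);
	if (attrs->dir == XFRM_DEV_OFFLOAD_OUT)
		MLX5_SET(ipsec_aso, aso_ctx, mode, MLX5_IPSEC_ASO_INC_SN);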
Reviewed-by: Raed Salem Reviewed-by: Saeed Mahameed Signed-off-by: Leon Romanovsky --- .../mellanox/mlx5/core/en_accel/ipsec.c | 1 + .../mellanox/mlx5/core/en_accel/ipsec.h | 3 +- .../mlx5/core/en_accel/ipsec_offload.c | 37 +++++++++++++++++++ 3 files changed, 39 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c index d2c814e7af97..c5bccc0df60d 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c @@ -176,6 +176,7 @@ mlx5e_ipsec_build_accel_xfrm_attrs(struct mlx5e_ipsec_sa_entry *sa_entry, memcpy(&attrs->saddr, x->props.saddr.a6, sizeof(attrs->saddr)); memcpy(&attrs->daddr, x->id.daddr.a6, sizeof(attrs->daddr)); attrs->family = x->props.family; + attrs->type = x->xso.type; } static inline int mlx5e_xfrm_validate_state(struct xfrm_state *x) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h index 8e2f88f269ac..2c9aedf6b0ef 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h @@ -73,6 +73,7 @@ struct mlx5_accel_esp_xfrm_attrs { u8 dir : 2; u8 esn_overlap : 1; u8 esn_trigger : 1; + u8 type : 2; u8 family; u32 replay_window; }; @@ -102,8 +103,6 @@ struct mlx5e_ipsec_aso { u8 ctx[MLX5_ST_SZ_BYTES(ipsec_aso)]; dma_addr_t dma_addr; struct mlx5_aso *aso; - u32 pdn; - u32 mkey; }; struct mlx5e_ipsec { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c index 7fef5de55229..fc88454aaf8d 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c @@ -53,6 +53,38 @@ u32 mlx5_ipsec_device_caps(struct mlx5_core_dev *mdev) } EXPORT_SYMBOL_GPL(mlx5_ipsec_device_caps); +static void mlx5e_ipsec_packet_setup(void *obj, u32 pdn, + struct mlx5_accel_esp_xfrm_attrs *attrs) +{ + void *aso_ctx; + + aso_ctx = MLX5_ADDR_OF(ipsec_obj, obj, ipsec_aso); + if (attrs->esn_trigger) { + MLX5_SET(ipsec_aso, aso_ctx, esn_event_arm, 1); + + if (attrs->dir == XFRM_DEV_OFFLOAD_IN) { + MLX5_SET(ipsec_aso, aso_ctx, window_sz, + attrs->replay_window / 64); + MLX5_SET(ipsec_aso, aso_ctx, mode, + MLX5_IPSEC_ASO_REPLAY_PROTECTION); + } + } + + /* ASO context */ + MLX5_SET(ipsec_obj, obj, ipsec_aso_access_pd, pdn); + MLX5_SET(ipsec_obj, obj, full_offload, 1); + MLX5_SET(ipsec_aso, aso_ctx, valid, 1); + /* MLX5_IPSEC_ASO_REG_C_4_5 is type C register that is used + * in flow steering to perform matching against. Please be + * aware that this register was chosen arbitrary and can't + * be used in other places as long as IPsec packet offload + * active. 
+ */ + MLX5_SET(ipsec_obj, obj, aso_return_reg, MLX5_IPSEC_ASO_REG_C_4_5); + if (attrs->dir == XFRM_DEV_OFFLOAD_OUT) + MLX5_SET(ipsec_aso, aso_ctx, mode, MLX5_IPSEC_ASO_INC_SN); +} + static int mlx5_create_ipsec_obj(struct mlx5e_ipsec_sa_entry *sa_entry) { struct mlx5_accel_esp_xfrm_attrs *attrs = &sa_entry->attrs; @@ -61,6 +93,7 @@ static int mlx5_create_ipsec_obj(struct mlx5e_ipsec_sa_entry *sa_entry) u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)]; u32 in[MLX5_ST_SZ_DW(create_ipsec_obj_in)] = {}; void *obj, *salt_p, *salt_iv_p; + struct mlx5e_hw_objs *res; int err; obj = MLX5_ADDR_OF(create_ipsec_obj_in, in, ipsec_object); @@ -87,6 +120,10 @@ static int mlx5_create_ipsec_obj(struct mlx5e_ipsec_sa_entry *sa_entry) MLX5_SET(general_obj_in_cmd_hdr, in, obj_type, MLX5_GENERAL_OBJECT_TYPES_IPSEC); + res = &mdev->mlx5e_res.hw_objs; + if (attrs->type == XFRM_DEV_OFFLOAD_PACKET) + mlx5e_ipsec_packet_setup(obj, res->pdn, attrs); + err = mlx5_cmd_exec(mdev, in, sizeof(in), out, sizeof(out)); if (!err) sa_entry->ipsec_obj_id = From patchwork Fri Dec 2 20:10:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13063187 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2881EC4332F for ; Fri, 2 Dec 2022 20:11:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234881AbiLBULu (ORCPT ); Fri, 2 Dec 2022 15:11:50 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45684 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234880AbiLBULd (ORCPT ); Fri, 2 Dec 2022 15:11:33 -0500 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2200DF1CC4 for ; Fri, 2 Dec 2022 12:11:27 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id AD6EE622E0 for ; Fri, 2 Dec 2022 20:11:26 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5BD70C433D7; Fri, 2 Dec 2022 20:11:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1670011886; bh=FxYRQnPqsWLxOrXu1p+qun82TQtXFu3goc3CPSYlDEM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=BmrqpxYqyo8C2FleXYcGqkshSuqN/hfIh9CJjEWhLuFN9Q/FGwEZR2F+EqX1J5aTx 7zlE3EotqsCnbFFLgUX2WuCmLXe7EZk5Kd+5JtpHH5PCSLroQmIB6PyYzbGqhsbTQ6 MoweQYL4swrSuNxKA1hiERJSYV1mDw2HNixIVAu6tSSPFATEUJSC9D9XaW4bFhT1eY 4vReZoMM5f8pQdpyz7knrl/RcsM/9xqyAs+tR40mdzNwjSHTGu16IIYIDDlDcXG3sy Z6zFLYI9CFZPFRY94L6mJg7WxdnjuN0Hp0oQhZyd2rSMScJHp2mmHmo62jOi2Uxghv U58HqRJem2WJw== From: Leon Romanovsky To: Steffen Klassert Cc: Leon Romanovsky , "David S. 
Miller" , Eric Dumazet , Herbert Xu , Jakub Kicinski , netdev@vger.kernel.org, Bharat Bhushan , Raed Salem , Saeed Mahameed Subject: [PATCH xfrm-next 11/16] net/mlx5e: Move IPsec flow table creation to separate function Date: Fri, 2 Dec 2022 22:10:32 +0200 Message-Id: <9f1b2e1809aa97642f46c2a4a597e52702a1570f.1670011671.git.leonro@nvidia.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Leon Romanovsky Even now, to support IPsec crypto, the RX and TX paths use same logic to create flow tables. In the following patches, we will add more tables to support IPsec packet offload. So reuse existing code and rewrite it to support IPsec packet offload from the beginning. Reviewed-by: Raed Salem Reviewed-by: Saeed Mahameed Signed-off-by: Leon Romanovsky --- .../mellanox/mlx5/core/en_accel/ipsec_fs.c | 45 ++++++++++--------- 1 file changed, 23 insertions(+), 22 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c index 8e87d8d02511..f65a74e3d648 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c @@ -42,6 +42,21 @@ static enum mlx5_traffic_types family2tt(u32 family) return MLX5_TT_IPV6_IPSEC_ESP; } +static struct mlx5_flow_table *ipsec_ft_create(struct mlx5_flow_namespace *ns, + int level, int prio, + int max_num_groups) +{ + struct mlx5_flow_table_attr ft_attr = {}; + + ft_attr.autogroup.num_reserved_entries = 1; + ft_attr.autogroup.max_num_groups = max_num_groups; + ft_attr.max_fte = NUM_IPSEC_FTE; + ft_attr.level = level; + ft_attr.prio = prio; + + return mlx5_create_auto_grouped_flow_table(ns, &ft_attr); +} + static int rx_err_add_rule(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_rx *rx, struct mlx5e_ipsec_rx_err *rx_err) @@ -160,17 +175,13 @@ static int rx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec, { struct mlx5_flow_namespace *ns = mlx5e_fs_get_ns(ipsec->fs, false); struct mlx5_ttc_table *ttc = mlx5e_fs_get_ttc(ipsec->fs, false); - struct mlx5_flow_table_attr ft_attr = {}; struct mlx5_flow_table *ft; int err; rx->default_dest = mlx5_ttc_get_default_dest(ttc, family2tt(family)); - ft_attr.max_fte = 1; - ft_attr.autogroup.max_num_groups = 1; - ft_attr.level = MLX5E_ACCEL_FS_ESP_FT_ERR_LEVEL; - ft_attr.prio = MLX5E_NIC_PRIO; - ft = mlx5_create_auto_grouped_flow_table(ns, &ft_attr); + ft = ipsec_ft_create(ns, MLX5E_ACCEL_FS_ESP_FT_ERR_LEVEL, + MLX5E_NIC_PRIO, 1); if (IS_ERR(ft)) return PTR_ERR(ft); @@ -180,12 +191,8 @@ static int rx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec, goto err_add; /* Create FT */ - ft_attr.max_fte = NUM_IPSEC_FTE; - ft_attr.level = MLX5E_ACCEL_FS_ESP_FT_LEVEL; - ft_attr.prio = MLX5E_NIC_PRIO; - ft_attr.autogroup.num_reserved_entries = 1; - ft_attr.autogroup.max_num_groups = 1; - ft = mlx5_create_auto_grouped_flow_table(ns, &ft_attr); + ft = ipsec_ft_create(ns, MLX5E_ACCEL_FS_ESP_FT_LEVEL, MLX5E_NIC_PRIO, + 1); if (IS_ERR(ft)) { err = PTR_ERR(ft); goto err_fs_ft; @@ -273,18 +280,12 @@ static void rx_ft_put(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec, /* IPsec TX flow steering */ static int tx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_tx *tx) { - struct mlx5_flow_table_attr ft_attr = {}; struct mlx5_flow_table *ft; - int err; - ft_attr.max_fte = NUM_IPSEC_FTE; - ft_attr.autogroup.max_num_groups = 
1; - ft = mlx5_create_auto_grouped_flow_table(tx->ns, &ft_attr); - if (IS_ERR(ft)) { - err = PTR_ERR(ft); - mlx5_core_err(mdev, "fail to create ipsec tx ft err=%d\n", err); - return err; - } + ft = ipsec_ft_create(tx->ns, 0, 0, 1); + if (IS_ERR(ft)) + return PTR_ERR(ft); + tx->ft.sa = ft; return 0; } From patchwork Fri Dec 2 20:10:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13063188 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id ADE40C4332F for ; Fri, 2 Dec 2022 20:12:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234921AbiLBUMA (ORCPT ); Fri, 2 Dec 2022 15:12:00 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45880 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234856AbiLBULe (ORCPT ); Fri, 2 Dec 2022 15:11:34 -0500 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 27CFFF1CCF for ; Fri, 2 Dec 2022 12:11:31 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id A85E7622CB for ; Fri, 2 Dec 2022 20:11:30 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8C48DC433C1; Fri, 2 Dec 2022 20:11:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1670011890; bh=tgMh15cyno/75xkIEGH1Z8OvjMnkpPDH4FcE5iTR7AA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=PAwrUQd0lj42iJnGrirClgxiXZJNd7E1xiGv9ftAfhxQJHBYwnhTS2vkHPCgD2oMj K+z3VgLRxcLGm2ChV1WcAJJxGgEPA5xYXB47XGjYBJwwo1+rvwCcPDhM+HyrJgVNie rj/txDjzePUZ7HV11r1qe2mZqTTmpRAx+k+xLdAU20+7UuhtOxnoxPhQBKUDlwSb00 9phIa0dJqGh3j9D15d3vqFJGynfFutTnJr/j8ITWgE7nXLLHspqtoZBijo6fcxshXg 3nub+U945mmvOK9SHwi0LWj8uHDc+jyEk/FFbBI66M/jGQWgY4WEy5plVuvq8tHkF8 Z2XhF7I19iJvA== From: Leon Romanovsky To: Steffen Klassert Cc: Leon Romanovsky , "David S. Miller" , Eric Dumazet , Herbert Xu , Jakub Kicinski , netdev@vger.kernel.org, Bharat Bhushan , Raed Salem , Saeed Mahameed Subject: [PATCH xfrm-next 12/16] net/mlx5e: Refactor FTE setup code to be more clear Date: Fri, 2 Dec 2022 22:10:33 +0200 Message-Id: <2752cbcd615ce39f927a7c074d53a08a9bb4ed43.1670011671.git.leonro@nvidia.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Leon Romanovsky The policy offload logic needs to set flow steering rule that match on saddr and daddr too, so factor out this code to separate functions, together with code alignment to netdev coding pattern of relying on family type. As part of this change, let's separate more logic from setup_fte_common to make sure that the function names describe that is done in the function better than general *common* name. 
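With the split in place, each rule builder simply composes the helpers it needs instead of calling one catch-all routine. For example, the RX SA rule ends up being built roughly as follows (mirroring the diff below):

	if (attrs->family == AF_INET)
		setup_fte_addr4(spec, &attrs->saddr.a4, &attrs->daddr.a4);
	else
		setup_fte_addr6(spec, attrs->saddr.a6, attrs->daddr.a6);

	setup_fte_spi(spec, attrs->spi);
	setup_fte_esp(spec);
	setup_fte_no_frags(spec);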
Reviewed-by: Raed Salem Reviewed-by: Saeed Mahameed Signed-off-by: Leon Romanovsky --- .../mellanox/mlx5/core/en_accel/ipsec_fs.c | 139 +++++++++++------- 1 file changed, 85 insertions(+), 54 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c index f65a74e3d648..4c5904544bda 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c @@ -326,61 +326,78 @@ static void tx_ft_put(struct mlx5e_ipsec *ipsec) mutex_unlock(&tx->ft.mutex); } -static void setup_fte_common(struct mlx5_accel_esp_xfrm_attrs *attrs, - u32 ipsec_obj_id, - struct mlx5_flow_spec *spec, - struct mlx5_flow_act *flow_act) +static void setup_fte_addr4(struct mlx5_flow_spec *spec, __be32 *saddr, + __be32 *daddr) { - u8 ip_version = (attrs->family == AF_INET) ? 4 : 6; + spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS; - spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS | MLX5_MATCH_MISC_PARAMETERS; - - /* ip_version */ MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.ip_version); - MLX5_SET(fte_match_param, spec->match_value, outer_headers.ip_version, ip_version); + MLX5_SET(fte_match_param, spec->match_value, outer_headers.ip_version, 4); + + memcpy(MLX5_ADDR_OF(fte_match_param, spec->match_value, + outer_headers.src_ipv4_src_ipv6.ipv4_layout.ipv4), saddr, 4); + memcpy(MLX5_ADDR_OF(fte_match_param, spec->match_value, + outer_headers.dst_ipv4_dst_ipv6.ipv4_layout.ipv4), daddr, 4); + MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, + outer_headers.src_ipv4_src_ipv6.ipv4_layout.ipv4); + MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, + outer_headers.dst_ipv4_dst_ipv6.ipv4_layout.ipv4); +} - /* Non fragmented */ - MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.frag); - MLX5_SET(fte_match_param, spec->match_value, outer_headers.frag, 0); +static void setup_fte_addr6(struct mlx5_flow_spec *spec, __be32 *saddr, + __be32 *daddr) +{ + spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS; + + MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.ip_version); + MLX5_SET(fte_match_param, spec->match_value, outer_headers.ip_version, 6); + + memcpy(MLX5_ADDR_OF(fte_match_param, spec->match_value, + outer_headers.src_ipv4_src_ipv6.ipv6_layout.ipv6), saddr, 16); + memcpy(MLX5_ADDR_OF(fte_match_param, spec->match_value, + outer_headers.dst_ipv4_dst_ipv6.ipv6_layout.ipv6), daddr, 16); + memset(MLX5_ADDR_OF(fte_match_param, spec->match_criteria, + outer_headers.src_ipv4_src_ipv6.ipv6_layout.ipv6), 0xff, 16); + memset(MLX5_ADDR_OF(fte_match_param, spec->match_criteria, + outer_headers.dst_ipv4_dst_ipv6.ipv6_layout.ipv6), 0xff, 16); +} +static void setup_fte_esp(struct mlx5_flow_spec *spec) +{ /* ESP header */ + spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS; + MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.ip_protocol); MLX5_SET(fte_match_param, spec->match_value, outer_headers.ip_protocol, IPPROTO_ESP); +} +static void setup_fte_spi(struct mlx5_flow_spec *spec, u32 spi) +{ /* SPI number */ + spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS; + MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, misc_parameters.outer_esp_spi); - MLX5_SET(fte_match_param, spec->match_value, - misc_parameters.outer_esp_spi, attrs->spi); - - if (ip_version == 4) { - memcpy(MLX5_ADDR_OF(fte_match_param, spec->match_value, - 
outer_headers.src_ipv4_src_ipv6.ipv4_layout.ipv4), - &attrs->saddr.a4, 4); - memcpy(MLX5_ADDR_OF(fte_match_param, spec->match_value, - outer_headers.dst_ipv4_dst_ipv6.ipv4_layout.ipv4), - &attrs->daddr.a4, 4); - MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, - outer_headers.src_ipv4_src_ipv6.ipv4_layout.ipv4); - MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, - outer_headers.dst_ipv4_dst_ipv6.ipv4_layout.ipv4); - } else { - memcpy(MLX5_ADDR_OF(fte_match_param, spec->match_value, - outer_headers.src_ipv4_src_ipv6.ipv6_layout.ipv6), - &attrs->saddr.a6, 16); - memcpy(MLX5_ADDR_OF(fte_match_param, spec->match_value, - outer_headers.dst_ipv4_dst_ipv6.ipv6_layout.ipv6), - &attrs->daddr.a6, 16); - memset(MLX5_ADDR_OF(fte_match_param, spec->match_criteria, - outer_headers.src_ipv4_src_ipv6.ipv6_layout.ipv6), - 0xff, 16); - memset(MLX5_ADDR_OF(fte_match_param, spec->match_criteria, - outer_headers.dst_ipv4_dst_ipv6.ipv6_layout.ipv6), - 0xff, 16); - } + MLX5_SET(fte_match_param, spec->match_value, misc_parameters.outer_esp_spi, spi); +} - flow_act->crypto.type = MLX5_FLOW_CONTEXT_ENCRYPT_DECRYPT_TYPE_IPSEC; - flow_act->crypto.obj_id = ipsec_obj_id; - flow_act->flags |= FLOW_ACT_NO_APPEND; +static void setup_fte_no_frags(struct mlx5_flow_spec *spec) +{ + /* Non fragmented */ + spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS; + + MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.frag); + MLX5_SET(fte_match_param, spec->match_value, outer_headers.frag, 0); +} + +static void setup_fte_reg_a(struct mlx5_flow_spec *spec) +{ + /* Add IPsec indicator in metadata_reg_a */ + spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS_2; + + MLX5_SET(fte_match_param, spec->match_criteria, + misc_parameters_2.metadata_reg_a, MLX5_ETH_WQE_FT_META_IPSEC); + MLX5_SET(fte_match_param, spec->match_value, + misc_parameters_2.metadata_reg_a, MLX5_ETH_WQE_FT_META_IPSEC); } static int rx_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry) @@ -390,7 +407,6 @@ static int rx_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry) struct mlx5_accel_esp_xfrm_attrs *attrs = &sa_entry->attrs; struct mlx5_core_dev *mdev = mlx5e_ipsec_sa2dev(sa_entry); struct mlx5e_ipsec *ipsec = sa_entry->ipsec; - u32 ipsec_obj_id = sa_entry->ipsec_obj_id; struct mlx5_modify_hdr *modify_hdr = NULL; struct mlx5_flow_destination dest = {}; struct mlx5_flow_act flow_act = {}; @@ -409,13 +425,21 @@ static int rx_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry) goto out_err; } - setup_fte_common(attrs, ipsec_obj_id, spec, &flow_act); + if (attrs->family == AF_INET) + setup_fte_addr4(spec, &attrs->saddr.a4, &attrs->daddr.a4); + else + setup_fte_addr6(spec, attrs->saddr.a6, attrs->daddr.a6); + + setup_fte_spi(spec, attrs->spi); + setup_fte_esp(spec); + setup_fte_no_frags(spec); /* Set bit[31] ipsec marker */ /* Set bit[23-0] ipsec_obj_id */ MLX5_SET(set_action_in, action, action_type, MLX5_ACTION_TYPE_SET); MLX5_SET(set_action_in, action, field, MLX5_ACTION_IN_FIELD_METADATA_REG_B); - MLX5_SET(set_action_in, action, data, (ipsec_obj_id | BIT(31))); + MLX5_SET(set_action_in, action, data, + (sa_entry->ipsec_obj_id | BIT(31))); MLX5_SET(set_action_in, action, offset, 0); MLX5_SET(set_action_in, action, length, 32); @@ -429,6 +453,9 @@ static int rx_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry) goto out_err; } + flow_act.crypto.type = MLX5_FLOW_CONTEXT_ENCRYPT_DECRYPT_TYPE_IPSEC; + flow_act.crypto.obj_id = sa_entry->ipsec_obj_id; + flow_act.flags |= FLOW_ACT_NO_APPEND; flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | 
MLX5_FLOW_CONTEXT_ACTION_CRYPTO_DECRYPT | MLX5_FLOW_CONTEXT_ACTION_MOD_HDR; @@ -458,6 +485,7 @@ static int rx_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry) static int tx_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry) { + struct mlx5_accel_esp_xfrm_attrs *attrs = &sa_entry->attrs; struct mlx5_core_dev *mdev = mlx5e_ipsec_sa2dev(sa_entry); struct mlx5e_ipsec *ipsec = sa_entry->ipsec; struct mlx5_flow_act flow_act = {}; @@ -476,16 +504,19 @@ static int tx_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry) goto out; } - setup_fte_common(&sa_entry->attrs, sa_entry->ipsec_obj_id, spec, - &flow_act); + if (attrs->family == AF_INET) + setup_fte_addr4(spec, &attrs->saddr.a4, &attrs->daddr.a4); + else + setup_fte_addr6(spec, attrs->saddr.a6, attrs->daddr.a6); - /* Add IPsec indicator in metadata_reg_a */ - spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS_2; - MLX5_SET(fte_match_param, spec->match_criteria, misc_parameters_2.metadata_reg_a, - MLX5_ETH_WQE_FT_META_IPSEC); - MLX5_SET(fte_match_param, spec->match_value, misc_parameters_2.metadata_reg_a, - MLX5_ETH_WQE_FT_META_IPSEC); + setup_fte_spi(spec, attrs->spi); + setup_fte_esp(spec); + setup_fte_no_frags(spec); + setup_fte_reg_a(spec); + flow_act.crypto.type = MLX5_FLOW_CONTEXT_ENCRYPT_DECRYPT_TYPE_IPSEC; + flow_act.crypto.obj_id = sa_entry->ipsec_obj_id; + flow_act.flags |= FLOW_ACT_NO_APPEND; flow_act.action = MLX5_FLOW_CONTEXT_ACTION_ALLOW | MLX5_FLOW_CONTEXT_ACTION_CRYPTO_ENCRYPT; rule = mlx5_add_flow_rules(tx->ft.sa, spec, &flow_act, NULL, 0); From patchwork Fri Dec 2 20:10:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13063189 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 93080C4321E for ; Fri, 2 Dec 2022 20:12:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234887AbiLBUMD (ORCPT ); Fri, 2 Dec 2022 15:12:03 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45938 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234818AbiLBULk (ORCPT ); Fri, 2 Dec 2022 15:11:40 -0500 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 13862F1CD4 for ; Fri, 2 Dec 2022 12:11:35 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id A081D6221E for ; Fri, 2 Dec 2022 20:11:34 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 84D55C433C1; Fri, 2 Dec 2022 20:11:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1670011894; bh=3wvahVP62xO9ka70VAYJcfJqtvSEBF8q0TQ8QBgR+uQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=u72mPg/ZNEbEl0ykmTAx3w92xlIPBe2KBXbEhfRye0z7j1FvaOf1InLi4AxPmPqr2 pOtQUAfwmd/aptJFNv0gSEH4KwSPI5vLuI/p7MzIaIYoM6os6C0epkJFE58D/+X6my Ev107tAsiDxPyCcKSIkfeoUEDnL9e23GyAMvywr/E44ODf0iFWHNNXE64f7UEIJkrd RzmAqiiAbpvdUzR13j7Vz3XmTB0WyaLliGZgTKD7t5ifoyC3BhWWc9EORttmXBFxkH fPRDOsDGVBZTYVQK0CuD2e2Ux8ZUep2oC1cJ76WuDb88KhaH9lGQhfGXSaeMIrrsDO F008LSBxdzYiA== From: Leon Romanovsky To: Steffen 
Klassert Cc: Leon Romanovsky , "David S. Miller" , Eric Dumazet , Herbert Xu , Jakub Kicinski , netdev@vger.kernel.org, Bharat Bhushan , Raed Salem , Saeed Mahameed Subject: [PATCH xfrm-next 13/16] net/mlx5e: Flatten the IPsec RX add rule path Date: Fri, 2 Dec 2022 22:10:34 +0200 Message-Id: X-Mailer: git-send-email 2.38.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Leon Romanovsky Rewrote the IPsec RX add rule path to be less convoluted and don't rely on pre-initialized variables. The code now has clean linear flow with clean separation between error and success paths. Reviewed-by: Raed Salem Reviewed-by: Saeed Mahameed Signed-off-by: Leon Romanovsky --- .../mellanox/mlx5/core/en_accel/ipsec.h | 2 +- .../mellanox/mlx5/core/en_accel/ipsec_fs.c | 88 +++++++++++-------- 2 files changed, 53 insertions(+), 37 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h index 2c9aedf6b0ef..990378d52fd4 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h @@ -126,7 +126,7 @@ struct mlx5e_ipsec_esn_state { struct mlx5e_ipsec_rule { struct mlx5_flow_handle *rule; - struct mlx5_modify_hdr *set_modify_hdr; + struct mlx5_modify_hdr *modify_hdr; }; struct mlx5e_ipsec_modify_state_work { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c index 4c5904544bda..b81046c71e6c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c @@ -400,20 +400,52 @@ static void setup_fte_reg_a(struct mlx5_flow_spec *spec) misc_parameters_2.metadata_reg_a, MLX5_ETH_WQE_FT_META_IPSEC); } -static int rx_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry) +static int setup_modify_header(struct mlx5_core_dev *mdev, u32 val, u8 dir, + struct mlx5_flow_act *flow_act) { u8 action[MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)] = {}; + enum mlx5_flow_namespace_type ns_type; + struct mlx5_modify_hdr *modify_hdr; + + MLX5_SET(set_action_in, action, action_type, MLX5_ACTION_TYPE_SET); + switch (dir) { + case XFRM_DEV_OFFLOAD_IN: + MLX5_SET(set_action_in, action, field, + MLX5_ACTION_IN_FIELD_METADATA_REG_B); + ns_type = MLX5_FLOW_NAMESPACE_KERNEL; + break; + default: + return -EINVAL; + } + + MLX5_SET(set_action_in, action, data, val); + MLX5_SET(set_action_in, action, offset, 0); + MLX5_SET(set_action_in, action, length, 32); + + modify_hdr = mlx5_modify_header_alloc(mdev, ns_type, 1, action); + if (IS_ERR(modify_hdr)) { + mlx5_core_err(mdev, "Failed to allocate modify_header %ld\n", + PTR_ERR(modify_hdr)); + return PTR_ERR(modify_hdr); + } + + flow_act->modify_hdr = modify_hdr; + flow_act->action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR; + return 0; +} + +static int rx_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry) +{ struct mlx5e_ipsec_rule *ipsec_rule = &sa_entry->ipsec_rule; struct mlx5_accel_esp_xfrm_attrs *attrs = &sa_entry->attrs; struct mlx5_core_dev *mdev = mlx5e_ipsec_sa2dev(sa_entry); struct mlx5e_ipsec *ipsec = sa_entry->ipsec; - struct mlx5_modify_hdr *modify_hdr = NULL; struct mlx5_flow_destination dest = {}; struct mlx5_flow_act flow_act = {}; struct mlx5_flow_handle *rule; struct mlx5_flow_spec *spec; struct mlx5e_ipsec_rx *rx; - int err = 0; + int err; rx = rx_ft_get(mdev, ipsec, attrs->family); 
if (IS_ERR(rx)) @@ -422,7 +454,7 @@ static int rx_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry) spec = kvzalloc(sizeof(*spec), GFP_KERNEL); if (!spec) { err = -ENOMEM; - goto out_err; + goto err_alloc; } if (attrs->family == AF_INET) @@ -434,52 +466,36 @@ static int rx_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry) setup_fte_esp(spec); setup_fte_no_frags(spec); - /* Set bit[31] ipsec marker */ - /* Set bit[23-0] ipsec_obj_id */ - MLX5_SET(set_action_in, action, action_type, MLX5_ACTION_TYPE_SET); - MLX5_SET(set_action_in, action, field, MLX5_ACTION_IN_FIELD_METADATA_REG_B); - MLX5_SET(set_action_in, action, data, - (sa_entry->ipsec_obj_id | BIT(31))); - MLX5_SET(set_action_in, action, offset, 0); - MLX5_SET(set_action_in, action, length, 32); - - modify_hdr = mlx5_modify_header_alloc(mdev, MLX5_FLOW_NAMESPACE_KERNEL, - 1, action); - if (IS_ERR(modify_hdr)) { - err = PTR_ERR(modify_hdr); - mlx5_core_err(mdev, - "fail to alloc ipsec set modify_header_id err=%d\n", err); - modify_hdr = NULL; - goto out_err; - } + err = setup_modify_header(mdev, sa_entry->ipsec_obj_id | BIT(31), + XFRM_DEV_OFFLOAD_IN, &flow_act); + if (err) + goto err_mod_header; flow_act.crypto.type = MLX5_FLOW_CONTEXT_ENCRYPT_DECRYPT_TYPE_IPSEC; flow_act.crypto.obj_id = sa_entry->ipsec_obj_id; flow_act.flags |= FLOW_ACT_NO_APPEND; - flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | - MLX5_FLOW_CONTEXT_ACTION_CRYPTO_DECRYPT | - MLX5_FLOW_CONTEXT_ACTION_MOD_HDR; + flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | + MLX5_FLOW_CONTEXT_ACTION_CRYPTO_DECRYPT; dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE; - flow_act.modify_hdr = modify_hdr; dest.ft = rx->rx_err.ft; rule = mlx5_add_flow_rules(rx->ft.sa, spec, &flow_act, &dest, 1); if (IS_ERR(rule)) { err = PTR_ERR(rule); mlx5_core_err(mdev, "fail to add RX ipsec rule err=%d\n", err); - goto out_err; + goto err_add_flow; } + kvfree(spec); ipsec_rule->rule = rule; - ipsec_rule->set_modify_hdr = modify_hdr; - goto out; - -out_err: - if (modify_hdr) - mlx5_modify_header_dealloc(mdev, modify_hdr); - rx_ft_put(mdev, ipsec, attrs->family); + ipsec_rule->modify_hdr = flow_act.modify_hdr; + return 0; -out: +err_add_flow: + mlx5_modify_header_dealloc(mdev, flow_act.modify_hdr); +err_mod_header: kvfree(spec); +err_alloc: + rx_ft_put(mdev, ipsec, attrs->family); return err; } @@ -555,7 +571,7 @@ void mlx5e_accel_ipsec_fs_del_rule(struct mlx5e_ipsec_sa_entry *sa_entry) return; } - mlx5_modify_header_dealloc(mdev, ipsec_rule->set_modify_hdr); + mlx5_modify_header_dealloc(mdev, ipsec_rule->modify_hdr); rx_ft_put(mdev, sa_entry->ipsec, sa_entry->attrs.family); } From patchwork Fri Dec 2 20:10:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13063192 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2C2E3C4332F for ; Fri, 2 Dec 2022 20:12:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234911AbiLBUMh (ORCPT ); Fri, 2 Dec 2022 15:12:37 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46136 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234914AbiLBULu (ORCPT ); Fri, 2 Dec 2022 15:11:50 -0500 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) 
by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D71E0F1CF8 for ; Fri, 2 Dec 2022 12:11:48 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 92593B8228B for ; Fri, 2 Dec 2022 20:11:47 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id D1194C433D7; Fri, 2 Dec 2022 20:11:45 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1670011906; bh=v2mPIrmT6OyhIDZZ6UNY0tbWeSQufKUyQ7IKZwfYGd8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=GXtMdzdhvSp5naJCIFpbNgw6VCm+tqgsGvkaJN697mtYzkasQbjKof5h16QzkzsfT pSO01NasfAkMwlO0RuPdN8VbzLG8nV/32vPIugiz0gN9jVa+XhjQ63BIQfCjqAuhIi BHpr8OMXJUizfMA4kxviLm4I4+SP5NyzJLqgxlOIQmvYUhK3S06t9d3EBfNSgA8IqB ekj11s/Tz8fmyVYvVMv0L37AfISY1AaPhOYqJsdrFTlD1ppPSCQqwwlRIbj18T81Qr Om31fS7pYQ8WolhRAr2UFIaygbZoQnu8jx4dxsLH2zksTnmnwL2PogdtaeVfouF1xJ hH6MfuEUK7Hcw== From: Leon Romanovsky To: Steffen Klassert Cc: Leon Romanovsky , "David S. Miller" , Eric Dumazet , Herbert Xu , Jakub Kicinski , netdev@vger.kernel.org, Bharat Bhushan , Saeed Mahameed Subject: [PATCH xfrm-next 14/16] net/mlx5e: Make clear what IPsec rx_err does Date: Fri, 2 Dec 2022 22:10:35 +0200 Message-Id: X-Mailer: git-send-email 2.38.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Leon Romanovsky Reuse the existing struct that holds all information about the modify header pointer and rule. This helps to reduce the ambiguity of the name _err_, which doesn't describe the real purpose of that flow table, rule and function: to copy the status result from HW to the stack.
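The struct being reused is the plain rule/modify-header pair introduced earlier in the series, so after this patch the RX side carries no special-purpose status type anymore. Roughly, as the diff below shows:

struct mlx5e_ipsec_rule {
	struct mlx5_flow_handle *rule;
	struct mlx5_modify_hdr *modify_hdr;
};

struct mlx5e_ipsec_rx {
	struct mlx5e_ipsec_ft ft;		/* ft.status replaces rx_err.ft */
	struct mlx5_flow_group *miss_group;
	struct mlx5_flow_handle *miss_rule;
	struct mlx5_flow_destination default_dest;
	struct mlx5e_ipsec_rule status;		/* was struct mlx5e_ipsec_rx_err rx_err */
};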
Reviewed-by: Saeed Mahameed Signed-off-by: Leon Romanovsky --- .../mellanox/mlx5/core/en_accel/ipsec_fs.c | 38 ++++++++----------- 1 file changed, 16 insertions(+), 22 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c index b81046c71e6c..b89001538abd 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c @@ -9,15 +9,10 @@ #define NUM_IPSEC_FTE BIT(15) -struct mlx5e_ipsec_rx_err { - struct mlx5_flow_table *ft; - struct mlx5_flow_handle *rule; - struct mlx5_modify_hdr *copy_modify_hdr; -}; - struct mlx5e_ipsec_ft { struct mutex mutex; /* Protect changes to this struct */ struct mlx5_flow_table *sa; + struct mlx5_flow_table *status; u32 refcnt; }; @@ -26,7 +21,7 @@ struct mlx5e_ipsec_rx { struct mlx5_flow_group *miss_group; struct mlx5_flow_handle *miss_rule; struct mlx5_flow_destination default_dest; - struct mlx5e_ipsec_rx_err rx_err; + struct mlx5e_ipsec_rule status; }; struct mlx5e_ipsec_tx { @@ -57,9 +52,8 @@ static struct mlx5_flow_table *ipsec_ft_create(struct mlx5_flow_namespace *ns, return mlx5_create_auto_grouped_flow_table(ns, &ft_attr); } -static int rx_err_add_rule(struct mlx5_core_dev *mdev, - struct mlx5e_ipsec_rx *rx, - struct mlx5e_ipsec_rx_err *rx_err) +static int ipsec_status_rule(struct mlx5_core_dev *mdev, + struct mlx5e_ipsec_rx *rx) { u8 action[MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)] = {}; struct mlx5_flow_act flow_act = {}; @@ -94,7 +88,7 @@ static int rx_err_add_rule(struct mlx5_core_dev *mdev, flow_act.action = MLX5_FLOW_CONTEXT_ACTION_MOD_HDR | MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; flow_act.modify_hdr = modify_hdr; - fte = mlx5_add_flow_rules(rx_err->ft, spec, &flow_act, + fte = mlx5_add_flow_rules(rx->ft.status, spec, &flow_act, &rx->default_dest, 1); if (IS_ERR(fte)) { err = PTR_ERR(fte); @@ -103,8 +97,8 @@ static int rx_err_add_rule(struct mlx5_core_dev *mdev, } kvfree(spec); - rx_err->rule = fte; - rx_err->copy_modify_hdr = modify_hdr; + rx->status.rule = fte; + rx->status.modify_hdr = modify_hdr; return 0; out: @@ -165,9 +159,9 @@ static void rx_destroy(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_rx *rx) mlx5_destroy_flow_group(rx->miss_group); mlx5_destroy_flow_table(rx->ft.sa); - mlx5_del_flow_rules(rx->rx_err.rule); - mlx5_modify_header_dealloc(mdev, rx->rx_err.copy_modify_hdr); - mlx5_destroy_flow_table(rx->rx_err.ft); + mlx5_del_flow_rules(rx->status.rule); + mlx5_modify_header_dealloc(mdev, rx->status.modify_hdr); + mlx5_destroy_flow_table(rx->ft.status); } static int rx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec, @@ -185,8 +179,8 @@ static int rx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec, if (IS_ERR(ft)) return PTR_ERR(ft); - rx->rx_err.ft = ft; - err = rx_err_add_rule(mdev, rx, &rx->rx_err); + rx->ft.status = ft; + err = ipsec_status_rule(mdev, rx); if (err) goto err_add; @@ -208,10 +202,10 @@ static int rx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec, err_fs: mlx5_destroy_flow_table(rx->ft.sa); err_fs_ft: - mlx5_del_flow_rules(rx->rx_err.rule); - mlx5_modify_header_dealloc(mdev, rx->rx_err.copy_modify_hdr); + mlx5_del_flow_rules(rx->status.rule); + mlx5_modify_header_dealloc(mdev, rx->status.modify_hdr); err_add: - mlx5_destroy_flow_table(rx->rx_err.ft); + mlx5_destroy_flow_table(rx->ft.status); return err; } @@ -477,7 +471,7 @@ static int rx_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry) flow_act.action |= 
MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | MLX5_FLOW_CONTEXT_ACTION_CRYPTO_DECRYPT; dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE; - dest.ft = rx->rx_err.ft; + dest.ft = rx->ft.status; rule = mlx5_add_flow_rules(rx->ft.sa, spec, &flow_act, &dest, 1); if (IS_ERR(rule)) { err = PTR_ERR(rule); From patchwork Fri Dec 2 20:10:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13063191 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4C7C5C47090 for ; Fri, 2 Dec 2022 20:12:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234606AbiLBUMN (ORCPT ); Fri, 2 Dec 2022 15:12:13 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46022 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234868AbiLBULp (ORCPT ); Fri, 2 Dec 2022 15:11:45 -0500 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1A01AF1CF3 for ; Fri, 2 Dec 2022 12:11:45 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id C02C5B8228B for ; Fri, 2 Dec 2022 20:11:43 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id A9A08C433D7; Fri, 2 Dec 2022 20:11:41 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1670011902; bh=QtFY35m9618kJ0o7FCPTdiwPdiqY8lS3UIQf30dKX8M=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=GBSNuDZwZHZiOJfqV3EuFDwwSg7crlfOwxJUoMCe32NLgCAdBR/+WJLSgN0BZEUlF BHRZ4HDUJD58EqfvyPcznzqGu49uiWqqklg0SYwUfbfwuEkgE4ubxK0RWR0OZj6Hq1 /LzYfOwr4Z3kVscS7SWRBOBA4JSa2kJ2EVYCzE6SGRQpjCGSY7u4kzYB9Mrlkx/uNJ uV88Pmj/tvIZrWkeGxuI4UEcwR6XzYZdfe1e+R1suq/3Z8l4xW/mgn4TobtBd4MzCo 2vu8D/mT73x1nTTTjVv5LlDOmQ9cLRJTj5ilrtVzXexYHIewAQEQJZooAMkGfgriow I9OaZ1oeIiiAQ== From: Leon Romanovsky To: Steffen Klassert Cc: Leon Romanovsky , "David S. Miller" , Eric Dumazet , Herbert Xu , Jakub Kicinski , netdev@vger.kernel.org, Bharat Bhushan , Raed Salem , Saeed Mahameed Subject: [PATCH xfrm-next 15/16] net/mlx5e: Group IPsec miss handles into separate struct Date: Fri, 2 Dec 2022 22:10:36 +0200 Message-Id: <5d499e96b90812ad4d4168a11c480bb79d6e083f.1670011671.git.leonro@nvidia.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Leon Romanovsky Move the miss handles into a dedicated struct, so we can reuse it in the next patch when creating the IPsec policy flow table.
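As a sketch (again not part of the patch; the _before/_after names are illustrative only, and the remaining mlx5e_ipsec_rx members such as ft, default_dest and status are omitted), the regrouping looks roughly like this, which is what lets a later patch attach the same miss pair to another table:

/* Stand-in declarations for the mlx5 core types used below. */
struct mlx5_flow_group;
struct mlx5_flow_handle;

/* Reusable pair of handles describing a table's default-miss path. */
struct mlx5e_ipsec_miss {
        struct mlx5_flow_group *group;
        struct mlx5_flow_handle *rule;
};

/* Before: loose per-table members on the RX object. */
struct mlx5e_ipsec_rx_before {
        struct mlx5_flow_group *miss_group;
        struct mlx5_flow_handle *miss_rule;
};

/* After: grouped per table; accessed as rx->sa.group and rx->sa.rule. */
struct mlx5e_ipsec_rx_after {
        struct mlx5e_ipsec_miss sa;
};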
Reviewed-by: Raed Salem Reviewed-by: Saeed Mahameed Signed-off-by: Leon Romanovsky --- .../mellanox/mlx5/core/en_accel/ipsec_fs.c | 18 +++++++++++------- 1 file changed, 11 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c index b89001538abd..dfdda5ae2245 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c @@ -16,10 +16,14 @@ struct mlx5e_ipsec_ft { u32 refcnt; }; +struct mlx5e_ipsec_miss { + struct mlx5_flow_group *group; + struct mlx5_flow_handle *rule; +}; + struct mlx5e_ipsec_rx { struct mlx5e_ipsec_ft ft; - struct mlx5_flow_group *miss_group; - struct mlx5_flow_handle *miss_rule; + struct mlx5e_ipsec_miss sa; struct mlx5_flow_destination default_dest; struct mlx5e_ipsec_rule status; }; @@ -135,18 +139,18 @@ static int rx_fs_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_rx *rx) mlx5_core_err(mdev, "fail to create ipsec rx miss_group err=%d\n", err); goto out; } - rx->miss_group = miss_group; + rx->sa.group = miss_group; /* Create miss rule */ miss_rule = mlx5_add_flow_rules(ft, spec, &flow_act, &rx->default_dest, 1); if (IS_ERR(miss_rule)) { - mlx5_destroy_flow_group(rx->miss_group); + mlx5_destroy_flow_group(rx->sa.group); err = PTR_ERR(miss_rule); mlx5_core_err(mdev, "fail to create ipsec rx miss_rule err=%d\n", err); goto out; } - rx->miss_rule = miss_rule; + rx->sa.rule = miss_rule; out: kvfree(flow_group_in); kvfree(spec); @@ -155,8 +159,8 @@ static int rx_fs_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_rx *rx) static void rx_destroy(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_rx *rx) { - mlx5_del_flow_rules(rx->miss_rule); - mlx5_destroy_flow_group(rx->miss_group); + mlx5_del_flow_rules(rx->sa.rule); + mlx5_destroy_flow_group(rx->sa.group); mlx5_destroy_flow_table(rx->ft.sa); mlx5_del_flow_rules(rx->status.rule); From patchwork Fri Dec 2 20:10:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13063193 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C9D71C4321E for ; Fri, 2 Dec 2022 20:12:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234953AbiLBUMn (ORCPT ); Fri, 2 Dec 2022 15:12:43 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46204 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234924AbiLBUL4 (ORCPT ); Fri, 2 Dec 2022 15:11:56 -0500 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5E90BF4EA3 for ; Fri, 2 Dec 2022 12:11:51 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id F147E622E1 for ; Fri, 2 Dec 2022 20:11:50 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id D7DCAC433C1; Fri, 2 Dec 2022 20:11:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1670011910; 
bh=z6tjYnYlYMwilCDBM0iQtNINwEhd9cZdTOd9gi/Fw3s=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=bZJRRpSMdzY6Ch3s9YSQhkaHuDr8xXUnwGvw7GomiKbw0jLPlepk39r6on8L8v63b rJnXv/pwDAAKTQENCng+mpgFrqxaOoSamXsuaTKZWJk5OYUBhblfgegfnzbSxWfcHK 7/X2ElactMr4utxg5n3g5iVG/YoodMaesiCNkEME2N/Hde02dMtfkFD0/W+KPIYvTZ tv6Mnd4xWfoqTMK+ZJFs+HtMVTgCF1z6ZXyztfrUE/CRB9GYMz+5rmyw5Ni742/AtV C4X4U9ccUgjk96s5/X83mweqO7ISz4zRWVW9jUYjJbPnWqL8+O8qS8lWqwTPH/j4a7 RgjsnRaIv4RmQ== From: Leon Romanovsky To: Steffen Klassert Cc: Leon Romanovsky , "David S. Miller" , Eric Dumazet , Herbert Xu , Jakub Kicinski , netdev@vger.kernel.org, Bharat Bhushan , Raed Salem , Saeed Mahameed Subject: [PATCH xfrm-next 16/16] net/mlx5e: Generalize creation of default IPsec miss group and rule Date: Fri, 2 Dec 2022 22:10:37 +0200 Message-Id: <15efae1c06b71944c96036d89e0b8a6690c36d92.1670011671.git.leonro@nvidia.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Leon Romanovsky Create a general function that sets up the miss group and rule to forward all unmatched traffic to the next table. Reviewed-by: Raed Salem Reviewed-by: Saeed Mahameed Signed-off-by: Leon Romanovsky --- .../mellanox/mlx5/core/en_accel/ipsec_fs.c | 47 +++++++++---------- 1 file changed, 23 insertions(+), 24 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c index dfdda5ae2245..5bc6f9d1f5a6 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c @@ -24,7 +24,6 @@ struct mlx5e_ipsec_miss { struct mlx5e_ipsec_rx { struct mlx5e_ipsec_ft ft; struct mlx5e_ipsec_miss sa; - struct mlx5_flow_destination default_dest; struct mlx5e_ipsec_rule status; }; @@ -57,7 +56,8 @@ static struct mlx5_flow_table *ipsec_ft_create(struct mlx5_flow_namespace *ns, } static int ipsec_status_rule(struct mlx5_core_dev *mdev, - struct mlx5e_ipsec_rx *rx) + struct mlx5e_ipsec_rx *rx, + struct mlx5_flow_destination *dest) { u8 action[MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)] = {}; struct mlx5_flow_act flow_act = {}; @@ -92,8 +92,7 @@ static int ipsec_status_rule(struct mlx5_core_dev *mdev, flow_act.action = MLX5_FLOW_CONTEXT_ACTION_MOD_HDR | MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; flow_act.modify_hdr = modify_hdr; - fte = mlx5_add_flow_rules(rx->ft.status, spec, &flow_act, - &rx->default_dest, 1); + fte = mlx5_add_flow_rules(rx->ft.status, spec, &flow_act, dest, 1); if (IS_ERR(fte)) { err = PTR_ERR(fte); mlx5_core_err(mdev, "fail to add ipsec rx err copy rule err=%d\n", err); @@ -112,12 +111,12 @@ static int ipsec_status_rule(struct mlx5_core_dev *mdev, return err; } -static int rx_fs_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_rx *rx) +static int ipsec_miss_create(struct mlx5_core_dev *mdev, + struct mlx5_flow_table *ft, + struct mlx5e_ipsec_miss *miss, + struct mlx5_flow_destination *dest) { int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in); - struct mlx5_flow_table *ft = rx->ft.sa; - struct mlx5_flow_group *miss_group; - struct mlx5_flow_handle *miss_rule; MLX5_DECLARE_FLOW_ACT(flow_act); struct mlx5_flow_spec *spec; u32 *flow_group_in; @@ -133,24 +132,23 @@ static int rx_fs_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_rx *rx) /* Create miss_group */ MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, ft->max_fte - 1); MLX5_SET(create_flow_group_in,
flow_group_in, end_flow_index, ft->max_fte - 1); - miss_group = mlx5_create_flow_group(ft, flow_group_in); - if (IS_ERR(miss_group)) { - err = PTR_ERR(miss_group); - mlx5_core_err(mdev, "fail to create ipsec rx miss_group err=%d\n", err); + miss->group = mlx5_create_flow_group(ft, flow_group_in); + if (IS_ERR(miss->group)) { + err = PTR_ERR(miss->group); + mlx5_core_err(mdev, "fail to create IPsec miss_group err=%d\n", + err); goto out; } - rx->sa.group = miss_group; /* Create miss rule */ - miss_rule = - mlx5_add_flow_rules(ft, spec, &flow_act, &rx->default_dest, 1); - if (IS_ERR(miss_rule)) { - mlx5_destroy_flow_group(rx->sa.group); - err = PTR_ERR(miss_rule); - mlx5_core_err(mdev, "fail to create ipsec rx miss_rule err=%d\n", err); + miss->rule = mlx5_add_flow_rules(ft, spec, &flow_act, dest, 1); + if (IS_ERR(miss->rule)) { + mlx5_destroy_flow_group(miss->group); + err = PTR_ERR(miss->rule); + mlx5_core_err(mdev, "fail to create IPsec miss_rule err=%d\n", + err); goto out; } - rx->sa.rule = miss_rule; out: kvfree(flow_group_in); kvfree(spec); @@ -173,18 +171,19 @@ static int rx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec, { struct mlx5_flow_namespace *ns = mlx5e_fs_get_ns(ipsec->fs, false); struct mlx5_ttc_table *ttc = mlx5e_fs_get_ttc(ipsec->fs, false); + struct mlx5_flow_destination dest; struct mlx5_flow_table *ft; int err; - rx->default_dest = mlx5_ttc_get_default_dest(ttc, family2tt(family)); - ft = ipsec_ft_create(ns, MLX5E_ACCEL_FS_ESP_FT_ERR_LEVEL, MLX5E_NIC_PRIO, 1); if (IS_ERR(ft)) return PTR_ERR(ft); rx->ft.status = ft; - err = ipsec_status_rule(mdev, rx); + + dest = mlx5_ttc_get_default_dest(ttc, family2tt(family)); + err = ipsec_status_rule(mdev, rx, &dest); if (err) goto err_add; @@ -197,7 +196,7 @@ static int rx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec, } rx->ft.sa = ft; - err = rx_fs_create(mdev, rx); + err = ipsec_miss_create(mdev, rx->ft.sa, &rx->sa, &dest); if (err) goto err_fs;