From patchwork Thu Apr 13 12:29:19 2023
X-Patchwork-Id: 13210258
X-Patchwork-Delegate: kuba@kernel.org
From: Leon Romanovsky
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Leon Romanovsky, Steffen Klassert, Herbert Xu, netdev@vger.kernel.org,
    Saeed Mahameed, Raed Salem, Emeel Hakim, Simon Horman
Subject: [PATCH net-next v1 01/10] net/mlx5e: Add IPsec packet offload tunnel bits
Date: Thu, 13 Apr 2023 15:29:19 +0300
Message-Id: <0584b0ca47684aff235a9a4d82c06e2f2595d94a.1681388425.git.leonro@nvidia.com>
List-Id: netdev@vger.kernel.org

Extend the packet reformat types and the flow table capabilities with
the IPsec packet offload tunnel bits.
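For readers outside mlx5: in mlx5_ifc.h the [0x1] suffixes in the diff below
are bit widths within the hardware capability layout, and the new bits are
read back through the MLX5_CAP_FLOWTABLE_NIC_* accessors. A minimal sketch
of a consumer, mirroring what patch 02 of this series does (the helper name
here is illustrative, not from the patch):

    /* Illustrative helper, not part of this patch: true when the device
     * can encapsulate on TX and decapsulate on RX for ESP tunnel mode.
     * Mirrors the capability test added in patch 02 of this series.
     */
    static bool mlx5_esp_tunnel_reformat_supported(struct mlx5_core_dev *mdev)
    {
        return MLX5_CAP_FLOWTABLE_NIC_TX(mdev, reformat_l2_to_l3_esp_tunnel) &&
               MLX5_CAP_FLOWTABLE_NIC_RX(mdev, reformat_l3_esp_tunnel_to_l2);
    }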
Reviewed-by: Simon Horman
Signed-off-by: Leon Romanovsky
---
 include/linux/mlx5/mlx5_ifc.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index e47d6c58da35..3e899844e84c 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -456,9 +456,11 @@ struct mlx5_ifc_flow_table_prop_layout_bits {
     u8 max_ft_level[0x8];

     u8 reformat_add_esp_trasport[0x1];
-    u8 reserved_at_41[0x2];
+    u8 reformat_l2_to_l3_esp_tunnel[0x1];
+    u8 reserved_at_42[0x1];
     u8 reformat_del_esp_trasport[0x1];
-    u8 reserved_at_44[0x2];
+    u8 reformat_l3_esp_tunnel_to_l2[0x1];
+    u8 reserved_at_45[0x1];
     u8 execute_aso[0x1];
     u8 reserved_at_47[0x19];
@@ -6599,7 +6601,9 @@ enum mlx5_reformat_ctx_type {
     MLX5_REFORMAT_TYPE_L3_TUNNEL_TO_L2 = 0x3,
     MLX5_REFORMAT_TYPE_L2_TO_L3_TUNNEL = 0x4,
     MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_OVER_IPV4 = 0x5,
+    MLX5_REFORMAT_TYPE_L2_TO_L3_ESP_TUNNEL = 0x6,
     MLX5_REFORMAT_TYPE_DEL_ESP_TRANSPORT = 0x8,
+    MLX5_REFORMAT_TYPE_L3_ESP_TUNNEL_TO_L2 = 0x9,
     MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_OVER_IPV6 = 0xb,
     MLX5_REFORMAT_TYPE_INSERT_HDR = 0xf,
     MLX5_REFORMAT_TYPE_REMOVE_HDR = 0x10,
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Leon Romanovsky , Steffen Klassert , Herbert Xu , netdev@vger.kernel.org, Saeed Mahameed , Raed Salem , Emeel Hakim , Simon Horman Subject: [PATCH net-next v1 02/10] net/mlx5e: Check IPsec packet offload tunnel capabilities Date: Thu, 13 Apr 2023 15:29:20 +0300 Message-Id: <9bc295c93c47710ba69a030c31cce861464164ef.1681388425.git.leonro@nvidia.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Leon Romanovsky Validate tunnel mode support for IPsec packet offload. Reviewed-by: Simon Horman Signed-off-by: Leon Romanovsky --- drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h | 1 + .../ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c | 6 ++++++ 2 files changed, 7 insertions(+) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h index 52890d7dce6b..bb89e18b17b4 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h @@ -107,6 +107,7 @@ enum mlx5_ipsec_cap { MLX5_IPSEC_CAP_PACKET_OFFLOAD = 1 << 2, MLX5_IPSEC_CAP_ROCE = 1 << 3, MLX5_IPSEC_CAP_PRIO = 1 << 4, + MLX5_IPSEC_CAP_TUNNEL = 1 << 5, }; struct mlx5e_priv; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c index 5fddb86bb35e..df90e19066bc 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c @@ -48,6 +48,12 @@ u32 mlx5_ipsec_device_caps(struct mlx5_core_dev *mdev) if (MLX5_CAP_FLOWTABLE_NIC_TX(mdev, ignore_flow_level) && MLX5_CAP_FLOWTABLE_NIC_RX(mdev, ignore_flow_level)) caps |= MLX5_IPSEC_CAP_PRIO; + + if (MLX5_CAP_FLOWTABLE_NIC_TX(mdev, + reformat_l2_to_l3_esp_tunnel) && + MLX5_CAP_FLOWTABLE_NIC_RX(mdev, + reformat_l3_esp_tunnel_to_l2)) + caps |= MLX5_IPSEC_CAP_TUNNEL; } if (mlx5_get_roce_state(mdev) && From patchwork Thu Apr 13 12:29:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13210259 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E39E2C77B61 for ; Thu, 13 Apr 2023 12:29:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229853AbjDMM3w (ORCPT ); Thu, 13 Apr 2023 08:29:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42518 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229895AbjDMM3u (ORCPT ); Thu, 13 Apr 2023 08:29:50 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7E8A686BA for ; Thu, 13 Apr 2023 05:29:48 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 0738E63DDD for ; Thu, 13 Apr 2023 12:29:47 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id DB159C433D2; Thu, 13 Apr 2023 
Reviewed-by: Simon Horman
Signed-off-by: Leon Romanovsky
---
 drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h         | 1 +
 .../ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c         | 6 ++++++
 2 files changed, 7 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
index 52890d7dce6b..bb89e18b17b4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
@@ -107,6 +107,7 @@ enum mlx5_ipsec_cap {
     MLX5_IPSEC_CAP_PACKET_OFFLOAD = 1 << 2,
     MLX5_IPSEC_CAP_ROCE = 1 << 3,
     MLX5_IPSEC_CAP_PRIO = 1 << 4,
+    MLX5_IPSEC_CAP_TUNNEL = 1 << 5,
 };

 struct mlx5e_priv;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c
index 5fddb86bb35e..df90e19066bc 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c
@@ -48,6 +48,12 @@ u32 mlx5_ipsec_device_caps(struct mlx5_core_dev *mdev)
         if (MLX5_CAP_FLOWTABLE_NIC_TX(mdev, ignore_flow_level) &&
             MLX5_CAP_FLOWTABLE_NIC_RX(mdev, ignore_flow_level))
             caps |= MLX5_IPSEC_CAP_PRIO;
+
+        if (MLX5_CAP_FLOWTABLE_NIC_TX(mdev,
+                                      reformat_l2_to_l3_esp_tunnel) &&
+            MLX5_CAP_FLOWTABLE_NIC_RX(mdev,
+                                      reformat_l3_esp_tunnel_to_l2))
+            caps |= MLX5_IPSEC_CAP_TUNNEL;
     }

     if (mlx5_get_roce_state(mdev) &&

From patchwork Thu Apr 13 12:29:21 2023
X-Patchwork-Id: 13210259
X-Patchwork-Delegate: kuba@kernel.org
From: Leon Romanovsky
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Leon Romanovsky, Steffen Klassert, Herbert Xu, netdev@vger.kernel.org,
    Saeed Mahameed, Raed Salem, Emeel Hakim, Simon Horman
Subject: [PATCH net-next v1 03/10] net/mlx5e: Configure IPsec SA tables to support tunnel mode
Date: Thu, 13 Apr 2023 15:29:21 +0300
Message-Id: <8301a5e5ccb2f8070c971005836d343c6546e027.1681388425.git.leonro@nvidia.com>
List-Id: netdev@vger.kernel.org

Create the SA flow steering tables for both RX and TX with the tunnel
reformat property, which allows the hardware to add and delete the
extra headers needed for tunnel mode.
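The relevant knob is the flags field of struct mlx5_flow_table_attr: a table
created with MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT may later carry rules with
packet reformat actions. A condensed sketch of the underlying call, based on
ipsec_ft_create() in the diff below (the level/prio/group values are
placeholders):

    /* Condensed sketch of what ipsec_ft_create() does once it gains the
     * flags argument; only the flags assignment is the point here.
     */
    struct mlx5_flow_table_attr ft_attr = {};

    ft_attr.autogroup.max_num_groups = 2;   /* placeholder */
    ft_attr.max_fte = NUM_IPSEC_FTE;
    ft_attr.level = MLX5E_ACCEL_FS_ESP_FT_LEVEL;
    ft_attr.prio = MLX5E_NIC_PRIO;
    ft_attr.flags = MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT;
    ft = mlx5_create_auto_grouped_flow_table(ns, &ft_attr);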
Reviewed-by: Simon Horman
Signed-off-by: Leon Romanovsky
---
 .../mellanox/mlx5/core/en_accel/ipsec_fs.c | 23 ++++++++++++-------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
index b47794d4146e..060be020ca64 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
@@ -118,7 +118,7 @@ static void ipsec_chains_put_table(struct mlx5_fs_chains *chains, u32 prio)

 static struct mlx5_flow_table *ipsec_ft_create(struct mlx5_flow_namespace *ns,
                                                int level, int prio,
-                                               int max_num_groups)
+                                               int max_num_groups, u32 flags)
 {
     struct mlx5_flow_table_attr ft_attr = {};

@@ -127,6 +127,7 @@ static struct mlx5_flow_table *ipsec_ft_create(struct mlx5_flow_namespace *ns,
     ft_attr.max_fte = NUM_IPSEC_FTE;
     ft_attr.level = level;
     ft_attr.prio = prio;
+    ft_attr.flags = flags;

     return mlx5_create_auto_grouped_flow_table(ns, &ft_attr);
 }
@@ -267,6 +268,7 @@ static int rx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec,
     struct mlx5_flow_destination default_dest;
     struct mlx5_flow_destination dest[2];
     struct mlx5_flow_table *ft;
+    u32 flags = 0;
     int err;

     default_dest = mlx5_ttc_get_default_dest(ttc, family2tt(family));
@@ -277,7 +279,7 @@ static int rx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec,
         return err;

     ft = ipsec_ft_create(ns, MLX5E_ACCEL_FS_ESP_FT_ERR_LEVEL,
-                         MLX5E_NIC_PRIO, 1);
+                         MLX5E_NIC_PRIO, 1, 0);
     if (IS_ERR(ft)) {
         err = PTR_ERR(ft);
         goto err_fs_ft_status;
@@ -300,8 +302,10 @@ static int rx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec,
         goto err_add;

     /* Create FT */
-    ft = ipsec_ft_create(ns, MLX5E_ACCEL_FS_ESP_FT_LEVEL, MLX5E_NIC_PRIO,
-                         2);
+    if (mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_TUNNEL)
+        flags = MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT;
+    ft = ipsec_ft_create(ns, MLX5E_ACCEL_FS_ESP_FT_LEVEL, MLX5E_NIC_PRIO, 2,
+                         flags);
     if (IS_ERR(ft)) {
         err = PTR_ERR(ft);
         goto err_fs_ft;
@@ -327,7 +331,7 @@ static int rx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec,
     }

     ft = ipsec_ft_create(ns, MLX5E_ACCEL_FS_POL_FT_LEVEL, MLX5E_NIC_PRIO,
-                         2);
+                         2, 0);
     if (IS_ERR(ft)) {
         err = PTR_ERR(ft);
         goto err_pol_ft;
@@ -511,9 +515,10 @@ static int tx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_tx *tx,
 {
     struct mlx5_flow_destination dest = {};
     struct mlx5_flow_table *ft;
+    u32 flags = 0;
     int err;

-    ft = ipsec_ft_create(tx->ns, 2, 0, 1);
+    ft = ipsec_ft_create(tx->ns, 2, 0, 1, 0);
     if (IS_ERR(ft))
         return PTR_ERR(ft);
     tx->ft.status = ft;
@@ -522,7 +527,9 @@ static int tx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_tx *tx,
     if (err)
         goto err_status_rule;

-    ft = ipsec_ft_create(tx->ns, 1, 0, 4);
+    if (mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_TUNNEL)
+        flags = MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT;
+    ft = ipsec_ft_create(tx->ns, 1, 0, 4, flags);
     if (IS_ERR(ft)) {
         err = PTR_ERR(ft);
         goto err_sa_ft;
@@ -541,7 +548,7 @@ static int tx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_tx *tx,
         goto connect_roce;
     }

-    ft = ipsec_ft_create(tx->ns, 0, 0, 2);
+    ft = ipsec_ft_create(tx->ns, 0, 0, 2, 0);
     if (IS_ERR(ft)) {
         err = PTR_ERR(ft);
         goto err_pol_ft;
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Leon Romanovsky , Steffen Klassert , Herbert Xu , netdev@vger.kernel.org, Saeed Mahameed , Raed Salem , Emeel Hakim , Simon Horman Subject: [PATCH net-next v1 04/10] net/mlx5e: Prepare IPsec packet reformat code for tunnel mode Date: Thu, 13 Apr 2023 15:29:22 +0300 Message-Id: X-Mailer: git-send-email 2.39.2 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Leon Romanovsky Refactor setup_pkt_reformat() function to accommodate future extension to support tunnel mode. Signed-off-by: Leon Romanovsky Reviewed-by: Simon Horman --- .../mellanox/mlx5/core/en_accel/ipsec.c | 1 + .../mellanox/mlx5/core/en_accel/ipsec.h | 2 +- .../mellanox/mlx5/core/en_accel/ipsec_fs.c | 81 ++++++++++++++----- 3 files changed, 63 insertions(+), 21 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c index def01bfde610..359da277c03a 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c @@ -297,6 +297,7 @@ void mlx5e_ipsec_build_accel_xfrm_attrs(struct mlx5e_ipsec_sa_entry *sa_entry, attrs->upspec.sport = ntohs(x->sel.sport); attrs->upspec.sport_mask = ntohs(x->sel.sport_mask); attrs->upspec.proto = x->sel.proto; + attrs->mode = x->props.mode; mlx5e_ipsec_init_limits(sa_entry, attrs); } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h index bb89e18b17b4..ae525420a492 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h @@ -77,7 +77,7 @@ struct mlx5_replay_esn { struct mlx5_accel_esp_xfrm_attrs { u32 spi; - u32 flags; + u32 mode; struct aes_gcm_keymat aes_gcm; union { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c index 060be020ca64..6a1ed4114054 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c @@ -10,6 +10,7 @@ #include "lib/fs_chains.h" #define NUM_IPSEC_FTE BIT(15) +#define MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_SIZE 16 struct mlx5e_ipsec_fc { struct mlx5_fc *cnt; @@ -836,40 +837,80 @@ static int setup_modify_header(struct mlx5_core_dev *mdev, u32 val, u8 dir, return 0; } +static int +setup_pkt_transport_reformat(struct mlx5_accel_esp_xfrm_attrs *attrs, + struct mlx5_pkt_reformat_params *reformat_params) +{ + u8 *reformatbf; + __be32 spi; + + switch (attrs->dir) { + case XFRM_DEV_OFFLOAD_IN: + reformat_params->type = MLX5_REFORMAT_TYPE_DEL_ESP_TRANSPORT; + break; + case XFRM_DEV_OFFLOAD_OUT: + if (attrs->family == AF_INET) + reformat_params->type = + MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_OVER_IPV4; + else + reformat_params->type = + MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_OVER_IPV6; + + reformatbf = kzalloc(MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_SIZE, + GFP_KERNEL); + if (!reformatbf) + return -ENOMEM; + + /* convert to network format */ + spi = htonl(attrs->spi); + memcpy(reformatbf, &spi, sizeof(spi)); + + reformat_params->param_0 = attrs->authsize; + reformat_params->size = + MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_SIZE; + reformat_params->data = reformatbf; + break; + default: + return -EINVAL; + } + + return 0; +} + static int setup_pkt_reformat(struct mlx5_core_dev 
Signed-off-by: Leon Romanovsky
Reviewed-by: Simon Horman
---
 .../mellanox/mlx5/core/en_accel/ipsec.c    |  1 +
 .../mellanox/mlx5/core/en_accel/ipsec.h    |  2 +-
 .../mellanox/mlx5/core/en_accel/ipsec_fs.c | 81 ++++++++++++-----
 3 files changed, 63 insertions(+), 21 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
index def01bfde610..359da277c03a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
@@ -297,6 +297,7 @@ void mlx5e_ipsec_build_accel_xfrm_attrs(struct mlx5e_ipsec_sa_entry *sa_entry,
     attrs->upspec.sport = ntohs(x->sel.sport);
     attrs->upspec.sport_mask = ntohs(x->sel.sport_mask);
     attrs->upspec.proto = x->sel.proto;
+    attrs->mode = x->props.mode;

     mlx5e_ipsec_init_limits(sa_entry, attrs);
 }
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
index bb89e18b17b4..ae525420a492 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
@@ -77,7 +77,7 @@ struct mlx5_replay_esn {

 struct mlx5_accel_esp_xfrm_attrs {
     u32 spi;
-    u32 flags;
+    u32 mode;
     struct aes_gcm_keymat aes_gcm;

     union {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
index 060be020ca64..6a1ed4114054 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
@@ -10,6 +10,7 @@
 #include "lib/fs_chains.h"

 #define NUM_IPSEC_FTE BIT(15)
+#define MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_SIZE 16

 struct mlx5e_ipsec_fc {
     struct mlx5_fc *cnt;
@@ -836,40 +837,80 @@ static int setup_modify_header(struct mlx5_core_dev *mdev, u32 val, u8 dir,
     return 0;
 }

+static int
+setup_pkt_transport_reformat(struct mlx5_accel_esp_xfrm_attrs *attrs,
+                             struct mlx5_pkt_reformat_params *reformat_params)
+{
+    u8 *reformatbf;
+    __be32 spi;
+
+    switch (attrs->dir) {
+    case XFRM_DEV_OFFLOAD_IN:
+        reformat_params->type = MLX5_REFORMAT_TYPE_DEL_ESP_TRANSPORT;
+        break;
+    case XFRM_DEV_OFFLOAD_OUT:
+        if (attrs->family == AF_INET)
+            reformat_params->type =
+                MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_OVER_IPV4;
+        else
+            reformat_params->type =
+                MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_OVER_IPV6;
+
+        reformatbf = kzalloc(MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_SIZE,
+                             GFP_KERNEL);
+        if (!reformatbf)
+            return -ENOMEM;
+
+        /* convert to network format */
+        spi = htonl(attrs->spi);
+        memcpy(reformatbf, &spi, sizeof(spi));
+
+        reformat_params->param_0 = attrs->authsize;
+        reformat_params->size =
+            MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_SIZE;
+        reformat_params->data = reformatbf;
+        break;
+    default:
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
 static int setup_pkt_reformat(struct mlx5_core_dev *mdev,
                               struct mlx5_accel_esp_xfrm_attrs *attrs,
                               struct mlx5_flow_act *flow_act)
 {
-    enum mlx5_flow_namespace_type ns_type = MLX5_FLOW_NAMESPACE_EGRESS;
     struct mlx5_pkt_reformat_params reformat_params = {};
     struct mlx5_pkt_reformat *pkt_reformat;
-    u8 reformatbf[16] = {};
-    __be32 spi;
+    enum mlx5_flow_namespace_type ns_type;
+    int ret;

-    if (attrs->dir == XFRM_DEV_OFFLOAD_IN) {
-        reformat_params.type = MLX5_REFORMAT_TYPE_DEL_ESP_TRANSPORT;
+    switch (attrs->dir) {
+    case XFRM_DEV_OFFLOAD_IN:
         ns_type = MLX5_FLOW_NAMESPACE_KERNEL;
-        goto cmd;
+        break;
+    case XFRM_DEV_OFFLOAD_OUT:
+        ns_type = MLX5_FLOW_NAMESPACE_EGRESS;
+        break;
+    default:
+        return -EINVAL;
     }

-    if (attrs->family == AF_INET)
-        reformat_params.type =
-            MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_OVER_IPV4;
-    else
-        reformat_params.type =
-            MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_OVER_IPV6;
-
-    /* convert to network format */
-    spi = htonl(attrs->spi);
-    memcpy(reformatbf, &spi, 4);
+    switch (attrs->mode) {
+    case XFRM_MODE_TRANSPORT:
+        ret = setup_pkt_transport_reformat(attrs, &reformat_params);
+        break;
+    default:
+        ret = -EINVAL;
+    }

-    reformat_params.param_0 = attrs->authsize;
-    reformat_params.size = sizeof(reformatbf);
-    reformat_params.data = &reformatbf;
+    if (ret)
+        return ret;

-cmd:
     pkt_reformat =
         mlx5_packet_reformat_alloc(mdev, &reformat_params, ns_type);
+    kfree(reformat_params.data);
     if (IS_ERR(pkt_reformat))
         return PTR_ERR(pkt_reformat);
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Leon Romanovsky , Steffen Klassert , Herbert Xu , netdev@vger.kernel.org, Saeed Mahameed , Raed Salem , Emeel Hakim , Simon Horman Subject: [PATCH net-next v1 05/10] net/mlx5e: Support IPsec RX packet offload in tunnel mode Date: Thu, 13 Apr 2023 15:29:23 +0300 Message-Id: <10b2ef977bb38508edd9a9c8f35fe3ac9e5e582a.1681388425.git.leonro@nvidia.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Leon Romanovsky Extend mlx5 driver with logic to support IPsec RX packet offload in tunnel mode. Signed-off-by: Leon Romanovsky Reviewed-by: Simon Horman --- .../mellanox/mlx5/core/en_accel/ipsec.c | 36 +++++++++++++ .../mellanox/mlx5/core/en_accel/ipsec.h | 2 + .../mellanox/mlx5/core/en_accel/ipsec_fs.c | 50 +++++++++++++++++++ 3 files changed, 88 insertions(+) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c index 359da277c03a..7c55b37c1c01 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c @@ -242,6 +242,41 @@ static void mlx5e_ipsec_init_limits(struct mlx5e_ipsec_sa_entry *sa_entry, attrs->lft.numb_rounds_soft = (u64)n; } +static void mlx5e_ipsec_init_macs(struct mlx5e_ipsec_sa_entry *sa_entry, + struct mlx5_accel_esp_xfrm_attrs *attrs) +{ + struct mlx5_core_dev *mdev = mlx5e_ipsec_sa2dev(sa_entry); + struct xfrm_state *x = sa_entry->x; + struct net_device *netdev; + struct neighbour *n; + u8 addr[ETH_ALEN]; + + if (attrs->mode != XFRM_MODE_TUNNEL && + attrs->type != XFRM_DEV_OFFLOAD_PACKET) + return; + + netdev = x->xso.real_dev; + + mlx5_query_mac_address(mdev, addr); + switch (attrs->dir) { + case XFRM_DEV_OFFLOAD_IN: + ether_addr_copy(attrs->dmac, addr); + n = neigh_lookup(&arp_tbl, &attrs->saddr.a4, netdev); + if (!n) { + n = neigh_create(&arp_tbl, &attrs->saddr.a4, netdev); + if (IS_ERR(n)) + return; + neigh_event_send(n, NULL); + } + neigh_ha_snapshot(addr, n, netdev); + ether_addr_copy(attrs->smac, addr); + break; + default: + return; + } + neigh_release(n); +} + void mlx5e_ipsec_build_accel_xfrm_attrs(struct mlx5e_ipsec_sa_entry *sa_entry, struct mlx5_accel_esp_xfrm_attrs *attrs) { @@ -300,6 +335,7 @@ void mlx5e_ipsec_build_accel_xfrm_attrs(struct mlx5e_ipsec_sa_entry *sa_entry, attrs->mode = x->props.mode; mlx5e_ipsec_init_limits(sa_entry, attrs); + mlx5e_ipsec_init_macs(sa_entry, attrs); } static int mlx5e_xfrm_validate_state(struct mlx5_core_dev *mdev, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h index ae525420a492..77384ffa4451 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h @@ -99,6 +99,8 @@ struct mlx5_accel_esp_xfrm_attrs { u32 authsize; u32 reqid; struct mlx5_ipsec_lft lft; + u8 smac[ETH_ALEN]; + u8 dmac[ETH_ALEN]; }; enum mlx5_ipsec_cap { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c index 6a1ed4114054..001d7c3add6a 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c @@ -837,6 +837,53 @@ static int setup_modify_header(struct mlx5_core_dev *mdev, u32 val, u8 dir, return 0; } +static int 
Signed-off-by: Leon Romanovsky
Reviewed-by: Simon Horman
---
 .../mellanox/mlx5/core/en_accel/ipsec.c    | 36 +++++++++++++
 .../mellanox/mlx5/core/en_accel/ipsec.h    |  2 +
 .../mellanox/mlx5/core/en_accel/ipsec_fs.c | 50 +++++++++++++++++++
 3 files changed, 88 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
index 359da277c03a..7c55b37c1c01 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
@@ -242,6 +242,41 @@ static void mlx5e_ipsec_init_limits(struct mlx5e_ipsec_sa_entry *sa_entry,
     attrs->lft.numb_rounds_soft = (u64)n;
 }

+static void mlx5e_ipsec_init_macs(struct mlx5e_ipsec_sa_entry *sa_entry,
+                                  struct mlx5_accel_esp_xfrm_attrs *attrs)
+{
+    struct mlx5_core_dev *mdev = mlx5e_ipsec_sa2dev(sa_entry);
+    struct xfrm_state *x = sa_entry->x;
+    struct net_device *netdev;
+    struct neighbour *n;
+    u8 addr[ETH_ALEN];
+
+    if (attrs->mode != XFRM_MODE_TUNNEL &&
+        attrs->type != XFRM_DEV_OFFLOAD_PACKET)
+        return;
+
+    netdev = x->xso.real_dev;
+
+    mlx5_query_mac_address(mdev, addr);
+    switch (attrs->dir) {
+    case XFRM_DEV_OFFLOAD_IN:
+        ether_addr_copy(attrs->dmac, addr);
+        n = neigh_lookup(&arp_tbl, &attrs->saddr.a4, netdev);
+        if (!n) {
+            n = neigh_create(&arp_tbl, &attrs->saddr.a4, netdev);
+            if (IS_ERR(n))
+                return;
+            neigh_event_send(n, NULL);
+        }
+        neigh_ha_snapshot(addr, n, netdev);
+        ether_addr_copy(attrs->smac, addr);
+        break;
+    default:
+        return;
+    }
+    neigh_release(n);
+}
+
 void mlx5e_ipsec_build_accel_xfrm_attrs(struct mlx5e_ipsec_sa_entry *sa_entry,
                                         struct mlx5_accel_esp_xfrm_attrs *attrs)
 {
@@ -300,6 +335,7 @@ void mlx5e_ipsec_build_accel_xfrm_attrs(struct mlx5e_ipsec_sa_entry *sa_entry,
     attrs->mode = x->props.mode;

     mlx5e_ipsec_init_limits(sa_entry, attrs);
+    mlx5e_ipsec_init_macs(sa_entry, attrs);
 }

 static int mlx5e_xfrm_validate_state(struct mlx5_core_dev *mdev,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
index ae525420a492..77384ffa4451 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
@@ -99,6 +99,8 @@ struct mlx5_accel_esp_xfrm_attrs {
     u32 authsize;
     u32 reqid;
     struct mlx5_ipsec_lft lft;
+    u8 smac[ETH_ALEN];
+    u8 dmac[ETH_ALEN];
 };

 enum mlx5_ipsec_cap {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
index 6a1ed4114054..001d7c3add6a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
@@ -837,6 +837,53 @@ static int setup_modify_header(struct mlx5_core_dev *mdev, u32 val, u8 dir,
     return 0;
 }

+static int
+setup_pkt_tunnel_reformat(struct mlx5_core_dev *mdev,
+                          struct mlx5_accel_esp_xfrm_attrs *attrs,
+                          struct mlx5_pkt_reformat_params *reformat_params)
+{
+    struct ethhdr *eth_hdr;
+    char *reformatbf;
+    size_t bfflen;
+
+    bfflen = sizeof(*eth_hdr);
+
+    reformatbf = kzalloc(bfflen, GFP_KERNEL);
+    if (!reformatbf)
+        return -ENOMEM;
+
+    eth_hdr = (struct ethhdr *)reformatbf;
+    switch (attrs->family) {
+    case AF_INET:
+        eth_hdr->h_proto = htons(ETH_P_IP);
+        break;
+    case AF_INET6:
+        eth_hdr->h_proto = htons(ETH_P_IPV6);
+        break;
+    default:
+        goto free_reformatbf;
+    }
+
+    ether_addr_copy(eth_hdr->h_dest, attrs->dmac);
+    ether_addr_copy(eth_hdr->h_source, attrs->smac);
+
+    switch (attrs->dir) {
+    case XFRM_DEV_OFFLOAD_IN:
+        reformat_params->type = MLX5_REFORMAT_TYPE_L3_ESP_TUNNEL_TO_L2;
+        break;
+    default:
+        goto free_reformatbf;
+    }
+
+    reformat_params->size = bfflen;
+    reformat_params->data = reformatbf;
+    return 0;
+
+free_reformatbf:
+    kfree(reformatbf);
+    return -EINVAL;
+}
+
 static int
 setup_pkt_transport_reformat(struct mlx5_accel_esp_xfrm_attrs *attrs,
                              struct mlx5_pkt_reformat_params *reformat_params)
@@ -901,6 +948,9 @@ static int setup_pkt_reformat(struct mlx5_core_dev *mdev,
     case XFRM_MODE_TRANSPORT:
         ret = setup_pkt_transport_reformat(attrs, &reformat_params);
         break;
+    case XFRM_MODE_TUNNEL:
+        ret = setup_pkt_tunnel_reformat(mdev, attrs, &reformat_params);
+        break;
     default:
         ret = -EINVAL;
     }
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Leon Romanovsky , Steffen Klassert , Herbert Xu , netdev@vger.kernel.org, Saeed Mahameed , Raed Salem , Emeel Hakim , Simon Horman Subject: [PATCH net-next v1 06/10] net/mlx5e: Support IPsec TX packet offload in tunnel mode Date: Thu, 13 Apr 2023 15:29:24 +0300 Message-Id: X-Mailer: git-send-email 2.39.2 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Leon Romanovsky Extend mlx5 driver with logic to support IPsec TX packet offload in tunnel mode. Signed-off-by: Leon Romanovsky Reviewed-by: Simon Horman --- .../mellanox/mlx5/core/en_accel/ipsec.c | 12 +++++ .../mellanox/mlx5/core/en_accel/ipsec_fs.c | 52 +++++++++++++++++++ 2 files changed, 64 insertions(+) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c index 7c55b37c1c01..36f3ffd54355 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c @@ -271,6 +271,18 @@ static void mlx5e_ipsec_init_macs(struct mlx5e_ipsec_sa_entry *sa_entry, neigh_ha_snapshot(addr, n, netdev); ether_addr_copy(attrs->smac, addr); break; + case XFRM_DEV_OFFLOAD_OUT: + ether_addr_copy(attrs->smac, addr); + n = neigh_lookup(&arp_tbl, &attrs->daddr.a4, netdev); + if (!n) { + n = neigh_create(&arp_tbl, &attrs->daddr.a4, netdev); + if (IS_ERR(n)) + return; + neigh_event_send(n, NULL); + } + neigh_ha_snapshot(addr, n, netdev); + ether_addr_copy(attrs->dmac, addr); + break; default: return; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c index 001d7c3add6a..4c800b54d8b6 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c @@ -11,6 +11,7 @@ #define NUM_IPSEC_FTE BIT(15) #define MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_SIZE 16 +#define IPSEC_TUNNEL_DEFAULT_TTL 0x40 struct mlx5e_ipsec_fc { struct mlx5_fc *cnt; @@ -842,12 +843,31 @@ setup_pkt_tunnel_reformat(struct mlx5_core_dev *mdev, struct mlx5_accel_esp_xfrm_attrs *attrs, struct mlx5_pkt_reformat_params *reformat_params) { + struct ip_esp_hdr *esp_hdr; + struct ipv6hdr *ipv6hdr; struct ethhdr *eth_hdr; + struct iphdr *iphdr; char *reformatbf; size_t bfflen; + void *hdr; bfflen = sizeof(*eth_hdr); + if (attrs->dir == XFRM_DEV_OFFLOAD_OUT) { + bfflen += sizeof(*esp_hdr) + 8; + + switch (attrs->family) { + case AF_INET: + bfflen += sizeof(*iphdr); + break; + case AF_INET6: + bfflen += sizeof(*ipv6hdr); + break; + default: + return -EINVAL; + } + } + reformatbf = kzalloc(bfflen, GFP_KERNEL); if (!reformatbf) return -ENOMEM; @@ -871,6 +891,38 @@ setup_pkt_tunnel_reformat(struct mlx5_core_dev *mdev, case XFRM_DEV_OFFLOAD_IN: reformat_params->type = MLX5_REFORMAT_TYPE_L3_ESP_TUNNEL_TO_L2; break; + case XFRM_DEV_OFFLOAD_OUT: + reformat_params->type = MLX5_REFORMAT_TYPE_L2_TO_L3_ESP_TUNNEL; + reformat_params->param_0 = attrs->authsize; + + hdr = reformatbf + sizeof(*eth_hdr); + switch (attrs->family) { + case AF_INET: + iphdr = (struct iphdr *)hdr; + memcpy(&iphdr->saddr, &attrs->saddr.a4, 4); + memcpy(&iphdr->daddr, &attrs->daddr.a4, 4); + iphdr->version = 4; + iphdr->ihl = 5; + iphdr->ttl = IPSEC_TUNNEL_DEFAULT_TTL; + iphdr->protocol = IPPROTO_ESP; + hdr += sizeof(*iphdr); + break; + case AF_INET6: + ipv6hdr = (struct ipv6hdr *)hdr; + 
Signed-off-by: Leon Romanovsky
Reviewed-by: Simon Horman
---
 .../mellanox/mlx5/core/en_accel/ipsec.c    | 12 +++++
 .../mellanox/mlx5/core/en_accel/ipsec_fs.c | 52 +++++++++++++++++++
 2 files changed, 64 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
index 7c55b37c1c01..36f3ffd54355 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
@@ -271,6 +271,18 @@ static void mlx5e_ipsec_init_macs(struct mlx5e_ipsec_sa_entry *sa_entry,
         neigh_ha_snapshot(addr, n, netdev);
         ether_addr_copy(attrs->smac, addr);
         break;
+    case XFRM_DEV_OFFLOAD_OUT:
+        ether_addr_copy(attrs->smac, addr);
+        n = neigh_lookup(&arp_tbl, &attrs->daddr.a4, netdev);
+        if (!n) {
+            n = neigh_create(&arp_tbl, &attrs->daddr.a4, netdev);
+            if (IS_ERR(n))
+                return;
+            neigh_event_send(n, NULL);
+        }
+        neigh_ha_snapshot(addr, n, netdev);
+        ether_addr_copy(attrs->dmac, addr);
+        break;
     default:
         return;
     }
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
index 001d7c3add6a..4c800b54d8b6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
@@ -11,6 +11,7 @@

 #define NUM_IPSEC_FTE BIT(15)
 #define MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_SIZE 16
+#define IPSEC_TUNNEL_DEFAULT_TTL 0x40

 struct mlx5e_ipsec_fc {
     struct mlx5_fc *cnt;
@@ -842,12 +843,31 @@ setup_pkt_tunnel_reformat(struct mlx5_core_dev *mdev,
                           struct mlx5_accel_esp_xfrm_attrs *attrs,
                           struct mlx5_pkt_reformat_params *reformat_params)
 {
+    struct ip_esp_hdr *esp_hdr;
+    struct ipv6hdr *ipv6hdr;
     struct ethhdr *eth_hdr;
+    struct iphdr *iphdr;
     char *reformatbf;
     size_t bfflen;
+    void *hdr;

     bfflen = sizeof(*eth_hdr);

+    if (attrs->dir == XFRM_DEV_OFFLOAD_OUT) {
+        bfflen += sizeof(*esp_hdr) + 8;
+
+        switch (attrs->family) {
+        case AF_INET:
+            bfflen += sizeof(*iphdr);
+            break;
+        case AF_INET6:
+            bfflen += sizeof(*ipv6hdr);
+            break;
+        default:
+            return -EINVAL;
+        }
+    }
+
     reformatbf = kzalloc(bfflen, GFP_KERNEL);
     if (!reformatbf)
         return -ENOMEM;
@@ -871,6 +891,38 @@ setup_pkt_tunnel_reformat(struct mlx5_core_dev *mdev,
     case XFRM_DEV_OFFLOAD_IN:
         reformat_params->type = MLX5_REFORMAT_TYPE_L3_ESP_TUNNEL_TO_L2;
         break;
+    case XFRM_DEV_OFFLOAD_OUT:
+        reformat_params->type = MLX5_REFORMAT_TYPE_L2_TO_L3_ESP_TUNNEL;
+        reformat_params->param_0 = attrs->authsize;
+
+        hdr = reformatbf + sizeof(*eth_hdr);
+        switch (attrs->family) {
+        case AF_INET:
+            iphdr = (struct iphdr *)hdr;
+            memcpy(&iphdr->saddr, &attrs->saddr.a4, 4);
+            memcpy(&iphdr->daddr, &attrs->daddr.a4, 4);
+            iphdr->version = 4;
+            iphdr->ihl = 5;
+            iphdr->ttl = IPSEC_TUNNEL_DEFAULT_TTL;
+            iphdr->protocol = IPPROTO_ESP;
+            hdr += sizeof(*iphdr);
+            break;
+        case AF_INET6:
+            ipv6hdr = (struct ipv6hdr *)hdr;
+            memcpy(&ipv6hdr->saddr, &attrs->saddr.a6, 16);
+            memcpy(&ipv6hdr->daddr, &attrs->daddr.a6, 16);
+            ipv6hdr->nexthdr = IPPROTO_ESP;
+            ipv6hdr->version = 6;
+            ipv6hdr->hop_limit = IPSEC_TUNNEL_DEFAULT_TTL;
+            hdr += sizeof(*ipv6hdr);
+            break;
+        default:
+            goto free_reformatbf;
+        }
+
+        esp_hdr = (struct ip_esp_hdr *)hdr;
+        esp_hdr->spi = htonl(attrs->spi);
+        break;
     default:
         goto free_reformatbf;
     }

From patchwork Thu Apr 13 12:29:25 2023
X-Patchwork-Id: 13210263
X-Patchwork-Delegate: kuba@kernel.org
From: Leon Romanovsky
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Leon Romanovsky, Steffen Klassert, Herbert Xu, netdev@vger.kernel.org,
    Saeed Mahameed, Raed Salem, Emeel Hakim, Simon Horman
Subject: [PATCH net-next v1 07/10] net/mlx5e: Listen to ARP events to update IPsec L2 headers in tunnel mode
Date: Thu, 13 Apr 2023 15:29:25 +0300
List-Id: netdev@vger.kernel.org

In IPsec packet offload mode, all header manipulations are performed by
the hardware, which is responsible for adding and removing the L2 header
with the source and destination MACs. CX-7 devices don't offload the
in-kernel routing functionality, so the hardware needs external help to
fill in the MAC address of the other side, which isn't otherwise
available to it. As a solution, listen to neighbour (ARP) updates and
reconfigure the IPsec rules on the fly once new MAC data arrives.
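The mechanism is a netevent notifier: every NETEVENT_NEIGH_UPDATE with a
valid entry is matched against the offloaded tunnel-mode SAs, and a work
item rewrites the cached MAC and modifies the steering rule. A condensed
sketch of the filter, from mlx5e_ipsec_netevent_event() in the diff below:

    if (event != NETEVENT_NEIGH_UPDATE || !(n->nud_state & NUD_VALID))
        return NOTIFY_DONE;

    xa_for_each_marked(&ipsec->sadb, idx, sa_entry, MLX5E_IPSEC_TUNNEL_SA) {
        /* ... skip SAs whose saddr/daddr don't match this neighbour ... */
        neigh_ha_snapshot(data->addr, n, netdev);
        queue_work(ipsec->wq, &sa_entry->work->work);
    }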
Signed-off-by: Leon Romanovsky
Reviewed-by: Simon Horman
---
 .../mellanox/mlx5/core/en_accel/ipsec.c | 132 +++++++++++++++++-
 .../mellanox/mlx5/core/en_accel/ipsec.h |   5 +
 2 files changed, 130 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
index 36f3ffd54355..b64281fd4142 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
@@ -35,12 +35,14 @@
 #include <crypto/aead.h>
 #include <linux/inetdevice.h>
 #include <linux/netdevice.h>
+#include <net/netevent.h>

 #include "en.h"
 #include "ipsec.h"
 #include "ipsec_rxtx.h"

 #define MLX5_IPSEC_RESCHED msecs_to_jiffies(1000)
+#define MLX5E_IPSEC_TUNNEL_SA XA_MARK_1

 static struct mlx5e_ipsec_sa_entry *to_ipsec_sa_entry(struct xfrm_state *x)
 {
@@ -251,7 +253,7 @@ static void mlx5e_ipsec_init_macs(struct mlx5e_ipsec_sa_entry *sa_entry,
     struct neighbour *n;
     u8 addr[ETH_ALEN];

-    if (attrs->mode != XFRM_MODE_TUNNEL &&
+    if (attrs->mode != XFRM_MODE_TUNNEL ||
         attrs->type != XFRM_DEV_OFFLOAD_PACKET)
         return;
@@ -267,6 +269,8 @@ static void mlx5e_ipsec_init_macs(struct mlx5e_ipsec_sa_entry *sa_entry,
             if (IS_ERR(n))
                 return;
             neigh_event_send(n, NULL);
+            attrs->drop = true;
+            break;
         }
         neigh_ha_snapshot(addr, n, netdev);
         ether_addr_copy(attrs->smac, addr);
@@ -279,6 +283,8 @@ static void mlx5e_ipsec_init_macs(struct mlx5e_ipsec_sa_entry *sa_entry,
             if (IS_ERR(n))
                 return;
             neigh_event_send(n, NULL);
+            attrs->drop = true;
+            break;
         }
         neigh_ha_snapshot(addr, n, netdev);
         ether_addr_copy(attrs->dmac, addr);
@@ -507,34 +513,81 @@ static void mlx5e_ipsec_set_esn_ops(struct mlx5e_ipsec_sa_entry *sa_entry)
     sa_entry->set_iv_op = mlx5e_ipsec_set_iv;
 }

+static void mlx5e_ipsec_handle_netdev_event(struct work_struct *_work)
+{
+    struct mlx5e_ipsec_work *work =
+        container_of(_work, struct mlx5e_ipsec_work, work);
+    struct mlx5e_ipsec_sa_entry *sa_entry = work->sa_entry;
+    struct mlx5e_ipsec_netevent_data *data = work->data;
+    struct mlx5_accel_esp_xfrm_attrs *attrs;
+
+    attrs = &sa_entry->attrs;
+
+    switch (attrs->dir) {
+    case XFRM_DEV_OFFLOAD_IN:
+        ether_addr_copy(attrs->smac, data->addr);
+        break;
+    case XFRM_DEV_OFFLOAD_OUT:
+        ether_addr_copy(attrs->dmac, data->addr);
+        break;
+    default:
+        WARN_ON_ONCE(true);
+    }
+    attrs->drop = false;
+    mlx5e_accel_ipsec_fs_modify(sa_entry);
+}
+
 static int mlx5_ipsec_create_work(struct mlx5e_ipsec_sa_entry *sa_entry)
 {
     struct xfrm_state *x = sa_entry->x;
     struct mlx5e_ipsec_work *work;
+    void *data = NULL;

     switch (x->xso.type) {
     case XFRM_DEV_OFFLOAD_CRYPTO:
         if (!(x->props.flags & XFRM_STATE_ESN))
             return 0;
         break;
+    case XFRM_DEV_OFFLOAD_PACKET:
+        if (x->props.mode != XFRM_MODE_TUNNEL)
+            return 0;
+        break;
     default:
-        return 0;
+        break;
     }

     work = kzalloc(sizeof(*work), GFP_KERNEL);
     if (!work)
         return -ENOMEM;

-    work->data = kzalloc(sizeof(*sa_entry), GFP_KERNEL);
-    if (!work->data) {
-        kfree(work);
-        return -ENOMEM;
+    switch (x->xso.type) {
+    case XFRM_DEV_OFFLOAD_CRYPTO:
+        data = kzalloc(sizeof(*sa_entry), GFP_KERNEL);
+        if (!data)
+            goto free_work;
+
+        INIT_WORK(&work->work, mlx5e_ipsec_modify_state);
+        break;
+    case XFRM_DEV_OFFLOAD_PACKET:
+        data = kzalloc(sizeof(struct mlx5e_ipsec_netevent_data),
+                       GFP_KERNEL);
+        if (!data)
+            goto free_work;
+
+        INIT_WORK(&work->work, mlx5e_ipsec_handle_netdev_event);
+        break;
+    default:
+        break;
     }

-    INIT_WORK(&work->work, mlx5e_ipsec_modify_state);
+    work->data = data;
     work->sa_entry = sa_entry;
     sa_entry->work = work;
     return 0;
+
+free_work:
+    kfree(work);
+    return -ENOMEM;
 }

 static int
mlx5e_ipsec_create_dwork(struct mlx5e_ipsec_sa_entry *sa_entry)
@@ -629,6 +682,12 @@ static int mlx5e_xfrm_add_state(struct xfrm_state *x,
     if (sa_entry->dwork)
         queue_delayed_work(ipsec->wq, &sa_entry->dwork->dwork,
                            MLX5_IPSEC_RESCHED);
+
+    if (x->xso.type == XFRM_DEV_OFFLOAD_PACKET &&
+        x->props.mode == XFRM_MODE_TUNNEL)
+        xa_set_mark(&ipsec->sadb, sa_entry->ipsec_obj_id,
+                    MLX5E_IPSEC_TUNNEL_SA);
+
 out:
     x->xso.offload_handle = (unsigned long)sa_entry;
     return 0;
@@ -651,6 +710,7 @@ static int mlx5e_xfrm_add_state(struct xfrm_state *x,
 static void mlx5e_xfrm_del_state(struct xfrm_state *x)
 {
     struct mlx5e_ipsec_sa_entry *sa_entry = to_ipsec_sa_entry(x);
+    struct mlx5_accel_esp_xfrm_attrs *attrs = &sa_entry->attrs;
     struct mlx5e_ipsec *ipsec = sa_entry->ipsec;
     struct mlx5e_ipsec_sa_entry *old;

@@ -659,6 +719,12 @@ static void mlx5e_xfrm_del_state(struct xfrm_state *x)
     old = xa_erase_bh(&ipsec->sadb, sa_entry->ipsec_obj_id);
     WARN_ON(old != sa_entry);
+
+    if (attrs->mode == XFRM_MODE_TUNNEL &&
+        attrs->type == XFRM_DEV_OFFLOAD_PACKET)
+        /* Make sure that no ARP requests are running in parallel */
+        flush_workqueue(ipsec->wq);
+
 }

 static void mlx5e_xfrm_free_state(struct xfrm_state *x)
@@ -683,6 +749,46 @@ static void mlx5e_xfrm_free_state(struct xfrm_state *x)
     kfree(sa_entry);
 }

+static int mlx5e_ipsec_netevent_event(struct notifier_block *nb,
+                                      unsigned long event, void *ptr)
+{
+    struct mlx5_accel_esp_xfrm_attrs *attrs;
+    struct mlx5e_ipsec_netevent_data *data;
+    struct mlx5e_ipsec_sa_entry *sa_entry;
+    struct mlx5e_ipsec *ipsec;
+    struct neighbour *n = ptr;
+    struct net_device *netdev;
+    struct xfrm_state *x;
+    unsigned long idx;
+
+    if (event != NETEVENT_NEIGH_UPDATE || !(n->nud_state & NUD_VALID))
+        return NOTIFY_DONE;
+
+    ipsec = container_of(nb, struct mlx5e_ipsec, netevent_nb);
+    xa_for_each_marked(&ipsec->sadb, idx, sa_entry, MLX5E_IPSEC_TUNNEL_SA) {
+        attrs = &sa_entry->attrs;
+
+        if (attrs->family == AF_INET) {
+            if (!neigh_key_eq32(n, &attrs->saddr.a4) &&
+                !neigh_key_eq32(n, &attrs->daddr.a4))
+                continue;
+        } else {
+            if (!neigh_key_eq128(n, &attrs->saddr.a4) &&
+                !neigh_key_eq128(n, &attrs->daddr.a4))
+                continue;
+        }
+
+        x = sa_entry->x;
+        netdev = x->xso.real_dev;
+        data = sa_entry->work->data;
+
+        neigh_ha_snapshot(data->addr, n, netdev);
+        queue_work(ipsec->wq, &sa_entry->work->work);
+    }
+
+    return NOTIFY_DONE;
+}
+
 void mlx5e_ipsec_init(struct mlx5e_priv *priv)
 {
     struct mlx5e_ipsec *ipsec;
@@ -711,6 +817,13 @@ void mlx5e_ipsec_init(struct mlx5e_priv *priv)
         goto err_aso;
     }

+    if (mlx5_ipsec_device_caps(priv->mdev) & MLX5_IPSEC_CAP_TUNNEL) {
+        ipsec->netevent_nb.notifier_call = mlx5e_ipsec_netevent_event;
+        ret = register_netevent_notifier(&ipsec->netevent_nb);
+        if (ret)
+            goto clear_aso;
+    }
+
     ret = mlx5e_accel_ipsec_fs_init(ipsec);
     if (ret)
         goto err_fs_init;
@@ -721,6 +834,9 @@ void mlx5e_ipsec_init(struct mlx5e_priv *priv)
     return;

 err_fs_init:
+    if (mlx5_ipsec_device_caps(priv->mdev) & MLX5_IPSEC_CAP_TUNNEL)
+        unregister_netevent_notifier(&ipsec->netevent_nb);
+clear_aso:
     if (mlx5_ipsec_device_caps(priv->mdev) & MLX5_IPSEC_CAP_PACKET_OFFLOAD)
         mlx5e_ipsec_aso_cleanup(ipsec);
 err_aso:
@@ -739,6 +855,8 @@ void mlx5e_ipsec_cleanup(struct mlx5e_priv *priv)
         return;

     mlx5e_accel_ipsec_fs_cleanup(ipsec);
+    if (mlx5_ipsec_device_caps(priv->mdev) & MLX5_IPSEC_CAP_TUNNEL)
+        unregister_netevent_notifier(&ipsec->netevent_nb);
     if (mlx5_ipsec_device_caps(priv->mdev) & MLX5_IPSEC_CAP_PACKET_OFFLOAD)
         mlx5e_ipsec_aso_cleanup(ipsec);
     destroy_workqueue(ipsec->wq);
diff --git
a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
index 77384ffa4451..d06c896eadb6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
@@ -144,6 +144,10 @@ struct mlx5e_ipsec_work {
     void *data;
 };

+struct mlx5e_ipsec_netevent_data {
+    u8 addr[ETH_ALEN];
+};
+
 struct mlx5e_ipsec_dwork {
     struct delayed_work dwork;
     struct mlx5e_ipsec_sa_entry *sa_entry;
@@ -169,6 +173,7 @@ struct mlx5e_ipsec {
     struct mlx5e_ipsec_tx *tx;
     struct mlx5e_ipsec_aso *aso;
     struct notifier_block nb;
+    struct notifier_block netevent_nb;
     struct mlx5_ipsec_fs *roce;
 };

From patchwork Thu Apr 13 12:29:26 2023
X-Patchwork-Id: 13210266
X-Patchwork-Delegate: kuba@kernel.org
From: Leon Romanovsky
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Leon Romanovsky, Steffen Klassert, Herbert Xu, netdev@vger.kernel.org,
    Saeed Mahameed, Raed Salem, Emeel Hakim, Simon Horman
Subject: [PATCH net-next v1 08/10] net/mlx5: Allow blocking encap changes in eswitch
Date: Thu, 13 Apr 2023 15:29:26 +0300
Message-Id: <47dc63412a5c0b8b60ff4127a54d709845b4e4de.1681388425.git.leonro@nvidia.com>
List-Id: netdev@vger.kernel.org

The existing eswitch encap option enables header encapsulation.
Unfortunately, currently available hardware can't perform double
encapsulation, which would happen once IPsec packet offload tunnel mode
is used together with the encap mode set to BASIC. As a safeguard
against this misconfiguration, provide an option to block encap
changes; it will be used by IPsec packet offload.
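The interface is a pair of calls around a counter held under the eswitch
mode_lock: while the counter is non-zero, devlink refuses to change the
encap mode. A condensed usage sketch, mirroring how patch 09 of this series
consumes it:

    /* Take the block before creating tables that rely on encap staying
     * disabled; release it on the destroy and error paths.
     */
    allow_tunnel = mlx5_eswitch_block_encap(mdev);
    if (allow_tunnel)
        flags = MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT;
    /* ... create flow tables ... */
    if (allow_tunnel)
        mlx5_eswitch_unblock_encap(mdev);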
Reviewed-by: Emeel Hakim
Signed-off-by: Leon Romanovsky
Reviewed-by: Simon Horman
---
 .../net/ethernet/mellanox/mlx5/core/eswitch.h | 14 ++++++
 .../mellanox/mlx5/core/eswitch_offloads.c     | 48 +++++++++++++++++++
 2 files changed, 62 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 19e9a77c4633..e9d68fdf68f5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -263,6 +263,7 @@ struct mlx5_esw_offload {
     const struct mlx5_eswitch_rep_ops *rep_ops[NUM_REP_TYPES];
     u8 inline_mode;
     atomic64_t num_flows;
+    u64 num_block_encap;
     enum devlink_eswitch_encap_mode encap;
     struct ida vport_metadata_ida;
     unsigned int host_number; /* ECPF supports one external host */
@@ -748,6 +749,9 @@ void mlx5_eswitch_offloads_destroy_single_fdb(struct mlx5_eswitch *master_esw,
                                               struct mlx5_eswitch *slave_esw);
 int mlx5_eswitch_reload_reps(struct mlx5_eswitch *esw);

+bool mlx5_eswitch_block_encap(struct mlx5_core_dev *dev);
+void mlx5_eswitch_unblock_encap(struct mlx5_core_dev *dev);
+
 static inline int mlx5_eswitch_num_vfs(struct mlx5_eswitch *esw)
 {
     if (mlx5_esw_allowed(esw))
@@ -761,6 +765,7 @@ mlx5_eswitch_get_slow_fdb(struct mlx5_eswitch *esw)
 {
     return esw->fdb_table.offloads.slow_fdb;
 }
+
 #else  /* CONFIG_MLX5_ESWITCH */
 /* eswitch API stubs */
 static inline int  mlx5_eswitch_init(struct mlx5_core_dev *dev) { return 0; }
@@ -805,6 +810,15 @@ mlx5_eswitch_reload_reps(struct mlx5_eswitch *esw)
 {
     return 0;
 }
+
+static inline bool mlx5_eswitch_block_encap(struct mlx5_core_dev *dev)
+{
+    return true;
+}
+
+static inline void mlx5_eswitch_unblock_encap(struct mlx5_core_dev *dev)
+{
+}
 #endif /* CONFIG_MLX5_ESWITCH */
 #endif /* __MLX5_ESWITCH_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 48036dfddd5e..b6e2709c1371 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -3586,6 +3586,47 @@ int mlx5_devlink_eswitch_inline_mode_get(struct devlink *devlink, u8 *mode)
     return err;
 }

+bool mlx5_eswitch_block_encap(struct mlx5_core_dev *dev)
+{
+    struct devlink *devlink = priv_to_devlink(dev);
+    struct mlx5_eswitch *esw;
+
+    devl_lock(devlink);
+    esw = mlx5_devlink_eswitch_get(devlink);
+    if (IS_ERR(esw)) {
+        devl_unlock(devlink);
+        /* Failure means no eswitch => not possible to change encap */
+        return true;
+    }
+
+    down_write(&esw->mode_lock);
+    if (esw->mode != MLX5_ESWITCH_LEGACY &&
+        esw->offloads.encap != DEVLINK_ESWITCH_ENCAP_MODE_NONE) {
+        up_write(&esw->mode_lock);
+        devl_unlock(devlink);
+        return false;
+    }
+
+    esw->offloads.num_block_encap++;
+    up_write(&esw->mode_lock);
+    devl_unlock(devlink);
+    return true;
+}
+
+void mlx5_eswitch_unblock_encap(struct mlx5_core_dev *dev)
+{
+    struct devlink *devlink = priv_to_devlink(dev);
+    struct mlx5_eswitch *esw;
+
+    esw = mlx5_devlink_eswitch_get(devlink);
+    if (IS_ERR(esw))
+        return;
+
+    down_write(&esw->mode_lock);
+    esw->offloads.num_block_encap--;
+    up_write(&esw->mode_lock);
+}
+
 int mlx5_devlink_eswitch_encap_mode_set(struct devlink
*devlink,
                                        enum devlink_eswitch_encap_mode encap,
                                        struct netlink_ext_ack *extack)
@@ -3627,6 +3668,13 @@ int mlx5_devlink_eswitch_encap_mode_set(struct devlink *devlink,
         goto unlock;
     }

+    if (esw->offloads.num_block_encap) {
+        NL_SET_ERR_MSG_MOD(extack,
+                           "Can't set encapsulation when IPsec SA and/or policies are configured");
+        err = -EOPNOTSUPP;
+        goto unlock;
+    }
+
     esw_destroy_offloads_fdb_tables(esw);

     esw->offloads.encap = encap;

From patchwork Thu Apr 13 12:29:27 2023
X-Patchwork-Id: 13210267
X-Patchwork-Delegate: kuba@kernel.org
From: Leon Romanovsky
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Leon Romanovsky, Steffen Klassert, Herbert Xu, netdev@vger.kernel.org,
    Saeed Mahameed, Raed Salem, Emeel Hakim, Simon Horman
Subject: [PATCH net-next v1 09/10] net/mlx5e: Create IPsec table with tunnel support only when encap is disabled
Date: Thu, 13 Apr 2023 15:29:27 +0300
Message-Id: <95b253445bedd2167a6c156d242fd47f37de6ad1.1681388425.git.leonro@nvidia.com>
List-Id: netdev@vger.kernel.org

Current hardware doesn't support double encapsulation, which happens
when IPsec packet offload tunnel mode is configured together with the
eswitch encap option. Any user attempt to add a new SA/policy after
setting the encap mode will generate the following FW syndrome:

  mlx5_core 0000:08:00.0: mlx5_cmd_out_err:803:(pid 1904):
  CREATE_FLOW_TABLE(0x930) op_mod(0x0) failed, status bad
  parameter(0x3), syndrome (0xa43321), err(-22)

Make sure that we block encap changes before creating the flow steering
tables. This is applicable only for packet offload in tunnel mode;
packet offload in transport mode and crypto offload don't have this
limitation, as they don't perform encapsulation.
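The check happens at SA creation time: if the tables were created without
tunnel support (because encap was already enabled), a tunnel-mode
packet-offload state is rejected with an extack message. Condensed from
mlx5e_xfrm_add_state() in the diff below:

    if (x->props.mode == XFRM_MODE_TUNNEL &&
        x->xso.type == XFRM_DEV_OFFLOAD_PACKET &&
        !mlx5e_ipsec_fs_tunnel_enabled(sa_entry)) {
        NL_SET_ERR_MSG_MOD(extack,
                           "Packet offload tunnel mode is disabled due to encap settings");
        return -EINVAL;
    }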
Reviewed-by: Raed Salem
Signed-off-by: Leon Romanovsky
Reviewed-by: Simon Horman
---
 .../mellanox/mlx5/core/en_accel/ipsec.c    |  8 +++++
 .../mellanox/mlx5/core/en_accel/ipsec.h    |  1 +
 .../mellanox/mlx5/core/en_accel/ipsec_fs.c | 33 +++++++++++++++++---
 3 files changed, 39 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
index b64281fd4142..0bda5a91bff6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
@@ -668,6 +668,14 @@ static int mlx5e_xfrm_add_state(struct xfrm_state *x,
     if (err)
         goto err_hw_ctx;

+    if (x->props.mode == XFRM_MODE_TUNNEL &&
+        x->xso.type == XFRM_DEV_OFFLOAD_PACKET &&
+        !mlx5e_ipsec_fs_tunnel_enabled(sa_entry)) {
+        NL_SET_ERR_MSG_MOD(extack, "Packet offload tunnel mode is disabled due to encap settings");
+        err = -EINVAL;
+        goto err_add_rule;
+    }
+
     /* We use *_bh() variant because xfrm_timer_handler(), which runs
      * in softirq context, can reach our state delete logic and we need
      * xa_erase_bh() there.
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
index d06c896eadb6..f7f7c09d2b32 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
@@ -251,6 +251,7 @@ void mlx5e_accel_ipsec_fs_del_rule(struct mlx5e_ipsec_sa_entry *sa_entry);
 int mlx5e_accel_ipsec_fs_add_pol(struct mlx5e_ipsec_pol_entry *pol_entry);
 void mlx5e_accel_ipsec_fs_del_pol(struct mlx5e_ipsec_pol_entry *pol_entry);
 void mlx5e_accel_ipsec_fs_modify(struct mlx5e_ipsec_sa_entry *sa_entry);
+bool mlx5e_ipsec_fs_tunnel_enabled(struct mlx5e_ipsec_sa_entry *sa_entry);

 int mlx5_ipsec_create_sa_ctx(struct mlx5e_ipsec_sa_entry *sa_entry);
 void mlx5_ipsec_free_sa_ctx(struct mlx5e_ipsec_sa_entry *sa_entry);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
index 4c800b54d8b6..5a8fcd30fcb1 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
@@ -4,6 +4,7 @@
 #include <linux/netdevice.h>
 #include "en.h"
 #include "en/fs.h"
+#include "eswitch.h"
 #include "ipsec.h"
 #include "fs_core.h"
 #include "lib/ipsec_fs_roce.h"
@@ -38,6 +39,7 @@ struct mlx5e_ipsec_rx {
     struct mlx5e_ipsec_rule status;
     struct mlx5e_ipsec_fc *fc;
     struct mlx5_fs_chains *chains;
+    u8 allow_tunnel_mode : 1;
 };

 struct mlx5e_ipsec_tx {
@@ -47,6 +49,7 @@ struct mlx5e_ipsec_tx {
     struct mlx5_flow_namespace *ns;
     struct mlx5e_ipsec_fc *fc;
     struct mlx5_fs_chains *chains;
+    u8 allow_tunnel_mode : 1;
 };

 /* IPsec RX flow steering */
@@ -254,7 +257,8 @@ static void rx_destroy(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec,
     mlx5_del_flow_rules(rx->sa.rule);
     mlx5_destroy_flow_group(rx->sa.group);
     mlx5_destroy_flow_table(rx->ft.sa);
-
+    if (rx->allow_tunnel_mode)
+        mlx5_eswitch_unblock_encap(mdev);
+		mlx5_eswitch_unblock_encap(mdev);
 	mlx5_del_flow_rules(rx->status.rule);
 	mlx5_modify_header_dealloc(mdev, rx->status.modify_hdr);
 	mlx5_destroy_flow_table(rx->ft.status);
@@ -305,6 +309,8 @@ static int rx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec,
 
 	/* Create FT */
 	if (mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_TUNNEL)
+		rx->allow_tunnel_mode = mlx5_eswitch_block_encap(mdev);
+	if (rx->allow_tunnel_mode)
 		flags = MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT;
 	ft = ipsec_ft_create(ns, MLX5E_ACCEL_FS_ESP_FT_LEVEL, MLX5E_NIC_PRIO, 2,
			     flags);
@@ -362,6 +368,8 @@ static int rx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec,
 err_fs:
 	mlx5_destroy_flow_table(rx->ft.sa);
 err_fs_ft:
+	if (rx->allow_tunnel_mode)
+		mlx5_eswitch_unblock_encap(mdev);
 	mlx5_del_flow_rules(rx->status.rule);
 	mlx5_modify_header_dealloc(mdev, rx->status.modify_hdr);
 err_add:
@@ -496,7 +504,8 @@ static int ipsec_counter_rule_tx(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_
 }
 
 /* IPsec TX flow steering */
-static void tx_destroy(struct mlx5e_ipsec_tx *tx, struct mlx5_ipsec_fs *roce)
+static void tx_destroy(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_tx *tx,
+		       struct mlx5_ipsec_fs *roce)
 {
 	mlx5_ipsec_fs_roce_tx_destroy(roce);
 	if (tx->chains) {
@@ -508,6 +517,8 @@ static void tx_destroy(struct mlx5e_ipsec_tx *tx, struct mlx5_ipsec_fs *roce)
 	}
 
 	mlx5_destroy_flow_table(tx->ft.sa);
+	if (tx->allow_tunnel_mode)
+		mlx5_eswitch_unblock_encap(mdev);
 	mlx5_del_flow_rules(tx->status.rule);
 	mlx5_destroy_flow_table(tx->ft.status);
 }
@@ -530,6 +541,8 @@ static int tx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_tx *tx,
 		goto err_status_rule;
 
 	if (mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_TUNNEL)
+		tx->allow_tunnel_mode = mlx5_eswitch_block_encap(mdev);
+	if (tx->allow_tunnel_mode)
 		flags = MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT;
 	ft = ipsec_ft_create(tx->ns, 1, 0, 4, flags);
 	if (IS_ERR(ft)) {
@@ -581,6 +594,8 @@ static int tx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_tx *tx,
 err_pol_ft:
 	mlx5_destroy_flow_table(tx->ft.sa);
 err_sa_ft:
+	if (tx->allow_tunnel_mode)
+		mlx5_eswitch_unblock_encap(mdev);
 	mlx5_del_flow_rules(tx->status.rule);
 err_status_rule:
 	mlx5_destroy_flow_table(tx->ft.status);
@@ -609,7 +624,7 @@ static void tx_put(struct mlx5e_ipsec *ipsec, struct mlx5e_ipsec_tx *tx)
 	if (--tx->ft.refcnt)
 		return;
 
-	tx_destroy(tx, ipsec->roce);
+	tx_destroy(ipsec->mdev, tx, ipsec->roce);
 }
 
 static struct mlx5_flow_table *tx_ft_get_policy(struct mlx5_core_dev *mdev,
@@ -1603,3 +1618,15 @@ void mlx5e_accel_ipsec_fs_modify(struct mlx5e_ipsec_sa_entry *sa_entry)
 	mlx5e_accel_ipsec_fs_del_rule(sa_entry);
 	memcpy(sa_entry, &sa_entry_shadow, sizeof(*sa_entry));
 }
+
+bool mlx5e_ipsec_fs_tunnel_enabled(struct mlx5e_ipsec_sa_entry *sa_entry)
+{
+	struct mlx5e_ipsec_rx *rx =
+		ipsec_rx(sa_entry->ipsec, sa_entry->attrs.family);
+	struct mlx5e_ipsec_tx *tx = sa_entry->ipsec->tx;
+
+	if (sa_entry->attrs.dir == XFRM_DEV_OFFLOAD_OUT)
+		return tx->allow_tunnel_mode;
+
+	return rx->allow_tunnel_mode;
+}
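A condensed userspace sketch of the block/allow handshake above. Every type
and helper here is a simplified stand-in for the real mlx5 structures and
eswitch API (block_encap() for mlx5_eswitch_block_encap() and so on), kept
only to illustrate the control flow, not the driver implementation:

/* Sketch: create-time encap blocking gates tunnel-mode SA acceptance. */
#include <stdbool.h>
#include <stdio.h>

struct eswitch {
	bool encap_fixed;	/* stand-in for the frozen encap setting */
};

/* Stand-in for mlx5_eswitch_block_encap(): freeze the encap setting and
 * report whether tunnel mode may be offered; the real call can fail. */
static bool block_encap(struct eswitch *esw)
{
	esw->encap_fixed = true;
	return true;
}

/* Stand-in for mlx5_eswitch_unblock_encap(), used on destroy/error paths. */
static void unblock_encap(struct eswitch *esw)
{
	esw->encap_fixed = false;
}

struct ipsec_tx {
	bool allow_tunnel_mode;	/* mirrors the new bitfield in this patch */
};

/* Mirrors tx_create(): only a successful block enables reformat flags. */
static void create_tx_tables(struct eswitch *esw, struct ipsec_tx *tx,
			     bool cap_tunnel)
{
	if (cap_tunnel)
		tx->allow_tunnel_mode = block_encap(esw);
	if (tx->allow_tunnel_mode)
		printf("FT created with TUNNEL_EN_REFORMAT\n");
}

/* Mirrors the mlx5e_xfrm_add_state() check added by this patch. */
static int add_tunnel_sa(const struct ipsec_tx *tx)
{
	if (!tx->allow_tunnel_mode)
		return -22;	/* -EINVAL: encap settings forbid tunnel mode */
	return 0;
}

int main(void)
{
	struct eswitch esw = { 0 };
	struct ipsec_tx tx = { 0 };

	create_tx_tables(&esw, &tx, true);
	printf("add tunnel SA -> %d\n", add_tunnel_sa(&tx));
	unblock_encap(&esw);	/* tx_destroy() counterpart */
	return 0;
}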
From patchwork Thu Apr 13 12:29:28 2023
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13210266
X-Patchwork-Delegate: kuba@kernel.org
From: Leon Romanovsky
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Leon Romanovsky, Steffen Klassert, Herbert Xu, netdev@vger.kernel.org,
    Saeed Mahameed, Raed Salem, Emeel Hakim, Simon Horman
Subject: [PATCH net-next v1 10/10] net/mlx5e: Accept tunnel mode for IPsec packet offload
Date: Thu, 13 Apr 2023 15:29:28 +0300
Message-Id: <03b551b18ed893d574c566204373499817e345ff.1681388425.git.leonro@nvidia.com>
X-Mailer: git-send-email 2.39.2

From: Leon Romanovsky

Open the mlx5 driver to accept IPsec packet offload in tunnel mode: check the
transport/tunnel mode once for all offload types, and reject tunnel mode for
packet offload only when the device lacks MLX5_IPSEC_CAP_TUNNEL.
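A simplified stand-alone view of the validation order after this change; the
enum, capability, and function names below are illustrative stand-ins rather
than the driver's own:

/* Sketch: mode check hoisted out of the per-offload-type switch. */
#include <stdio.h>

enum mode { MODE_TRANSPORT, MODE_TUNNEL, MODE_BEET };
enum offload { OFFLOAD_CRYPTO, OFFLOAD_PACKET };

#define CAP_TUNNEL 0x1	/* stands in for MLX5_IPSEC_CAP_TUNNEL */

static int validate_state(enum mode mode, enum offload type, unsigned int caps)
{
	/* Hoisted common check: only transport and tunnel may be offloaded. */
	if (mode != MODE_TRANSPORT && mode != MODE_TUNNEL)
		return -22;	/* -EINVAL */

	/* Packet offload in tunnel mode needs explicit device support. */
	if (type == OFFLOAD_PACKET && mode == MODE_TUNNEL &&
	    !(caps & CAP_TUNNEL))
		return -22;

	return 0;
}

int main(void)
{
	printf("%d\n", validate_state(MODE_TUNNEL, OFFLOAD_PACKET, 0));	/* -22 */
	printf("%d\n", validate_state(MODE_TUNNEL, OFFLOAD_PACKET, CAP_TUNNEL)); /* 0 */
	return 0;
}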
Signed-off-by: Leon Romanovsky
Reviewed-by: Simon Horman
---
 .../ethernet/mellanox/mlx5/core/en_accel/ipsec.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
index 0bda5a91bff6..5fd609d1120e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
@@ -422,6 +422,11 @@ static int mlx5e_xfrm_validate_state(struct mlx5_core_dev *mdev,
 		return -EINVAL;
 	}
 
+	if (x->props.mode != XFRM_MODE_TRANSPORT &&
+	    x->props.mode != XFRM_MODE_TUNNEL) {
+		NL_SET_ERR_MSG_MOD(extack, "Only transport and tunnel xfrm states may be offloaded");
+		return -EINVAL;
+	}
+
 	switch (x->xso.type) {
 	case XFRM_DEV_OFFLOAD_CRYPTO:
 		if (!(mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_CRYPTO)) {
@@ -429,11 +434,6 @@ static int mlx5e_xfrm_validate_state(struct mlx5_core_dev *mdev,
 			return -EINVAL;
 		}
 
-		if (x->props.mode != XFRM_MODE_TRANSPORT &&
-		    x->props.mode != XFRM_MODE_TUNNEL) {
-			NL_SET_ERR_MSG_MOD(extack, "Only transport and tunnel xfrm states may be offloaded");
-			return -EINVAL;
-		}
 		break;
 	case XFRM_DEV_OFFLOAD_PACKET:
 		if (!(mlx5_ipsec_device_caps(mdev) &
@@ -442,8 +442,9 @@ static int mlx5e_xfrm_validate_state(struct mlx5_core_dev *mdev,
 			return -EINVAL;
 		}
 
-		if (x->props.mode != XFRM_MODE_TRANSPORT) {
-			NL_SET_ERR_MSG_MOD(extack, "Only transport xfrm states may be offloaded in packet mode");
+		if (x->props.mode == XFRM_MODE_TUNNEL &&
+		    !(mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_TUNNEL)) {
+			NL_SET_ERR_MSG_MOD(extack, "Packet offload is not supported for tunnel mode");
 			return -EINVAL;
 		}