From patchwork Fri Jan 8 05:30:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12005771 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B7BE8C43381 for ; Fri, 8 Jan 2021 05:31:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 828622343B for ; Fri, 8 Jan 2021 05:31:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726992AbhAHFbh (ORCPT ); Fri, 8 Jan 2021 00:31:37 -0500 Received: from mail.kernel.org ([198.145.29.99]:35742 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1725793AbhAHFbh (ORCPT ); Fri, 8 Jan 2021 00:31:37 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id A5480233FC; Fri, 8 Jan 2021 05:30:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1610083857; bh=pMgOJOYy0PZ/+2EahATIrfbrm0OG5TontQ7SQUkPMDo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=SumcxvvRI6q+iMFM5A52oaDQW+xCC4t/UJs/M0UnNzoHRvyJLThs/5jfGuIWKajRW LHcL4cdfQIoT7412Q6SaXrOIoIy6du4Zl2YsWdQv7f4RYADXbu8yMKX6kUuJJBkQWV +DyXzjAgHmfwM5l6zy39CS5oLc6ephbCxllLO1zcuDdbquEf1zCG6Yqne/THdxA/3B vwsggP/QfQ4TOVEwupjKBUj9bDxFwUFE01LbbYfKVDAi0mQAMuIwQBid8ZYgZdYX8s pXDyX3zLwkXZ+7AKf8RddPtY1PzmgQ9DewB/8wD0QIHwTZIpyRHg3G3LhYEDnpkMTU G+ONkZogH7Oeg== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Paul Blakey , Maor Dickman , Roi Dayan , Saeed Mahameed Subject: [net-next 01/15] net/mlx5: Add HW definition of reg_c_preserve Date: Thu, 7 Jan 2021 21:30:40 -0800 Message-Id: <20210108053054.660499-2-saeed@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210108053054.660499-1-saeed@kernel.org> References: <20210108053054.660499-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Paul Blakey Add capability bit to test whether reg_c value is preserved on recirculation. 
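As a minimal sketch of how such a capability bit is consumed (assuming the usual mlx5 driver headers; the helper name is illustrative and not part of the patch), the new field is read through the generic HCA capability accessor, and a later patch in this series gates CT with mirroring on exactly this check:

/* Illustrative helper only: query the capability bit added below. */
static bool mlx5_reg_c_is_preserved(struct mlx5_core_dev *mdev)
{
	return MLX5_CAP_GEN(mdev, reg_c_preserve);
}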
Signed-off-by: Paul Blakey Signed-off-by: Maor Dickman Reviewed-by: Roi Dayan Signed-off-by: Saeed Mahameed --- include/linux/mlx5/mlx5_ifc.h | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h index 8fbddec26eb8..ec0d01e26b3e 100644 --- a/include/linux/mlx5/mlx5_ifc.h +++ b/include/linux/mlx5/mlx5_ifc.h @@ -1278,7 +1278,9 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 reserved_at_a0[0x3]; u8 ece_support[0x1]; - u8 reserved_at_a4[0x7]; + u8 reserved_at_a4[0x5]; + u8 reg_c_preserve[0x1]; + u8 reserved_at_aa[0x1]; u8 log_max_srq[0x5]; u8 reserved_at_b0[0x2]; u8 ts_cqe_to_dest_cqn[0x1]; From patchwork Fri Jan 8 05:30:41 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12005767 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 519EFC433DB for ; Fri, 8 Jan 2021 05:31:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1A192233EE for ; Fri, 8 Jan 2021 05:31:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727272AbhAHFbi (ORCPT ); Fri, 8 Jan 2021 00:31:38 -0500 Received: from mail.kernel.org ([198.145.29.99]:35752 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727001AbhAHFbh (ORCPT ); Fri, 8 Jan 2021 00:31:37 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 2573B23406; Fri, 8 Jan 2021 05:30:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1610083857; bh=PRD0C0CukMOoliU1TJTVvJ7qCD21WQBO6LpD3X7upIo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ks+y6OuX0XFs8TKF/ZGPhEw+0wkwvFm2m1FmZW3c2c5Th6UErI/YSbTVK4h/IONPS abglZCRP8qN8InBc3SrjNQSdGPxOkNoxAoWdh9Ind+0V9tLNHnuE73Tm+U+E122Nnc cJrOvLc9bM2QlfO2PIZ2I1uF9QZX0L6QFukQxqbydGOZxBon1sawUm9LOOGqRsoEC1 MTT+/KwoNwvdtFUTbKGjpdB9GiBQHU/9xYre+XxjwBCiqneYbXr0/p7PdrEUgwluVL Klrc9t5L3BPjZ+jWpACr3DBDhraoH93NgauF9gMuYViSX6nrNkpJzotFKDKLztq5Ow X8a0FkRZGlvTQ== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Eli Cohen , Roi Dayan , Saeed Mahameed Subject: [net-next 02/15] net/mlx5e: Simplify condition on esw_vport_enable_qos() Date: Thu, 7 Jan 2021 21:30:41 -0800 Message-Id: <20210108053054.660499-3-saeed@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210108053054.660499-1-saeed@kernel.org> References: <20210108053054.660499-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Eli Cohen esw->qos.enabled will only be true if both MLX5_CAP_GEN(dev, qos) and MLX5_CAP_QOS(dev, esw_scheduling) are true. Therefore, remove them from the condition in and rely only on esw->qos.enabled. 
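The invariant relied on here is established on the QoS enable path; roughly (a paraphrased sketch, not the exact upstream function), esw->qos.enabled is only ever set after both capabilities have been verified:

/* Paraphrased sketch: the enable path bails out unless both capabilities
 * are present, so esw->qos.enabled already implies them.
 */
static void esw_qos_enable_sketch(struct mlx5_eswitch *esw)
{
	struct mlx5_core_dev *dev = esw->dev;

	if (!MLX5_CAP_GEN(dev, qos) || !MLX5_CAP_QOS(dev, esw_scheduling))
		return;

	/* ... create the scheduling (TSAR) element ... */
	esw->qos.enabled = true;
}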
Fixes: 1bd27b11c1df ("net/mlx5: Introduce E-switch QoS management") Signed-off-by: Eli Cohen Reviewed-by: Roi Dayan Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c index da901e364656..876e6449edb3 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c @@ -1042,8 +1042,7 @@ static int esw_vport_enable_qos(struct mlx5_eswitch *esw, void *vport_elem; int err = 0; - if (!esw->qos.enabled || !MLX5_CAP_GEN(dev, qos) || - !MLX5_CAP_QOS(dev, esw_scheduling)) + if (!esw->qos.enabled) return 0; if (vport->qos.enabled) From patchwork Fri Jan 8 05:30:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12005775 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 88B24C433E9 for ; Fri, 8 Jan 2021 05:31:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 621AC233EE for ; Fri, 8 Jan 2021 05:31:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727485AbhAHFbn (ORCPT ); Fri, 8 Jan 2021 00:31:43 -0500 Received: from mail.kernel.org ([198.145.29.99]:35764 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727145AbhAHFbi (ORCPT ); Fri, 8 Jan 2021 00:31:38 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 920742343B; Fri, 8 Jan 2021 05:30:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1610083857; bh=3OMEF2lY2e3KG2yS3rlIrF/fVnWCck/jj3EHH6R2dXU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ZFD6hPO0n3ztFBLmtpcFWGTevFFTvels+L0F9j5sqTA/JikuuFaXff0iyZXQeabOy Ho03DXGmH0jyM3I5uybVWwH4Yqlt8msi264SNu7ouZkLIStHAZ7SNs4LIrU25Ow8dm wh5ImN84iD01AyzJq0hVp0OyuvEl/0IPF8stPUPa9vg3GEE5rhPrKUJHg1ZzyuJ8vr xFapE/uiTApJKV6/4hMFTOfkzO8lzc5wsAqGnE2SaysfaIeDV6XmzEkH4hGfNtFv8b 5Kk70EBf4FiYSGZZqp2PDX3vHbartKeBuDQBVFFxdcG7+WjldJE9C0WtXnLsCtNDJ5 SGn76RL9JjY3A== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Eli Cohen , Roi Dayan , Mark Bloch , Saeed Mahameed Subject: [net-next 03/15] net/mlx5: E-Switch, use new cap as condition for mpls over udp Date: Thu, 7 Jan 2021 21:30:42 -0800 Message-Id: <20210108053054.660499-4-saeed@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210108053054.660499-1-saeed@kernel.org> References: <20210108053054.660499-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Eli Cohen Use tunnel_stateless_mpls_over_udp instead of MLX5_FLEX_PROTO_CW_MPLS_UDP since new devices have native support for mpls over udp and do not rely on flex parser. 
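Stated as a standalone predicate (a hypothetical helper that mirrors the condition in the hunk below), the new capability is preferred and the flex-parser bit is kept only as a fallback for older devices:

/* Hypothetical helper: native MPLS-over-UDP support, or the legacy
 * flex-parser protocol bit as a fallback.
 */
static bool mpls_over_udp_offload_supported(struct mlx5_core_dev *mdev)
{
	return MLX5_CAP_ETH(mdev, tunnel_stateless_mpls_over_udp) ||
	       (MLX5_CAP_GEN(mdev, flex_parser_protocols) &
		MLX5_FLEX_PROTO_CW_MPLS_UDP);
}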
Signed-off-by: Eli Cohen Reviewed-by: Roi Dayan Reviewed-by: Mark Bloch Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_mplsoudp.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_mplsoudp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_mplsoudp.c index 1f9526244222..3479672e84cf 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_mplsoudp.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_mplsoudp.c @@ -81,8 +81,8 @@ static int parse_tunnel(struct mlx5e_priv *priv, if (!enc_keyid.mask->keyid) return 0; - if (!(MLX5_CAP_GEN(priv->mdev, flex_parser_protocols) & - MLX5_FLEX_PROTO_CW_MPLS_UDP)) + if (!MLX5_CAP_ETH(priv->mdev, tunnel_stateless_mpls_over_udp) && + !(MLX5_CAP_GEN(priv->mdev, flex_parser_protocols) & MLX5_FLEX_PROTO_CW_MPLS_UDP)) return -EOPNOTSUPP; flow_rule_match_mpls(rule, &match); From patchwork Fri Jan 8 05:30:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12005769 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 67035C433E6 for ; Fri, 8 Jan 2021 05:31:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 33C082343B for ; Fri, 8 Jan 2021 05:31:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1725844AbhAHFbj (ORCPT ); Fri, 8 Jan 2021 00:31:39 -0500 Received: from mail.kernel.org ([198.145.29.99]:35768 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727001AbhAHFbj (ORCPT ); Fri, 8 Jan 2021 00:31:39 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 0F99D23447; Fri, 8 Jan 2021 05:30:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1610083858; bh=F49kLDrCed6uyBjZ0IqYXQ53NkEM/zkekFsLsal6OB8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Tt57tPWoExRpz9XTHcBRxl+cHcjiJETWmeA096lrLNNxJ7jQXevKBSeSFtBD75Izw nfE4292Pkzp7kdDg/RbbY8E8VGAAakRh7/NtOgTD5tGJUraO29WbUQn+qe8yHzbg9s QVORrOdm360d+YU7weN8dmkoRBqEu6Kx7EzKR/se0FIpvaQfzyrg1UjY/6M2HqRXD9 ITwh5VQqRkfs4zIyXD7286JZ/zp6Ln4HYqt2lneR1hVZeDwu3eo6OKFfUsdCMAgK3H 6wDhdODK+Hc5aGVsw4X7e4OWmnvHfdOevZub4FVTrPipz0afZ34nEzK4BYonQEkeqJ 88W1z5BjZztQA== From: Saeed Mahameed To: "David S. 
Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Jianbo Liu , Oz Shlomo , Roi Dayan , Saeed Mahameed Subject: [net-next 04/15] net/mlx5e: E-Switch, Offload all chain 0 priorities when modify header and forward action is not supported Date: Thu, 7 Jan 2021 21:30:43 -0800 Message-Id: <20210108053054.660499-5-saeed@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210108053054.660499-1-saeed@kernel.org> References: <20210108053054.660499-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Jianbo Liu Miss path handling of tc multi chain filters (i.e. filters that are defined on chain > 0) requires the hardware to communicate to the driver the last chain that was processed. This is possible only when the hardware is capable of performing the combination of modify header and forward to table actions. Currently, if the hardware is missing this capability then the driver only offloads rules that are defined on tc chain 0 prio 1. However, this restriction can be relaxed because packets that miss from chain 0 are processed through all the priorities by tc software. Allow the offload of all the supported priorities for chain 0 even when the hardware is not capable to perform modify header and goto table actions. Fixes: 0b3a8b6b5340 ("net/mlx5: E-Switch: Fix using fwd and modify when firmware doesn't support it") Signed-off-by: Jianbo Liu Reviewed-by: Oz Shlomo Reviewed-by: Roi Dayan Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 6 ------ drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c | 3 --- 2 files changed, 9 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c index 4cdf834fa74a..56aa39ac1a1c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c @@ -1317,12 +1317,6 @@ mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv, int err = 0; int out_index; - if (!mlx5_chains_prios_supported(esw_chains(esw)) && attr->prio != 1) { - NL_SET_ERR_MSG_MOD(extack, - "E-switch priorities unsupported, upgrade FW"); - return -EOPNOTSUPP; - } - /* We check chain range only for tc flows. * For ft flows, we checked attr->chain was originally 0 and set it to * FDB_FT_CHAIN which is outside tc range. 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c index 947f346bdc2d..fa61d305990c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c @@ -141,9 +141,6 @@ u32 mlx5_chains_get_nf_ft_chain(struct mlx5_fs_chains *chains) u32 mlx5_chains_get_prio_range(struct mlx5_fs_chains *chains) { - if (!mlx5_chains_prios_supported(chains)) - return 1; - if (mlx5_chains_ignore_flow_level_supported(chains)) return UINT_MAX; From patchwork Fri Jan 8 05:30:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12005831 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D38EDC433DB for ; Fri, 8 Jan 2021 05:32:31 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id AFEF8233FB for ; Fri, 8 Jan 2021 05:32:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727678AbhAHFcS (ORCPT ); Fri, 8 Jan 2021 00:32:18 -0500 Received: from mail.kernel.org ([198.145.29.99]:35860 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727486AbhAHFcR (ORCPT ); Fri, 8 Jan 2021 00:32:17 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 852F7235F7; Fri, 8 Jan 2021 05:30:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1610083858; bh=hOc3iTrAoZbIJqWQo17f3gKH4nlVqWwkrhEU2i7TGsM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=n3A/7DcxKP9PHrPgzwaAgSp5sbd3N9Gh0oTA0bI9R2oF6OfRblxkvyaKv4ViGeodt e2NIDWNx/86w+YjlhyTpmcxguEhZs9ke9hXetdtboZOoGFNRU0axLWVt0UGrfTELoa 3IsqiRIMt2vHIH8maIYB+3r+7TkM1mVLnckWDa6c8Sjs7sJzTZZfQQSUWVkAP6fBUt cGMjIL6ogjG9O9U+/A4adT/PwBdK9/WuiTPj5XeMK4Fn3ouSZkW0dH5PLayCafUWzx dKVvcpPQMeGHsnPJR1c7dgaNzt5Dj6f0D4EJaa4fXG0aQhpxgH0I1kuFpN/6YXjrC9 FGmogeeQzkOcg== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Roi Dayan , Oz Shlomo , Paul Blakey , Saeed Mahameed Subject: [net-next 05/15] net/mlx5e: CT: Pass null instead of zero spec Date: Thu, 7 Jan 2021 21:30:44 -0800 Message-Id: <20210108053054.660499-6-saeed@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210108053054.660499-1-saeed@kernel.org> References: <20210108053054.660499-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Roi Dayan No need to pass zero spec to mlx5_add_flow_rules() as the function can handle null spec. 
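For context, a NULL spec to mlx5_add_flow_rules() yields a match-all entry; a minimal sketch of such a catch-all forward rule follows (names illustrative), and patch 08/15 later in this series adds essentially this helper as tc_ct_add_miss_rule():

/* Match-all miss rule: no match criteria are needed, so NULL replaces a
 * zeroed struct mlx5_flow_spec.
 */
static struct mlx5_flow_handle *
add_catch_all_fwd(struct mlx5_flow_table *ft, struct mlx5_flow_table *next_ft)
{
	struct mlx5_flow_destination dest = {};
	struct mlx5_flow_act act = {};

	act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
	dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
	dest.ft = next_ft;

	return mlx5_add_flow_rules(ft, NULL, &act, &dest, 1);
}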
Signed-off-by: Roi Dayan Reviewed-by: Oz Shlomo Reviewed-by: Paul Blakey Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c index e521254d886e..a0b193181ba5 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c @@ -1220,9 +1220,8 @@ static int tc_ct_pre_ct_add_rules(struct mlx5_ct_ft *ct_ft, pre_ct->flow_rule = rule; /* add miss rule */ - memset(spec, 0, sizeof(*spec)); dest.ft = nat ? ct_priv->ct_nat : ct_priv->ct; - rule = mlx5_add_flow_rules(ft, spec, &flow_act, &dest, 1); + rule = mlx5_add_flow_rules(ft, NULL, &flow_act, &dest, 1); if (IS_ERR(rule)) { err = PTR_ERR(rule); ct_dbg("Failed to add pre ct miss rule zone %d", zone); From patchwork Fri Jan 8 05:30:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12005837 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BA497C433DB for ; Fri, 8 Jan 2021 05:32:38 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8A1AA233FB for ; Fri, 8 Jan 2021 05:32:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727614AbhAHFcS (ORCPT ); Fri, 8 Jan 2021 00:32:18 -0500 Received: from mail.kernel.org ([198.145.29.99]:35858 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727489AbhAHFcR (ORCPT ); Fri, 8 Jan 2021 00:32:17 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 05E9323403; Fri, 8 Jan 2021 05:30:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1610083859; bh=mdwVSbRH5eEhxTjqaDtaoXYM4NwSCcLKgGfFHlIWcNA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=r/klehQv5fq00zQNFBhlLsi4G+SvH1ioQ5MEscK99N1sZYxYxcO9rogniABHnYOg+ INBuzOzN0eRlTjxeydYPfjhCC69DsmhklV6+4QUTSuEV5vopQdP+EjcDk4t5lXC1I/ swySzxyxVHRkKJDhIDtA2jQ+02PAbKKCw/uUsoWCpJTQ24EVaVW/fEFOlAq98udvV9 sSe/EeKGPADL8EbHqFtdBoxu1wIir0XbXJ+L8BKnynjL/mck1FLHp+TFVt+xrSSlGT +JPh/XhPgnwOAhnP+EVtpLTrC9Ksqs6u0oxoi94WmwA1eVoHfOVSz70J2Uq2KnIC9m v2XX0jDCrf2vw== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Roi Dayan , Oz Shlomo , Paul Blakey , Saeed Mahameed Subject: [net-next 06/15] net/mlx5e: Remove redundant initialization to null Date: Thu, 7 Jan 2021 21:30:45 -0800 Message-Id: <20210108053054.660499-7-saeed@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210108053054.660499-1-saeed@kernel.org> References: <20210108053054.660499-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Roi Dayan miss_rule and prio_s args are not being referenced before assigned so there is no need to init them. 
Signed-off-by: Roi Dayan Reviewed-by: Oz Shlomo Reviewed-by: Paul Blakey Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c index fa61d305990c..381325b4a863 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c @@ -538,13 +538,13 @@ mlx5_chains_create_prio(struct mlx5_fs_chains *chains, u32 chain, u32 prio, u32 level) { int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in); - struct mlx5_flow_handle *miss_rule = NULL; + struct mlx5_flow_handle *miss_rule; struct mlx5_flow_group *miss_group; struct mlx5_flow_table *next_ft; struct mlx5_flow_table *ft; - struct prio *prio_s = NULL; struct fs_chain *chain_s; struct list_head *pos; + struct prio *prio_s; u32 *flow_group_in; int err; From patchwork Fri Jan 8 05:30:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12005777 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9D4DFC433E0 for ; Fri, 8 Jan 2021 05:32:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5EA34233FC for ; Fri, 8 Jan 2021 05:32:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727695AbhAHFcT (ORCPT ); Fri, 8 Jan 2021 00:32:19 -0500 Received: from mail.kernel.org ([198.145.29.99]:35864 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727145AbhAHFcR (ORCPT ); Fri, 8 Jan 2021 00:32:17 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 77415235FA; Fri, 8 Jan 2021 05:30:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1610083859; bh=eG6R1DIS9cGxxRKoIEcaWliWd3ODZo8VQnU1QivRPAk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=oIp73Xo4rkvv9JF2w2lWE5Yf0nItFdAn0tJVK82u4cE/aA6beJSwV02aJSX4XyeVP RhxNoDifPe0RSPLxU1AJr0a/X1LpoRdcMV0xXGuZ4n+9+bq1ON8bIMXTO1H7eIM6YL z5CGuAAMWVJINKlIJeFtgRapIMDwWopKwdQRl9MRjHruOazywBdSJTPhJEMsDRWUdP QkWTuHYEssBVhRxGUJNBNbpV7MmvnfxYaD34Yn+u3wRJfeb+3cOFG29vKqJM0lFJct eIbbNgSRHjq3+yWlpTWK0D4dCGk3HjtH9BQJw1UYFT9lU6iH7BWoVExT+S6vYNAQ6m T2/0lkrmqzmxg== From: Saeed Mahameed To: "David S. 
Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Roi Dayan , Paul Blakey , Saeed Mahameed Subject: [net-next 07/15] net/mlx5e: CT: Remove redundant usage of zone mask Date: Thu, 7 Jan 2021 21:30:46 -0800 Message-Id: <20210108053054.660499-8-saeed@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210108053054.660499-1-saeed@kernel.org> References: <20210108053054.660499-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Roi Dayan The zone member is of type u16 so there is no reason to apply the zone mask on it. This is also matching the call to set a match in other places which don't need and don't apply the mask. Signed-off-by: Roi Dayan Reviewed-by: Paul Blakey Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c index a0b193181ba5..d7ecd5e5f7c4 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c @@ -703,9 +703,7 @@ mlx5_tc_ct_entry_add_rule(struct mlx5_tc_ct_priv *ct_priv, attr->flags |= MLX5_ESW_ATTR_FLAG_NO_IN_PORT; mlx5_tc_ct_set_tuple_match(netdev_priv(ct_priv->netdev), spec, flow_rule); - mlx5e_tc_match_to_reg_match(spec, ZONE_TO_REG, - entry->tuple.zone & MLX5_CT_ZONE_MASK, - MLX5_CT_ZONE_MASK); + mlx5e_tc_match_to_reg_match(spec, ZONE_TO_REG, entry->tuple.zone, MLX5_CT_ZONE_MASK); zone_rule->rule = mlx5_tc_rule_insert(priv, spec, attr); if (IS_ERR(zone_rule->rule)) { From patchwork Fri Jan 8 05:30:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12005833 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5CFAFC433E0 for ; Fri, 8 Jan 2021 05:32:33 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3A4D0233EE for ; Fri, 8 Jan 2021 05:32:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727834AbhAHFcc (ORCPT ); Fri, 8 Jan 2021 00:32:32 -0500 Received: from mail.kernel.org ([198.145.29.99]:35862 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727495AbhAHFcS (ORCPT ); Fri, 8 Jan 2021 00:32:18 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 07490235FC; Fri, 8 Jan 2021 05:30:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1610083860; bh=jl1SFtCgX4vu4c2Smrtp/L37oy72Y1Dx5tuHWugVtKc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=UwHkuhyraX/ZS8dxIjKsbrEpKAQ9DyifryfsafSDh1BtARE7IBdcYCWbyXTEdXVMa q5Fq/jqyQptlddwfUd0baAd+dD/h8uMB5/IV1CIyKHAez05+7kI3qTYQ9iJucQ+KaT xGNh3pJiFUaD/27Ow5SBfrMqzVj3vPnhzK1Vt++sgxEJneJ5TS2Ft5Fm7fcV5Gt1PY pQUV2Ztd73B096l7FaPxIZ3AijCMDpGarBPgusEOM/pz5rtQxQCPFX7CQVJS6AI5Cm 
i+70xQaMfpR9H6jgtxeBpr+otBz0gDhpaivGcN/83U0gxxfHox9jeTxTKU4JmL6k7a Ip+sO68fb9UVQ== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Roi Dayan , Paul Blakey , Saeed Mahameed Subject: [net-next 08/15] net/mlx5e: CT: Preparation for offloading +trk+new ct rules Date: Thu, 7 Jan 2021 21:30:47 -0800 Message-Id: <20210108053054.660499-9-saeed@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210108053054.660499-1-saeed@kernel.org> References: <20210108053054.660499-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Roi Dayan Connection tracking associates the connection state per packet. The first packet of a connection is assigned with the +trk+new state. The connection enters the established state once a packet is seen on the other direction. Currently we offload only the established flows. However, UDP traffic using source port entropy (e.g. vxlan, RoCE) will never enter the established state. Such protocols do not require stateful processing, and therefore could be offloaded. The change in the model is that a miss on the CT table will be forwarded to a new +trk+new ct table and a miss there will be forwarded to the slow path table. Signed-off-by: Roi Dayan Reviewed-by: Paul Blakey Signed-off-by: Saeed Mahameed --- .../ethernet/mellanox/mlx5/core/en/tc_ct.c | 104 ++++++++++++++++-- 1 file changed, 96 insertions(+), 8 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c index d7ecd5e5f7c4..6dac2fabb7f5 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c @@ -21,6 +21,7 @@ #include "en.h" #include "en_tc.h" #include "en_rep.h" +#include "fs_core.h" #define MLX5_CT_ZONE_BITS (mlx5e_tc_attr_to_reg_mappings[ZONE_TO_REG].mlen * 8) #define MLX5_CT_ZONE_MASK GENMASK(MLX5_CT_ZONE_BITS - 1, 0) @@ -50,6 +51,9 @@ struct mlx5_tc_ct_priv { struct mlx5_flow_table *ct; struct mlx5_flow_table *ct_nat; struct mlx5_flow_table *post_ct; + struct mlx5_flow_table *trk_new_ct; + struct mlx5_flow_group *miss_grp; + struct mlx5_flow_handle *miss_rule; struct mutex control_lock; /* guards parallel adds/dels */ struct mutex shared_counter_lock; struct mapping_ctx *zone_mapping; @@ -1490,14 +1494,14 @@ mlx5_tc_ct_del_ft_cb(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_ft *ft) * | set zone * v * +--------------------+ - * + CT (nat or no nat) + - * + tuple + zone match + - * +--------------------+ - * | set mark - * | set labels_id - * | set established - * | set zone_restore - * | do nat (if needed) + * + CT (nat or no nat) + miss +---------------------+ miss + * + tuple + zone match +----------------->+ trk_new_ct +-------> SW + * +--------------------+ + vxlan||roce match + + * | set mark +---------------------+ + * | set labels_id | set ct_state +trk+new + * | set established | set zone_restore + * | set zone_restore v + * | do nat (if needed) post_ct * v * +--------------+ * + post_ct + original filter actions @@ -1893,6 +1897,72 @@ mlx5_tc_ct_init_check_support(struct mlx5e_priv *priv, return mlx5_tc_ct_init_check_nic_support(priv, err_msg); } +static struct mlx5_flow_handle * +tc_ct_add_miss_rule(struct mlx5_flow_table *ft, + struct mlx5_flow_table *next_ft) +{ + struct mlx5_flow_destination dest = {}; + struct mlx5_flow_act act = {}; + + act.flags = FLOW_ACT_IGNORE_FLOW_LEVEL | FLOW_ACT_NO_APPEND; + act.action = 
MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; + dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE; + dest.ft = next_ft; + + return mlx5_add_flow_rules(ft, NULL, &act, &dest, 1); +} + +static int +tc_ct_add_ct_table_miss_rule(struct mlx5_tc_ct_priv *ct_priv) +{ + int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in); + struct mlx5_flow_handle *miss_rule; + struct mlx5_flow_group *miss_group; + int max_fte = ct_priv->ct->max_fte; + u32 *flow_group_in; + int err = 0; + + flow_group_in = kvzalloc(inlen, GFP_KERNEL); + if (!flow_group_in) + return -ENOMEM; + + /* create miss group */ + MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, + max_fte - 2); + MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, + max_fte - 1); + miss_group = mlx5_create_flow_group(ct_priv->ct, flow_group_in); + if (IS_ERR(miss_group)) { + err = PTR_ERR(miss_group); + goto err_miss_grp; + } + + /* add miss rule to next fdb */ + miss_rule = tc_ct_add_miss_rule(ct_priv->ct, ct_priv->trk_new_ct); + if (IS_ERR(miss_rule)) { + err = PTR_ERR(miss_rule); + goto err_miss_rule; + } + + ct_priv->miss_grp = miss_group; + ct_priv->miss_rule = miss_rule; + kvfree(flow_group_in); + return 0; + +err_miss_rule: + mlx5_destroy_flow_group(miss_group); +err_miss_grp: + kvfree(flow_group_in); + return err; +} + +static void +tc_ct_del_ct_table_miss_rule(struct mlx5_tc_ct_priv *ct_priv) +{ + mlx5_del_flow_rules(ct_priv->miss_rule); + mlx5_destroy_flow_group(ct_priv->miss_grp); +} + #define INIT_ERR_PREFIX "tc ct offload init failed" struct mlx5_tc_ct_priv * @@ -1962,6 +2032,18 @@ mlx5_tc_ct_init(struct mlx5e_priv *priv, struct mlx5_fs_chains *chains, goto err_post_ct_tbl; } + ct_priv->trk_new_ct = mlx5_chains_create_global_table(chains); + if (IS_ERR(ct_priv->trk_new_ct)) { + err = PTR_ERR(ct_priv->trk_new_ct); + mlx5_core_warn(dev, "%s, failed to create trk new ct table err: %d", + INIT_ERR_PREFIX, err); + goto err_trk_new_ct_tbl; + } + + err = tc_ct_add_ct_table_miss_rule(ct_priv); + if (err) + goto err_init_ct_tbl; + idr_init(&ct_priv->fte_ids); mutex_init(&ct_priv->control_lock); mutex_init(&ct_priv->shared_counter_lock); @@ -1971,6 +2053,10 @@ mlx5_tc_ct_init(struct mlx5e_priv *priv, struct mlx5_fs_chains *chains, return ct_priv; +err_init_ct_tbl: + mlx5_chains_destroy_global_table(chains, ct_priv->trk_new_ct); +err_trk_new_ct_tbl: + mlx5_chains_destroy_global_table(chains, ct_priv->post_ct); err_post_ct_tbl: mlx5_chains_destroy_global_table(chains, ct_priv->ct_nat); err_ct_nat_tbl: @@ -1997,6 +2083,8 @@ mlx5_tc_ct_clean(struct mlx5_tc_ct_priv *ct_priv) chains = ct_priv->chains; + tc_ct_del_ct_table_miss_rule(ct_priv); + mlx5_chains_destroy_global_table(chains, ct_priv->trk_new_ct); mlx5_chains_destroy_global_table(chains, ct_priv->post_ct); mlx5_chains_destroy_global_table(chains, ct_priv->ct_nat); mlx5_chains_destroy_global_table(chains, ct_priv->ct); From patchwork Fri Jan 8 05:30:48 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12005825 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org 
[198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B07E7C433E0 for ; Fri, 8 Jan 2021 05:32:25 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 72913233EE for ; Fri, 8 Jan 2021 05:32:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727778AbhAHFcZ (ORCPT ); Fri, 8 Jan 2021 00:32:25 -0500 Received: from mail.kernel.org ([198.145.29.99]:35866 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727617AbhAHFcT (ORCPT ); Fri, 8 Jan 2021 00:32:19 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 73987235FF; Fri, 8 Jan 2021 05:31:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1610083860; bh=z0P2B5cF3JnhqoaBim1peyysRdd7H8ydax5UE9QTjDM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=WRntdumRV+CCnMkXrG4RpTu+stOKIgN5FjtaVgF6UARM2kfm/djenW+vsji9b0QI9 14HO5n+2YDr3HAnLv3W3m0bc/tHZnfax3MSVAEePNCwK2Gg0kx9LHh2UOiAQk5V9Au YZ1jdmpYMt5egPekJSWNsdofUyO+ybuPTMk1sjrUHaeWR+O1ZvIBHrRTw55Z5YqA0N 2K9rmc6mZEhZSwpqjPXqLlF6XtRSmn/7FZUw4ekWhqPYj++5C5ciGYqC6wM2Dg8ZfT rRWry77RArhxnHYV8xI0ceyicd0mxT70jVC0wFNuZ3pBpRWFkP1czNvUnrbbysjxat h/Vs/2MWHgUBg== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Roi Dayan , Paul Blakey , Saeed Mahameed Subject: [net-next 09/15] net/mlx5e: CT: Support offload of +trk+new ct rules Date: Thu, 7 Jan 2021 21:30:48 -0800 Message-Id: <20210108053054.660499-10-saeed@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210108053054.660499-1-saeed@kernel.org> References: <20210108053054.660499-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Roi Dayan Add support to offload +trk+new rules for terminating flows for udp protocols using source port entropy. This kind of traffic will never be considered connect in conntrack and thus never set as established so no need to keep track of them in SW conntrack and offload this traffic based on dst port. In this commit we support only the default registered vxlan port, RoCE and Geneve ports. Using the registered ports assume the traffic is that of the registered protocol. 
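The "registered port" assumption reduces to a destination-port check; as a sketch (helper name hypothetical; the patch itself keeps these values in a default_udp_ports[] array, visible in the diff below):

/* Only these well-known UDP destination ports are treated as
 * source-port-entropy protocols and offloaded in the +trk+new state.
 */
static bool is_default_entropy_dst_port(__be16 dst_port)
{
	switch (ntohs(dst_port)) {
	case 4789:	/* IANA_VXLAN_UDP_PORT */
	case 4791:	/* ROCE_V2_UDP_DPORT */
	case 6081:	/* GENEVE_UDP_PORT */
		return true;
	default:
		return false;
	}
}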
Signed-off-by: Roi Dayan Reviewed-by: Paul Blakey Signed-off-by: Saeed Mahameed --- .../ethernet/mellanox/mlx5/core/en/tc_ct.c | 228 +++++++++++++++++- .../ethernet/mellanox/mlx5/core/en/tc_ct.h | 6 + .../net/ethernet/mellanox/mlx5/core/en_tc.c | 16 +- 3 files changed, 236 insertions(+), 14 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c index 6dac2fabb7f5..b0c357f755d4 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c @@ -25,9 +25,6 @@ #define MLX5_CT_ZONE_BITS (mlx5e_tc_attr_to_reg_mappings[ZONE_TO_REG].mlen * 8) #define MLX5_CT_ZONE_MASK GENMASK(MLX5_CT_ZONE_BITS - 1, 0) -#define MLX5_CT_STATE_ESTABLISHED_BIT BIT(1) -#define MLX5_CT_STATE_TRK_BIT BIT(2) -#define MLX5_CT_STATE_NAT_BIT BIT(3) #define MLX5_FTE_ID_BITS (mlx5e_tc_attr_to_reg_mappings[FTEID_TO_REG].mlen * 8) #define MLX5_FTE_ID_MAX GENMASK(MLX5_FTE_ID_BITS - 1, 0) @@ -39,6 +36,17 @@ #define ct_dbg(fmt, args...)\ netdev_dbg(ct_priv->netdev, "ct_debug: " fmt "\n", ##args) +#define IANA_VXLAN_UDP_PORT 4789 +#define ROCE_V2_UDP_DPORT 4791 +#define GENEVE_UDP_PORT 6081 +#define DEFAULT_UDP_PORTS 3 + +static int default_udp_ports[] = { + IANA_VXLAN_UDP_PORT, + ROCE_V2_UDP_DPORT, + GENEVE_UDP_PORT, +}; + struct mlx5_tc_ct_priv { struct mlx5_core_dev *dev; const struct net_device *netdev; @@ -88,6 +96,16 @@ struct mlx5_tc_ct_pre { struct mlx5_modify_hdr *modify_hdr; }; +struct mlx5_tc_ct_trk_new_rule { + struct mlx5_flow_handle *flow_rule; + struct list_head list; +}; + +struct mlx5_tc_ct_trk_new_rules { + struct list_head rules; + struct mlx5_modify_hdr *modify_hdr; +}; + struct mlx5_ct_ft { struct rhash_head node; u16 zone; @@ -98,6 +116,8 @@ struct mlx5_ct_ft { struct rhashtable ct_entries_ht; struct mlx5_tc_ct_pre pre_ct; struct mlx5_tc_ct_pre pre_ct_nat; + struct mlx5_tc_ct_trk_new_rules trk_new_rules; + struct nf_conn *tmpl; }; struct mlx5_ct_tuple { @@ -1064,7 +1084,7 @@ mlx5_tc_ct_match_add(struct mlx5_tc_ct_priv *priv, { struct flow_rule *rule = flow_cls_offload_flow_rule(f); struct flow_dissector_key_ct *mask, *key; - bool trk, est, untrk, unest, new; + bool trk, est, untrk, unest, new, unnew; u32 ctstate = 0, ctstate_mask = 0; u16 ct_state_on, ct_state_off; u16 ct_state, ct_state_mask; @@ -1102,19 +1122,16 @@ mlx5_tc_ct_match_add(struct mlx5_tc_ct_priv *priv, new = ct_state_on & TCA_FLOWER_KEY_CT_FLAGS_NEW; est = ct_state_on & TCA_FLOWER_KEY_CT_FLAGS_ESTABLISHED; untrk = ct_state_off & TCA_FLOWER_KEY_CT_FLAGS_TRACKED; + unnew = ct_state_off & TCA_FLOWER_KEY_CT_FLAGS_NEW; unest = ct_state_off & TCA_FLOWER_KEY_CT_FLAGS_ESTABLISHED; ctstate |= trk ? MLX5_CT_STATE_TRK_BIT : 0; + ctstate |= new ? MLX5_CT_STATE_NEW_BIT : 0; ctstate |= est ? MLX5_CT_STATE_ESTABLISHED_BIT : 0; ctstate_mask |= (untrk || trk) ? MLX5_CT_STATE_TRK_BIT : 0; + ctstate_mask |= (unnew || new) ? MLX5_CT_STATE_NEW_BIT : 0; ctstate_mask |= (unest || est) ? 
MLX5_CT_STATE_ESTABLISHED_BIT : 0; - if (new) { - NL_SET_ERR_MSG_MOD(extack, - "matching on ct_state +new isn't supported"); - return -EOPNOTSUPP; - } - if (mask->ct_zone) mlx5e_tc_match_to_reg_match(spec, ZONE_TO_REG, key->ct_zone, MLX5_CT_ZONE_MASK); @@ -1136,6 +1153,8 @@ mlx5_tc_ct_match_add(struct mlx5_tc_ct_priv *priv, MLX5_CT_LABELS_MASK); } + ct_attr->ct_state = ctstate; + return 0; } @@ -1390,10 +1409,157 @@ mlx5_tc_ct_free_pre_ct_tables(struct mlx5_ct_ft *ft) mlx5_tc_ct_free_pre_ct(ft, &ft->pre_ct); } +static void mlx5_tc_ct_set_match_dst_udp_port(struct mlx5_flow_spec *spec, u16 dst_port) +{ + void *headers_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, + outer_headers); + void *headers_v = MLX5_ADDR_OF(fte_match_param, spec->match_value, + outer_headers); + + MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, headers_c, udp_dport); + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, dst_port); + + spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS; +} + +static struct mlx5_tc_ct_trk_new_rule * +tc_ct_add_trk_new_rule(struct mlx5_ct_ft *ft, int port) +{ + struct mlx5_tc_ct_priv *ct_priv = ft->ct_priv; + struct mlx5_tc_ct_trk_new_rule *trk_new_rule; + struct mlx5_flow_destination dest = {}; + struct mlx5_flow_act flow_act = {}; + struct mlx5_flow_handle *rule; + struct mlx5_flow_spec *spec; + int err; + + trk_new_rule = kzalloc(sizeof(*trk_new_rule), GFP_KERNEL); + if (!trk_new_rule) + return ERR_PTR(-ENOMEM); + + spec = kzalloc(sizeof(*spec), GFP_KERNEL); + if (!spec) { + kfree(trk_new_rule); + return ERR_PTR(-ENOMEM); + } + + flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | + MLX5_FLOW_CONTEXT_ACTION_MOD_HDR; + flow_act.flags |= FLOW_ACT_IGNORE_FLOW_LEVEL; + flow_act.modify_hdr = ft->trk_new_rules.modify_hdr; + dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE; + dest.ft = ct_priv->post_ct; + + mlx5e_tc_match_to_reg_match(spec, ZONE_TO_REG, ft->zone, MLX5_CT_ZONE_MASK); + mlx5_tc_ct_set_match_dst_udp_port(spec, port); + + rule = mlx5_add_flow_rules(ct_priv->trk_new_ct, spec, &flow_act, &dest, 1); + if (IS_ERR(rule)) { + err = PTR_ERR(rule); + ct_dbg("Failed to add trk_new rule for udp port %d, err %d", port, err); + goto err_insert; + } + + kfree(spec); + trk_new_rule->flow_rule = rule; + list_add_tail(&trk_new_rule->list, &ft->trk_new_rules.rules); + return trk_new_rule; + +err_insert: + kfree(spec); + kfree(trk_new_rule); + return ERR_PTR(err); +} + +static void +tc_ct_del_trk_new_rule(struct mlx5_tc_ct_trk_new_rule *rule) +{ + list_del(&rule->list); + mlx5_del_flow_rules(rule->flow_rule); + kfree(rule); +} + +static int +tc_ct_init_trk_new_rules(struct mlx5_ct_ft *ft) +{ + struct mlx5_tc_ct_priv *ct_priv = ft->ct_priv; + struct mlx5_tc_ct_trk_new_rule *rule, *tmp; + struct mlx5e_tc_mod_hdr_acts mod_acts = {}; + struct mlx5_modify_hdr *mod_hdr; + struct mlx5e_priv *priv; + u32 ct_state; + int i, err; + + priv = netdev_priv(ct_priv->netdev); + + ct_state = MLX5_CT_STATE_TRK_BIT | MLX5_CT_STATE_NEW_BIT; + err = mlx5e_tc_match_to_reg_set(priv->mdev, &mod_acts, ct_priv->ns_type, + CTSTATE_TO_REG, ct_state); + if (err) { + ct_dbg("Failed to set register for ct trk_new"); + goto err_set_registers; + } + + err = mlx5e_tc_match_to_reg_set(priv->mdev, &mod_acts, ct_priv->ns_type, + ZONE_RESTORE_TO_REG, ft->zone_restore_id); + if (err) { + ct_dbg("Failed to set register for ct trk_new zone restore"); + goto err_set_registers; + } + + mod_hdr = mlx5_modify_header_alloc(priv->mdev, + ct_priv->ns_type, + mod_acts.num_actions, + mod_acts.actions); + if (IS_ERR(mod_hdr)) { + err 
= PTR_ERR(mod_hdr); + ct_dbg("Failed to create ct trk_new mod hdr"); + goto err_set_registers; + } + + ft->trk_new_rules.modify_hdr = mod_hdr; + dealloc_mod_hdr_actions(&mod_acts); + + for (i = 0; i < DEFAULT_UDP_PORTS; i++) { + int port = default_udp_ports[i]; + + rule = tc_ct_add_trk_new_rule(ft, port); + if (IS_ERR(rule)) + goto err_insert; + } + + return 0; + +err_insert: + list_for_each_entry_safe(rule, tmp, &ft->trk_new_rules.rules, list) + tc_ct_del_trk_new_rule(rule); + mlx5_modify_header_dealloc(priv->mdev, mod_hdr); +err_set_registers: + dealloc_mod_hdr_actions(&mod_acts); + netdev_warn(priv->netdev, + "Failed to offload ct trk_new flow, err %d\n", err); + return err; +} + +static void +tc_ct_cleanup_trk_new_rules(struct mlx5_ct_ft *ft) +{ + struct mlx5_tc_ct_priv *ct_priv = ft->ct_priv; + struct mlx5_tc_ct_trk_new_rule *rule, *tmp; + struct mlx5e_priv *priv; + + list_for_each_entry_safe(rule, tmp, &ft->trk_new_rules.rules, list) + tc_ct_del_trk_new_rule(rule); + + priv = netdev_priv(ct_priv->netdev); + mlx5_modify_header_dealloc(priv->mdev, ft->trk_new_rules.modify_hdr); +} + static struct mlx5_ct_ft * mlx5_tc_ct_add_ft_cb(struct mlx5_tc_ct_priv *ct_priv, u16 zone, struct nf_flowtable *nf_ft) { + struct nf_conntrack_zone ctzone; struct mlx5_ct_ft *ft; int err; @@ -1415,11 +1581,16 @@ mlx5_tc_ct_add_ft_cb(struct mlx5_tc_ct_priv *ct_priv, u16 zone, ft->nf_ft = nf_ft; ft->ct_priv = ct_priv; refcount_set(&ft->refcount, 1); + INIT_LIST_HEAD(&ft->trk_new_rules.rules); err = mlx5_tc_ct_alloc_pre_ct_tables(ft); if (err) goto err_alloc_pre_ct; + err = tc_ct_init_trk_new_rules(ft); + if (err) + goto err_add_trk_new_rules; + err = rhashtable_init(&ft->ct_entries_ht, &cts_ht_params); if (err) goto err_init; @@ -1429,6 +1600,14 @@ mlx5_tc_ct_add_ft_cb(struct mlx5_tc_ct_priv *ct_priv, u16 zone, if (err) goto err_insert; + nf_ct_zone_init(&ctzone, zone, NF_CT_DEFAULT_ZONE_DIR, 0); + ft->tmpl = nf_ct_tmpl_alloc(&init_net, &ctzone, GFP_KERNEL); + if (!ft->tmpl) + goto err_tmpl; + + __set_bit(IPS_CONFIRMED_BIT, &ft->tmpl->status); + nf_conntrack_get(&ft->tmpl->ct_general); + err = nf_flow_table_offload_add_cb(ft->nf_ft, mlx5_tc_ct_block_flow_offload, ft); if (err) @@ -1437,10 +1616,14 @@ mlx5_tc_ct_add_ft_cb(struct mlx5_tc_ct_priv *ct_priv, u16 zone, return ft; err_add_cb: + nf_conntrack_put(&ft->tmpl->ct_general); +err_tmpl: rhashtable_remove_fast(&ct_priv->zone_ht, &ft->node, zone_params); err_insert: rhashtable_destroy(&ft->ct_entries_ht); err_init: + tc_ct_cleanup_trk_new_rules(ft); +err_add_trk_new_rules: mlx5_tc_ct_free_pre_ct_tables(ft); err_alloc_pre_ct: mapping_remove(ct_priv->zone_mapping, ft->zone_restore_id); @@ -1471,6 +1654,8 @@ mlx5_tc_ct_del_ft_cb(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_ft *ft) rhashtable_free_and_destroy(&ft->ct_entries_ht, mlx5_tc_ct_flush_ft_entry, ct_priv); + nf_conntrack_put(&ft->tmpl->ct_general); + tc_ct_cleanup_trk_new_rules(ft); mlx5_tc_ct_free_pre_ct_tables(ft); mapping_remove(ct_priv->zone_mapping, ft->zone_restore_id); kfree(ft); @@ -2100,6 +2285,27 @@ mlx5_tc_ct_clean(struct mlx5_tc_ct_priv *ct_priv) kfree(ct_priv); } +static bool +mlx5e_tc_ct_restore_trk_new(struct mlx5_tc_ct_priv *ct_priv, + struct sk_buff *skb, + struct mlx5_ct_tuple *tuple, + u16 zone) +{ + struct mlx5_ct_ft *ft; + + if ((ntohs(tuple->port.dst) != IANA_VXLAN_UDP_PORT) && + (ntohs(tuple->port.dst) != ROCE_V2_UDP_DPORT)) + return false; + + ft = rhashtable_lookup_fast(&ct_priv->zone_ht, &zone, zone_params); + if (!ft) + return false; + + 
nf_conntrack_get(&ft->tmpl->ct_general); + nf_ct_set(skb, ft->tmpl, IP_CT_NEW); + return true; +} + bool mlx5e_tc_ct_restore_flow(struct mlx5_tc_ct_priv *ct_priv, struct sk_buff *skb, u8 zone_restore_id) @@ -2123,7 +2329,7 @@ mlx5e_tc_ct_restore_flow(struct mlx5_tc_ct_priv *ct_priv, entry = rhashtable_lookup_fast(&ct_priv->ct_tuples_nat_ht, &tuple, tuples_nat_ht_params); if (!entry) - return false; + return mlx5e_tc_ct_restore_trk_new(ct_priv, skb, &tuple, zone); tcf_ct_flow_table_restore_skb(skb, entry->restore_cookie); return true; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h index 6503b614337c..f730dbfbb02c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h @@ -10,6 +10,11 @@ #include "en.h" +#define MLX5_CT_STATE_ESTABLISHED_BIT BIT(1) +#define MLX5_CT_STATE_TRK_BIT BIT(2) +#define MLX5_CT_STATE_NAT_BIT BIT(3) +#define MLX5_CT_STATE_NEW_BIT BIT(4) + struct mlx5_flow_attr; struct mlx5e_tc_mod_hdr_acts; struct mlx5_rep_uplink_priv; @@ -28,6 +33,7 @@ struct mlx5_ct_attr { struct mlx5_ct_flow *ct_flow; struct nf_flowtable *nf_ft; u32 ct_labels_id; + u32 ct_state; }; #define zone_to_reg_ct {\ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c index 56aa39ac1a1c..5cf7c221404b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c @@ -3255,11 +3255,11 @@ static bool actions_match_supported(struct mlx5e_priv *priv, struct mlx5e_tc_flow *flow, struct netlink_ext_ack *extack) { - bool ct_flow = false, ct_clear = false; + bool ct_flow = false, ct_clear = false, ct_new = false; u32 actions; - ct_clear = flow->attr->ct_attr.ct_action & - TCA_CT_ACT_CLEAR; + ct_clear = flow->attr->ct_attr.ct_action & TCA_CT_ACT_CLEAR; + ct_new = flow->attr->ct_attr.ct_state & MLX5_CT_STATE_NEW_BIT; ct_flow = flow_flag_test(flow, CT) && !ct_clear; actions = flow->attr->action; @@ -3274,6 +3274,16 @@ static bool actions_match_supported(struct mlx5e_priv *priv, } } + if (ct_new && ct_flow) { + NL_SET_ERR_MSG_MOD(extack, "Can't offload ct_state new with action ct"); + return false; + } + + if (ct_new && flow->attr->dest_chain) { + NL_SET_ERR_MSG_MOD(extack, "Can't offload ct_state new with action goto"); + return false; + } + if (actions & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) return modify_header_match_supported(priv, &parse_attr->spec, flow_action, actions, From patchwork Fri Jan 8 05:30:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12005829 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 34947C433E6 for ; Fri, 8 Jan 2021 05:32:30 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0A437233FB for ; Fri, 8 Jan 2021 05:32:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via 
listexpand id S1727795AbhAHFc2 (ORCPT ); Fri, 8 Jan 2021 00:32:28 -0500 Received: from mail.kernel.org ([198.145.29.99]:35868 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727630AbhAHFcS (ORCPT ); Fri, 8 Jan 2021 00:32:18 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id EDEB22368A; Fri, 8 Jan 2021 05:31:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1610083861; bh=BWVnnNUhdzTuf2XjWmoXkpTWbTWBLQvs5rjTLjsPJEs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ndrbxJ+JK58uVO7014rfAnVWjatNCptCU1I1l2N5OYjHE8HX/iBpxqkwcKQwrRGyl 08Njw1BGJ+FfiF3SyGLqo4R4SRpTZ9Eo4R1OThNu+cngJHVt6WYs3lIVAtMXFM/ufd 8rJIvDg1eo1PDIW6WLuqqDLZi5Z5Ss7xJYcaXyGKCFxMYdiT/J8LOutEMlQqGWu+ng 1WIl72GiGEsUlEVj1n4qH6KI5otJDfGmdZzCWZfc6x1wV7frr5USwk39N/A+8QpXzO rq8iA7WxOzsOQKt4oL5O0dl7WYOKpOMsqT9SAQgsfLfvfqNq8X4VKF/T8/xlHJRzlv eoCMlYoRxesfw== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Paul Blakey , Maor Dickman , Roi Dayan , Saeed Mahameed Subject: [net-next 10/15] net/mlx5e: CT: Add support for mirroring Date: Thu, 7 Jan 2021 21:30:49 -0800 Message-Id: <20210108053054.660499-11-saeed@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210108053054.660499-1-saeed@kernel.org> References: <20210108053054.660499-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Paul Blakey Add support for mirroring before the CT action by splitting the pre ct rule. Mirror outputs are done first on the tc chain,prio table rule (the fwd rule), which will then forward to a per port fwd table. On this fwd table, we insert the original pre ct rule that forwards to ct/ct nat table. 
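Schematically (a sketch following the diagram style already used in tc_ct.c), the split for a mirror + ct rule becomes:

  tc chain,prio table rule ("fwd" rule)
        | mirror/split outputs applied here
        v
  per-port FWD table
        | original pre-CT rule
        v
  CT / CT-NAT tables
        |
        v
  post_ct table (remaining filter actions)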
Signed-off-by: Paul Blakey Signed-off-by: Maor Dickman Reviewed-by: Roi Dayan Signed-off-by: Saeed Mahameed --- .../ethernet/mellanox/mlx5/core/en/tc_ct.c | 4 +++ .../net/ethernet/mellanox/mlx5/core/en_tc.c | 25 ++++++++++--------- 2 files changed, 17 insertions(+), 12 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c index b0c357f755d4..9a189c06ab56 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c @@ -1825,6 +1825,10 @@ __mlx5_tc_ct_flow_offload(struct mlx5_tc_ct_priv *ct_priv, ct_flow->post_ct_attr->prio = 0; ct_flow->post_ct_attr->ft = ct_priv->post_ct; + /* Splits were handled before CT */ + if (ct_priv->ns_type == MLX5_FLOW_NAMESPACE_FDB) + ct_flow->post_ct_attr->esw_attr->split_count = 0; + ct_flow->post_ct_attr->inner_match_level = MLX5_MATCH_NONE; ct_flow->post_ct_attr->outer_match_level = MLX5_MATCH_NONE; ct_flow->post_ct_attr->action &= ~(MLX5_FLOW_CONTEXT_ACTION_DECAP); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c index 5cf7c221404b..89bb464850a1 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c @@ -1165,19 +1165,23 @@ mlx5e_tc_offload_fdb_rules(struct mlx5_eswitch *esw, if (flow_flag_test(flow, CT)) { mod_hdr_acts = &attr->parse_attr->mod_hdr_acts; - return mlx5_tc_ct_flow_offload(get_ct_priv(flow->priv), + rule = mlx5_tc_ct_flow_offload(get_ct_priv(flow->priv), flow, spec, attr, mod_hdr_acts); + } else { + rule = mlx5_eswitch_add_offloaded_rule(esw, spec, attr); } - rule = mlx5_eswitch_add_offloaded_rule(esw, spec, attr); if (IS_ERR(rule)) return rule; if (attr->esw_attr->split_count) { flow->rule[1] = mlx5_eswitch_add_fwd_rule(esw, spec, attr); if (IS_ERR(flow->rule[1])) { - mlx5_eswitch_del_offloaded_rule(esw, rule, attr); + if (flow_flag_test(flow, CT)) + mlx5_tc_ct_delete_flow(get_ct_priv(flow->priv), flow, attr); + else + mlx5_eswitch_del_offloaded_rule(esw, rule, attr); return flow->rule[1]; } } @@ -1192,14 +1196,14 @@ mlx5e_tc_unoffload_fdb_rules(struct mlx5_eswitch *esw, { flow_flag_clear(flow, OFFLOADED); + if (attr->esw_attr->split_count) + mlx5_eswitch_del_fwd_rule(esw, flow->rule[1], attr); + if (flow_flag_test(flow, CT)) { mlx5_tc_ct_delete_flow(get_ct_priv(flow->priv), flow, attr); return; } - if (attr->esw_attr->split_count) - mlx5_eswitch_del_fwd_rule(esw, flow->rule[1], attr); - mlx5_eswitch_del_offloaded_rule(esw, flow->rule[0], attr); } @@ -3264,7 +3268,8 @@ static bool actions_match_supported(struct mlx5e_priv *priv, actions = flow->attr->action; if (mlx5e_is_eswitch_flow(flow)) { - if (flow->attr->esw_attr->split_count && ct_flow) { + if (flow->attr->esw_attr->split_count && ct_flow && + !MLX5_CAP_GEN(flow->attr->esw_attr->in_mdev, reg_c_preserve)) { /* All registers used by ct are cleared when using * split rules. 
*/ @@ -4373,6 +4378,7 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv, return err; flow_flag_set(flow, CT); + esw_attr->split_count = esw_attr->out_count; break; default: NL_SET_ERR_MSG_MOD(extack, "The offload action is not supported"); @@ -4432,11 +4438,6 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv, return -EOPNOTSUPP; } - if (attr->action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) { - NL_SET_ERR_MSG_MOD(extack, - "Mirroring goto chain rules isn't supported"); - return -EOPNOTSUPP; - } attr->action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; } From patchwork Fri Jan 8 05:30:50 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12005827 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B8110C433E0 for ; Fri, 8 Jan 2021 05:32:28 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8DF88233FB for ; Fri, 8 Jan 2021 05:32:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727816AbhAHFc2 (ORCPT ); Fri, 8 Jan 2021 00:32:28 -0500 Received: from mail.kernel.org ([198.145.29.99]:35870 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727662AbhAHFcS (ORCPT ); Fri, 8 Jan 2021 00:32:18 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 6B9CC236F9; Fri, 8 Jan 2021 05:31:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1610083861; bh=pqjYbU6uIaglQnOunWZ/J1/q10k+vBChOjvynqIvulE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=rriGkouvKvIXoBJwynZHdq8Tl2QmuxM1tftDLgIITtoE+uGRzdl0S6uY7Qrx1XsxG 6JyoDfVYujOUbC22M67pyapEY30XH/cX8VUjJqP6VQaP0tV4lizv4Ux4Ku66gHEi3D bYKXqN7h7rElkHbozEC+1zjwiHsXID2ol61MOp4Ex3TNDTZd8Ddvwielkbt03rUk0B 4fvHC/gZ5cK6QlvxrJgsFiALoZXl+dy+uTohZSBcHhcvMuyC3W6oYgJ8aB+mzqNow0 eGqMahuTd4cuf9u71HxL/YOVYTu/j7PBZiOyY7lxzMC+fDHhnvjxmfDIUAwUPr1YOb 5u5e0FjATSnVw== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Roi Dayan , Paul Blakey , Saeed Mahameed Subject: [net-next 11/15] net/mlx5e: CT, Avoid false lock depenency warning Date: Thu, 7 Jan 2021 21:30:50 -0800 Message-Id: <20210108053054.660499-12-saeed@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210108053054.660499-1-saeed@kernel.org> References: <20210108053054.660499-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Roi Dayan To avoid false lock dependency warning set the ct_entries_ht lock class different than the lock class of the ht being used when deleting last flow from a group and then deleting a group, we get into del_sw_flow_group() which call rhashtable_destroy on fg->ftes_hash which will take ht->mutex but it's different than the ht->mutex here. 
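The idiom applied by the diff below is the standard way to break such a false-positive chain: give the hashtable's mutex its own lockdep class right after rhashtable_init(). A generic sketch (names illustrative, assuming <linux/rhashtable.h> and <linux/lockdep.h>); the warning being silenced is quoted next:

static struct lock_class_key my_ht_lock_key;

static int init_my_ht(struct rhashtable *ht, const struct rhashtable_params *params)
{
	int err = rhashtable_init(ht, params);

	if (err)
		return err;

	/* Distinct class: this table's mutex is no longer conflated with
	 * every other rhashtable mutex (e.g. fg->ftes_hash mentioned above).
	 */
	lockdep_set_class(&ht->mutex, &my_ht_lock_key);
	return 0;
}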
====================================================== WARNING: possible circular locking dependency detected 5.10.0-rc2+ #8 Tainted: G O ------------------------------------------------------ revalidator23/24009 is trying to acquire lock: ffff888128d83828 (&node->lock){++++}-{3:3}, at: mlx5_del_flow_rules+0x83/0x7a0 [mlx5_core] but task is already holding lock: ffff8881081ef518 (&ht->mutex){+.+.}-{3:3}, at: rhashtable_free_and_destroy+0x37/0x720 which lock already depends on the new lock. Fixes: 9808dd0a2aee ("net/mlx5e: CT: Use rhashtable's ct entries instead of a separate list") Signed-off-by: Roi Dayan Reviewed-by: Paul Blakey Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c index 9a189c06ab56..510eab3204d4 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c @@ -1555,6 +1555,14 @@ tc_ct_cleanup_trk_new_rules(struct mlx5_ct_ft *ft) mlx5_modify_header_dealloc(priv->mdev, ft->trk_new_rules.modify_hdr); } +/* To avoid false lock dependency warning set the ct_entries_ht lock + * class different than the lock class of the ht being used when deleting + * last flow from a group and then deleting a group, we get into del_sw_flow_group() + * which call rhashtable_destroy on fg->ftes_hash which will take ht->mutex but + * it's different than the ht->mutex here. + */ +static struct lock_class_key ct_entries_ht_lock_key; + static struct mlx5_ct_ft * mlx5_tc_ct_add_ft_cb(struct mlx5_tc_ct_priv *ct_priv, u16 zone, struct nf_flowtable *nf_ft) @@ -1595,6 +1603,8 @@ mlx5_tc_ct_add_ft_cb(struct mlx5_tc_ct_priv *ct_priv, u16 zone, if (err) goto err_init; + lockdep_set_class(&ft->ct_entries_ht.mutex, &ct_entries_ht_lock_key); + err = rhashtable_insert_fast(&ct_priv->zone_ht, &ft->node, zone_params); if (err) From patchwork Fri Jan 8 05:30:51 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12005779 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2C522C433E6 for ; Fri, 8 Jan 2021 05:32:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id F1AAB233FC for ; Fri, 8 Jan 2021 05:32:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727712AbhAHFcT (ORCPT ); Fri, 8 Jan 2021 00:32:19 -0500 Received: from mail.kernel.org ([198.145.29.99]:35872 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727676AbhAHFcS (ORCPT ); Fri, 8 Jan 2021 00:32:18 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id D38D4236FB; Fri, 8 Jan 2021 05:31:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1610083862; bh=evc7/n3g4A5JipB6pHN8qe4Dq/f2isfC+dQPtgBVLYg=; 
h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=rFSiap0YrYDw3eWqs8vIyW0O3LblIK2RTZhEJ/3BnwSFau2BJ0pNZ0K90+wsil74J bvXIpxc0KJVgtQO81HHPDcqGZUHZHGvhHdDR7CKTDrP6cf8iWFPngO2SfsdDZhn4yh 1vyRcN4D3+y51llSrnoyJfSNbausjinXQUgfG8ezd9+cBbFNugxqy0+NAyB7O5jrrM 6AA3/DgpVxdDERV+ZhUMZ4KDssE54Ypky9ExxFsbNXROlNaXU/gJ+68eJd00ZSq64d fl9VI/XcUlsSXMQQMbVFy7Kke0FT1W3CqZ3CcZpSRr0kT3jycYbMf2qGwAdcy+toX+ WplLukruXosxw== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Tariq Toukan , Raed Salem , Huy Nguyen , Saeed Mahameed Subject: [net-next 12/15] net/mlx5e: IPsec, Enclose csum logic under ipsec config Date: Thu, 7 Jan 2021 21:30:51 -0800 Message-Id: <20210108053054.660499-13-saeed@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210108053054.660499-1-saeed@kernel.org> References: <20210108053054.660499-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Tariq Toukan All IPsec logic should be wrapped under the compile flag, including its checksum logic. Introduce an inline function in ipsec datapath header, with a corresponding stub. Signed-off-by: Tariq Toukan Reviewed-by: Raed Salem Reviewed-by: Huy Nguyen Signed-off-by: Saeed Mahameed --- .../ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h | 10 ++++++++++ drivers/net/ethernet/mellanox/mlx5/core/en_tx.c | 3 +-- 2 files changed, 11 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h index 9df9b9a8e09b..fabf4b6b2b84 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h @@ -87,6 +87,11 @@ static inline bool mlx5e_ipsec_is_tx_flow(struct mlx5e_accel_tx_ipsec_state *ips return ipsec_st->x; } +static inline bool mlx5e_ipsec_eseg_meta(struct mlx5_wqe_eth_seg *eseg) +{ + return eseg->flow_table_metadata & cpu_to_be32(MLX5_ETH_WQE_FT_META_IPSEC); +} + void mlx5e_ipsec_tx_build_eseg(struct mlx5e_priv *priv, struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg); #else @@ -96,6 +101,11 @@ void mlx5e_ipsec_offload_handle_rx_skb(struct net_device *netdev, struct mlx5_cqe64 *cqe) {} +static inline bool mlx5e_ipsec_eseg_meta(struct mlx5_wqe_eth_seg *eseg) +{ + return false; +} + static inline bool mlx5_ipsec_is_rx_flow(struct mlx5_cqe64 *cqe) { return false; } #endif /* CONFIG_MLX5_EN_IPSEC */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c index e47e2a0059d0..22be21d5cdad 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c @@ -241,9 +241,8 @@ mlx5e_txwqe_build_eseg_csum(struct mlx5e_txqsq *sq, struct sk_buff *skb, eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM | MLX5_ETH_WQE_L4_CSUM; sq->stats->csum_partial++; #endif - } else if (unlikely(eseg->flow_table_metadata & cpu_to_be32(MLX5_ETH_WQE_FT_META_IPSEC))) { + } else if (unlikely(mlx5e_ipsec_eseg_meta(eseg))) { ipsec_txwqe_build_eseg_csum(sq, skb, eseg); - } else sq->stats->csum_none++; } From patchwork Fri Jan 8 05:30:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12005781 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: 
X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EEC16C433E9 for ; Fri, 8 Jan 2021 05:32:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id AE80F233EE for ; Fri, 8 Jan 2021 05:32:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727729AbhAHFcU (ORCPT ); Fri, 8 Jan 2021 00:32:20 -0500 Received: from mail.kernel.org ([198.145.29.99]:35874 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727714AbhAHFcT (ORCPT ); Fri, 8 Jan 2021 00:32:19 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 5429523716; Fri, 8 Jan 2021 05:31:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1610083862; bh=RLY82qF651aA8yYnWKLwQFuD9i1A3WESheA1hYF1+2g=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Qj/tuy/GD/3VPID8FXolc0wzNE9l1iOoFBgkpJTKYQNCla8qcvNxw8a1DpVkNnbmJ GKj15p6/EwqeJiYZ4GcDfu4Mt4ns64DIxsbqylNRefGRKlhHA+da7vXDQ5rQ84J6Fz jTIM+4ODCTJBTfEmhMU+BaFcDzmkeD5tvfK4sMZ0cYho5kbO0PNYQSydz4Y0ghmhHj PvEV7yMcHfmdbZ/987fUf/S/jjQd3HcBTXL8yiAWDj38qKRGpmf5rWyw3G+pd1w1ym mBev1wsJuszlPAeAba0m/vggrVrOw8P6lxWLdKkamEwBxxPFz/me0a7h6eO3ZyEKC1 iUtlAiB3IgBEA== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Tariq Toukan , Huy Nguyen , Saeed Mahameed Subject: [net-next 13/15] net/mlx5e: IPsec, Avoid unreachable return Date: Thu, 7 Jan 2021 21:30:52 -0800 Message-Id: <20210108053054.660499-14-saeed@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210108053054.660499-1-saeed@kernel.org> References: <20210108053054.660499-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Tariq Toukan Simple code improvement: move the default return statement under the #else block.
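The cleanup is easiest to see on a generic example. The sketch below is not the mlx5 code (CONFIG_FOO and foo_check() stand in for the real Kconfig symbol and helper); it only shows why the trailing return is dead code once the flag is defined, and how moving it under #else leaves exactly one reachable return per configuration, as the diff that follows does.

#include <stdbool.h>

/* Hypothetical stand-ins for the real Kconfig symbol and check. */
#define CONFIG_FOO 1
static bool foo_check(void) { return true; }

/* Before: with CONFIG_FOO defined, the function returns inside the
 * #ifdef block and the final return is never reached.
 */
static inline bool foo_enabled_before(void)
{
#ifdef CONFIG_FOO
	return foo_check();
#endif

	return false; /* unreachable when CONFIG_FOO is defined */
}

/* After: the default return sits in the #else branch, so each
 * configuration compiles exactly one reachable return statement.
 */
static inline bool foo_enabled_after(void)
{
#ifdef CONFIG_FOO
	return foo_check();
#else
	return false;
#endif
}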
Signed-off-by: Tariq Toukan Reviewed-by: Huy Nguyen Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h index 899b98aca0d3..fb89b24deb2b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h @@ -142,9 +142,9 @@ static inline bool mlx5e_accel_tx_is_ipsec_flow(struct mlx5e_accel_tx_state *sta { #ifdef CONFIG_MLX5_EN_IPSEC return mlx5e_ipsec_is_tx_flow(&state->ipsec); -#endif - +#else return false; +#endif } static inline unsigned int mlx5e_accel_tx_ids_len(struct mlx5e_txqsq *sq, From patchwork Fri Jan 8 05:30:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12005823 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D7C9EC433E0 for ; Fri, 8 Jan 2021 05:32:22 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B197A233EE for ; Fri, 8 Jan 2021 05:32:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727746AbhAHFcW (ORCPT ); Fri, 8 Jan 2021 00:32:22 -0500 Received: from mail.kernel.org ([198.145.29.99]:35876 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727676AbhAHFcU (ORCPT ); Fri, 8 Jan 2021 00:32:20 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id C1DAF233FB; Fri, 8 Jan 2021 05:31:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1610083863; bh=McerwxI2K1bc0kkLDTiM5M/ec5W4HuZgMqRKaKf1jRs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=jmonhGxLfv3QisUqbHPmZzWUOBfJabx36vyz6Wr2DI5qSY+jcrRUt2RxQibKYBsQU dWOL71D74qvCCtV4174VKyLVsoxAWvkW66093kV0mt42WWfl25aP2GyfvWFLV0OhD1 348IzcWXw1qRVfNGT1n4czImheVxhZiBM5LbT0cbFhtArNDcDRrFpmab9XVdm1cA+V kkQOdQJGzL9xyRnLui0LuKomjkxb9iYJDhOrC8ASS/BNdZpjyW8QhNLlrMDp646uO1 toQU23lHUAVgwD96Q2vUxak7fbckDgbIWMHCGki6+mhOEx0eemZmnO2B6GvLgd8QY6 u6tiDYYRLM6fw== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Tariq Toukan , Raed Salem , Huy Nguyen , Saeed Mahameed Subject: [net-next 14/15] net/mlx5e: IPsec, Inline feature_check fast-path function Date: Thu, 7 Jan 2021 21:30:53 -0800 Message-Id: <20210108053054.660499-15-saeed@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210108053054.660499-1-saeed@kernel.org> References: <20210108053054.660499-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Tariq Toukan Feature check functions are in the TX fast-path of all SKBs, not only IPsec traffic. Move the IPsec feature check function into a header and turn it inline. 
Use a stub and clean the config flag condition in Eth main driver file. Signed-off-by: Tariq Toukan Reviewed-by: Raed Salem Reviewed-by: Huy Nguyen Signed-off-by: Saeed Mahameed --- .../mellanox/mlx5/core/en_accel/ipsec_rxtx.c | 14 -------------- .../mellanox/mlx5/core/en_accel/ipsec_rxtx.h | 19 +++++++++++++++++-- .../net/ethernet/mellanox/mlx5/core/en_main.c | 2 -- 3 files changed, 17 insertions(+), 18 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c index a9b45606dbdb..a97e8d205094 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c @@ -497,20 +497,6 @@ void mlx5e_ipsec_offload_handle_rx_skb(struct net_device *netdev, } } -bool mlx5e_ipsec_feature_check(struct sk_buff *skb, struct net_device *netdev, - netdev_features_t features) -{ - struct sec_path *sp = skb_sec_path(skb); - struct xfrm_state *x; - - if (sp && sp->len) { - x = sp->xvec[0]; - if (x && x->xso.offload_handle) - return true; - } - return false; -} - void mlx5e_ipsec_build_inverse_table(void) { u16 mss_inv; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h index fabf4b6b2b84..3e80742a3caf 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h @@ -57,8 +57,6 @@ struct sk_buff *mlx5e_ipsec_handle_rx_skb(struct net_device *netdev, struct sk_buff *skb, u32 *cqe_bcnt); void mlx5e_ipsec_inverse_table_init(void); -bool mlx5e_ipsec_feature_check(struct sk_buff *skb, struct net_device *netdev, - netdev_features_t features); void mlx5e_ipsec_set_iv_esn(struct sk_buff *skb, struct xfrm_state *x, struct xfrm_offload *xo); void mlx5e_ipsec_set_iv(struct sk_buff *skb, struct xfrm_state *x, @@ -94,6 +92,21 @@ static inline bool mlx5e_ipsec_eseg_meta(struct mlx5_wqe_eth_seg *eseg) void mlx5e_ipsec_tx_build_eseg(struct mlx5e_priv *priv, struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg); + +static inline bool mlx5e_ipsec_feature_check(struct sk_buff *skb, struct net_device *netdev, + netdev_features_t features) +{ + struct sec_path *sp = skb_sec_path(skb); + + if (sp && sp->len) { + struct xfrm_state *x = sp->xvec[0]; + + if (x && x->xso.offload_handle) + return true; + } + return false; +} + #else static inline void mlx5e_ipsec_offload_handle_rx_skb(struct net_device *netdev, @@ -107,6 +120,8 @@ static inline bool mlx5e_ipsec_eseg_meta(struct mlx5_wqe_eth_seg *eseg) } static inline bool mlx5_ipsec_is_rx_flow(struct mlx5_cqe64 *cqe) { return false; } +static inline bool mlx5e_ipsec_feature_check(struct sk_buff *skb, struct net_device *netdev, + netdev_features_t features) { return false; } #endif /* CONFIG_MLX5_EN_IPSEC */ #endif /* __MLX5E_IPSEC_RXTX_H__ */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index f27f509ab028..c00eef14ee6c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -4375,10 +4375,8 @@ netdev_features_t mlx5e_features_check(struct sk_buff *skb, features = vlan_features_check(skb, features); features = vxlan_features_check(skb, features); -#ifdef CONFIG_MLX5_EN_IPSEC if (mlx5e_ipsec_feature_check(skb, netdev, features)) return features; -#endif /* Validate if the tunneled packet is being offloaded by HW */ if 
(skb->encapsulation && From patchwork Fri Jan 8 05:30:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12005835 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1411EC433DB for ; Fri, 8 Jan 2021 05:32:36 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D5418233FB for ; Fri, 8 Jan 2021 05:32:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727854AbhAHFcf (ORCPT ); Fri, 8 Jan 2021 00:32:35 -0500 Received: from mail.kernel.org ([198.145.29.99]:35860 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727835AbhAHFcd (ORCPT ); Fri, 8 Jan 2021 00:32:33 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 3EC5923435; Fri, 8 Jan 2021 05:31:03 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1610083863; bh=xfJYrAsNZVw0yeAgwqXIgmL8mQA4xtx+Jcljb4OuLxI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=cfTAOnl8Uex8/5RNaTEmmaiyz28atRtcqitv7J0XXwp8+Dh6fiix4nDA/nK+5cPgD 3OrUhOnqk4bKusOmOQiPsRicZJ6d9oNt+Hy4hOCe3f+BoB4x4xhB2+m2Q0n0oCbNuy mycoDYZDCDJUqpYpcybxGBpRyF9EByav0Ii6gbhBJZ9SbF43nUBGWrzbUK2suZfkr0 +FYCC/Py5H2EpLjzSoEc6fCsB0SkH4RyT9GZFUm60T2WaN6XrvCxlwAnqM14ZHiaQn 5mHk2QbNSaQItFiFNwD+30Ul+wNj9HjA7qKHi19PQEGTWadY+m1FVALLvJkY9cw/LL T3lBnM/AFSWFQ== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Tariq Toukan , Raed Salem , Huy Nguyen , Saeed Mahameed Subject: [net-next 15/15] net/mlx5e: IPsec, Remove unnecessary config flag usage Date: Thu, 7 Jan 2021 21:30:54 -0800 Message-Id: <20210108053054.660499-16-saeed@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210108053054.660499-1-saeed@kernel.org> References: <20210108053054.660499-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Tariq Toukan MLX5_IPSEC_DEV() is always defined, so there is no need to guard its callers with the CONFIG_MLX5_EN_IPSEC config flag, especially in the slow path.
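The reason the guards can simply be dropped is the convention of keeping feature-test helpers available in every configuration. The sketch below shows the general pattern only; foo_offload_active(), struct foo_dev and CONFIG_FOO_OFFLOAD are hypothetical names and not the actual definition of MLX5_IPSEC_DEV().

#include <linux/errno.h>
#include <linux/types.h>

struct foo_dev;

#ifdef CONFIG_FOO_OFFLOAD
/* Real query, implemented by the offload code. */
bool foo_offload_active(const struct foo_dev *dev);
#else
/* Stub: constant false when the feature is compiled out, so the
 * compiler can discard any branch guarded by it.
 */
static inline bool foo_offload_active(const struct foo_dev *dev)
{
	return false;
}
#endif

/* Call site needs no #ifdef: it compiles in both configurations and is
 * trivially optimized away when the feature is off, which is what makes
 * removing the config guards around such checks safe.
 */
static int foo_validate_rq(struct foo_dev *dev)
{
	if (foo_offload_active(dev))
		return -EINVAL;

	return 0;
}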
Signed-off-by: Tariq Toukan Reviewed-by: Raed Salem Reviewed-by: Huy Nguyen Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 2 -- drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 2 -- 2 files changed, 4 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index c00eef14ee6c..5309bc9f3197 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -2068,10 +2068,8 @@ static void mlx5e_build_rq_frags_info(struct mlx5_core_dev *mdev, u32 buf_size = 0; int i; -#ifdef CONFIG_MLX5_EN_IPSEC if (MLX5_IPSEC_DEV(mdev)) byte_count += MLX5E_METADATA_ETHER_LEN; -#endif if (mlx5e_rx_is_linear_skb(params, xsk)) { int frag_stride; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c index 7f5851c61218..cb8e3d2b4750 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c @@ -1786,12 +1786,10 @@ int mlx5e_rq_set_handlers(struct mlx5e_rq *rq, struct mlx5e_params *params, bool rq->dealloc_wqe = mlx5e_dealloc_rx_mpwqe; rq->handle_rx_cqe = priv->profile->rx_handlers->handle_rx_cqe_mpwqe; -#ifdef CONFIG_MLX5_EN_IPSEC if (MLX5_IPSEC_DEV(mdev)) { netdev_err(netdev, "MPWQE RQ with IPSec offload not supported\n"); return -EINVAL; } -#endif if (!rq->handle_rx_cqe) { netdev_err(netdev, "RX handler of MPWQE RQ is not set\n"); return -EINVAL;