From patchwork Wed Nov 17 04:33:43 2021
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 12623683
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Saeed Mahameed, Tariq Toukan
Subject: [net-next v0 01/15] net/mlx5e: Support ethtool cq mode
Date: Tue, 16 Nov 2021 20:33:43 -0800
Message-Id: <20211117043357.345072-2-saeed@kernel.org>
In-Reply-To: <20211117043357.345072-1-saeed@kernel.org>

From: Saeed Mahameed

Add support for ethtool coalesce cq mode set and get.
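(Not part of the patch: a minimal sketch of the shape a driver takes when wiring
up the extended coalesce callbacks. The four-argument get/set_coalesce
signatures, ETHTOOL_COALESCE_USE_CQE and the use_cqe_mode_rx/tx fields are
taken from the diff below; the driver name and its internals are hypothetical.
As an assumption to verify against your ethtool version, recent ethtool
userspace exposes these knobs as "ethtool -C <dev> cqe-mode-rx on cqe-mode-tx off".)

#include <linux/ethtool.h>
#include <linux/netdevice.h>

/* Illustrative only, not mlx5 code. */
static int myedrv_get_coalesce(struct net_device *dev,
			       struct ethtool_coalesce *coal,
			       struct kernel_ethtool_coalesce *kernel_coal,
			       struct netlink_ext_ack *extack)
{
	kernel_coal->use_cqe_mode_rx = true;	/* report the current RX CQ mode */
	kernel_coal->use_cqe_mode_tx = false;	/* report the current TX CQ mode */
	return 0;
}

static int myedrv_set_coalesce(struct net_device *dev,
			       struct ethtool_coalesce *coal,
			       struct kernel_ethtool_coalesce *kernel_coal,
			       struct netlink_ext_ack *extack)
{
	if (kernel_coal->use_cqe_mode_rx || kernel_coal->use_cqe_mode_tx) {
		/* reconfigure CQ moderation to start from CQE here */
	}
	return 0;
}

static const struct ethtool_ops myedrv_ethtool_ops = {
	/* Without ETHTOOL_COALESCE_USE_CQE the ethtool core rejects cqe-mode requests */
	.supported_coalesce_params = ETHTOOL_COALESCE_USECS |
				     ETHTOOL_COALESCE_USE_ADAPTIVE |
				     ETHTOOL_COALESCE_USE_CQE,
	.get_coalesce = myedrv_get_coalesce,
	.set_coalesce = myedrv_set_coalesce,
};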
Signed-off-by: Saeed Mahameed Reviewed-by: Tariq Toukan --- drivers/net/ethernet/mellanox/mlx5/core/en.h | 7 ++- .../ethernet/mellanox/mlx5/core/en_ethtool.c | 49 ++++++++++++++++--- .../net/ethernet/mellanox/mlx5/core/en_rep.c | 4 +- .../mellanox/mlx5/core/ipoib/ethtool.c | 4 +- 4 files changed, 52 insertions(+), 12 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h index f0ac6b0d9653..48b12ee44b8d 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h @@ -1148,9 +1148,12 @@ void mlx5e_ethtool_get_channels(struct mlx5e_priv *priv, int mlx5e_ethtool_set_channels(struct mlx5e_priv *priv, struct ethtool_channels *ch); int mlx5e_ethtool_get_coalesce(struct mlx5e_priv *priv, - struct ethtool_coalesce *coal); + struct ethtool_coalesce *coal, + struct kernel_ethtool_coalesce *kernel_coal); int mlx5e_ethtool_set_coalesce(struct mlx5e_priv *priv, - struct ethtool_coalesce *coal); + struct ethtool_coalesce *coal, + struct kernel_ethtool_coalesce *kernel_coal, + struct netlink_ext_ack *extack); int mlx5e_ethtool_get_link_ksettings(struct mlx5e_priv *priv, struct ethtool_link_ksettings *link_ksettings); int mlx5e_ethtool_set_link_ksettings(struct mlx5e_priv *priv, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c index c2ea5fad48dd..45bdfcb3dcc7 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c @@ -511,7 +511,8 @@ static int mlx5e_set_channels(struct net_device *dev, } int mlx5e_ethtool_get_coalesce(struct mlx5e_priv *priv, - struct ethtool_coalesce *coal) + struct ethtool_coalesce *coal, + struct kernel_ethtool_coalesce *kernel_coal) { struct dim_cq_moder *rx_moder, *tx_moder; @@ -528,6 +529,11 @@ int mlx5e_ethtool_get_coalesce(struct mlx5e_priv *priv, coal->tx_max_coalesced_frames = tx_moder->pkts; coal->use_adaptive_tx_coalesce = priv->channels.params.tx_dim_enabled; + kernel_coal->use_cqe_mode_rx = + MLX5E_GET_PFLAG(&priv->channels.params, MLX5E_PFLAG_RX_CQE_BASED_MODER); + kernel_coal->use_cqe_mode_tx = + MLX5E_GET_PFLAG(&priv->channels.params, MLX5E_PFLAG_TX_CQE_BASED_MODER); + return 0; } @@ -538,7 +544,7 @@ static int mlx5e_get_coalesce(struct net_device *netdev, { struct mlx5e_priv *priv = netdev_priv(netdev); - return mlx5e_ethtool_get_coalesce(priv, coal); + return mlx5e_ethtool_get_coalesce(priv, coal, kernel_coal); } #define MLX5E_MAX_COAL_TIME MLX5_MAX_CQ_PERIOD @@ -578,14 +584,26 @@ mlx5e_set_priv_channels_rx_coalesce(struct mlx5e_priv *priv, struct ethtool_coal } } +/* convert a boolean value of cq_mode to mlx5 period mode + * true : MLX5_CQ_PERIOD_MODE_START_FROM_CQE + * false : MLX5_CQ_PERIOD_MODE_START_FROM_EQE + */ +static int cqe_mode_to_period_mode(bool val) +{ + return val ? 
MLX5_CQ_PERIOD_MODE_START_FROM_CQE : MLX5_CQ_PERIOD_MODE_START_FROM_EQE; +} + int mlx5e_ethtool_set_coalesce(struct mlx5e_priv *priv, - struct ethtool_coalesce *coal) + struct ethtool_coalesce *coal, + struct kernel_ethtool_coalesce *kernel_coal, + struct netlink_ext_ack *extack) { struct dim_cq_moder *rx_moder, *tx_moder; struct mlx5_core_dev *mdev = priv->mdev; struct mlx5e_params new_params; bool reset_rx, reset_tx; bool reset = true; + u8 cq_period_mode; int err = 0; if (!MLX5_CAP_GEN(mdev, cq_moderation)) @@ -605,6 +623,12 @@ int mlx5e_ethtool_set_coalesce(struct mlx5e_priv *priv, return -ERANGE; } + if ((kernel_coal->use_cqe_mode_rx || kernel_coal->use_cqe_mode_tx) && + !MLX5_CAP_GEN(priv->mdev, cq_period_start_from_cqe)) { + NL_SET_ERR_MSG_MOD(extack, "cqe_mode_rx/tx is not supported on this device"); + return -EOPNOTSUPP; + } + mutex_lock(&priv->state_lock); new_params = priv->channels.params; @@ -621,6 +645,18 @@ int mlx5e_ethtool_set_coalesce(struct mlx5e_priv *priv, reset_rx = !!coal->use_adaptive_rx_coalesce != priv->channels.params.rx_dim_enabled; reset_tx = !!coal->use_adaptive_tx_coalesce != priv->channels.params.tx_dim_enabled; + cq_period_mode = cqe_mode_to_period_mode(kernel_coal->use_cqe_mode_rx); + if (cq_period_mode != rx_moder->cq_period_mode) { + mlx5e_set_rx_cq_mode_params(&new_params, cq_period_mode); + reset_rx = true; + } + + cq_period_mode = cqe_mode_to_period_mode(kernel_coal->use_cqe_mode_tx); + if (cq_period_mode != tx_moder->cq_period_mode) { + mlx5e_set_tx_cq_mode_params(&new_params, cq_period_mode); + reset_tx = true; + } + if (reset_rx) { u8 mode = MLX5E_GET_PFLAG(&new_params, MLX5E_PFLAG_RX_CQE_BASED_MODER); @@ -656,9 +692,9 @@ static int mlx5e_set_coalesce(struct net_device *netdev, struct kernel_ethtool_coalesce *kernel_coal, struct netlink_ext_ack *extack) { - struct mlx5e_priv *priv = netdev_priv(netdev); + struct mlx5e_priv *priv = netdev_priv(netdev); - return mlx5e_ethtool_set_coalesce(priv, coal); + return mlx5e_ethtool_set_coalesce(priv, coal, kernel_coal, extack); } static void ptys2ethtool_supported_link(struct mlx5_core_dev *mdev, @@ -2358,7 +2394,8 @@ static void mlx5e_get_rmon_stats(struct net_device *netdev, const struct ethtool_ops mlx5e_ethtool_ops = { .supported_coalesce_params = ETHTOOL_COALESCE_USECS | ETHTOOL_COALESCE_MAX_FRAMES | - ETHTOOL_COALESCE_USE_ADAPTIVE, + ETHTOOL_COALESCE_USE_ADAPTIVE | + ETHTOOL_COALESCE_USE_CQE, .get_drvinfo = mlx5e_get_drvinfo, .get_link = ethtool_op_get_link, .get_link_ext_state = mlx5e_get_link_ext_state, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c index e58a9ec42553..8c81aeba07db 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c @@ -258,7 +258,7 @@ static int mlx5e_rep_get_coalesce(struct net_device *netdev, { struct mlx5e_priv *priv = netdev_priv(netdev); - return mlx5e_ethtool_get_coalesce(priv, coal); + return mlx5e_ethtool_get_coalesce(priv, coal, kernel_coal); } static int mlx5e_rep_set_coalesce(struct net_device *netdev, @@ -268,7 +268,7 @@ static int mlx5e_rep_set_coalesce(struct net_device *netdev, { struct mlx5e_priv *priv = netdev_priv(netdev); - return mlx5e_ethtool_set_coalesce(priv, coal); + return mlx5e_ethtool_set_coalesce(priv, coal, kernel_coal, extack); } static u32 mlx5e_rep_get_rxfh_key_size(struct net_device *netdev) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c index 
962d41418ce7..f23e33ac9c6b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c
@@ -105,7 +105,7 @@ static int mlx5i_set_coalesce(struct net_device *netdev,
 {
 	struct mlx5e_priv *priv = mlx5i_epriv(netdev);
 
-	return mlx5e_ethtool_set_coalesce(priv, coal);
+	return mlx5e_ethtool_set_coalesce(priv, coal, kernel_coal, extack);
 }
 
 static int mlx5i_get_coalesce(struct net_device *netdev,
@@ -115,7 +115,7 @@ static int mlx5i_get_coalesce(struct net_device *netdev,
 {
 	struct mlx5e_priv *priv = mlx5i_epriv(netdev);
 
-	return mlx5e_ethtool_get_coalesce(priv, coal);
+	return mlx5e_ethtool_get_coalesce(priv, coal, kernel_coal);
 }
 
 static int mlx5i_get_ts_info(struct net_device *netdev,

From patchwork Wed Nov 17 04:33:44 2021
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 12623685
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Saeed Mahameed, Shay Drory, Moshe Shemesh
Subject: [net-next v0 02/15] net/mlx5: Fix format-security build warnings
Date: Tue, 16 Nov 2021 20:33:44 -0800
Message-Id: <20211117043357.345072-3-saeed@kernel.org>
In-Reply-To: <20211117043357.345072-1-saeed@kernel.org>

From: Saeed Mahameed

Treat the string as an argument to avoid this:
drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c:482:5: error: format string is not a string literal (potentially insecure)
                            name);
                            ^~~~
drivers/net/ethernet/mellanox/mlx5/core/en_stats.c:2079:4: error: format string is not a string literal (potentially insecure)
                        ptp_ch_stats_desc[i].format);
                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~

Signed-off-by: Saeed Mahameed
Reviewed-by: Shay Drory
Reviewed-by: Moshe Shemesh
---
 drivers/net/ethernet/mellanox/mlx5/core/en_stats.c | 2 +-
 drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
index 2a9bfc3ffa2e..3c91a11e27ad 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
@@ -2076,7 +2076,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_FILL_STRS(ptp)
 
 	for (i = 0; i < NUM_PTP_CH_STATS; i++)
 		sprintf(data + (idx++) * ETH_GSTRING_LEN,
-			ptp_ch_stats_desc[i].format);
+			"%s", ptp_ch_stats_desc[i].format);
 
 	if (priv->tx_ptp_opened) {
 		for (tc = 0; tc < priv->max_opened_tc; tc++)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
index 830444f927d4..19bf2b66707d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
@@ -479,7 +479,7 @@ irq_pool_alloc(struct mlx5_core_dev *dev, int start, int size, char *name,
 	pool->xa_num_irqs.max = start + size - 1;
 	if (name)
 		snprintf(pool->name, MLX5_MAX_IRQ_NAME - MLX5_MAX_IRQ_IDX_CHARS,
-			 name);
+			 "%s", name);
 	pool->min_threshold = min_threshold * MLX5_EQ_REFS_PER_IRQ;
 	pool->max_threshold = max_threshold * MLX5_EQ_REFS_PER_IRQ;
 	mlx5_core_dbg(dev, "pool->name = %s, pool->size = %d, pool->start = %d",
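(Not part of the patch: a minimal stand-alone C illustration of the warning
class fixed above. The function and buffer names here are made up; only the
snprintf pattern matters.)

#include <stdio.h>

/* Hypothetical example of the -Wformat-security pattern. */
void copy_label(char *dst, size_t len, const char *label)
{
	/* snprintf(dst, len, label);  -- warns: format string is not a string
	 * literal, and any '%' inside 'label' would be interpreted as a
	 * conversion specifier.
	 */
	snprintf(dst, len, "%s", label);	/* safe: 'label' is plain data */
}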
Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Aya Levin , Gal Pressman , Saeed Mahameed Subject: [net-next v0 03/15] net/mlx5: Avoid printing health buffer when firmware is unavailable Date: Tue, 16 Nov 2021 20:33:45 -0800 Message-Id: <20211117043357.345072-4-saeed@kernel.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211117043357.345072-1-saeed@kernel.org> References: <20211117043357.345072-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Aya Levin Use firmware version field as an indication to health buffer's sanity. When firmware version is 0xFFFFFFFF, deduce that firmware is unavailable and avoid printing the health buffer to dmesg as it doesn't provide debug info. Signed-off-by: Aya Levin Reviewed-by: Gal Pressman Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/health.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c index 64f1abc4dc36..75121bc1eaa5 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/health.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c @@ -420,6 +420,11 @@ static void print_health_info(struct mlx5_core_dev *dev) if (!ioread8(&h->synd)) return; + if (ioread32be(&h->fw_ver) == 0xFFFFFFFF) { + mlx5_log(dev, LOGLEVEL_ERR, "PCI slot is unavailable\n"); + return; + } + rfr_severity = ioread8(&h->rfr_severity); severity = mlx5_health_get_severity(rfr_severity); mlx5_log(dev, severity, "Health issue observed, %s, severity(%d) %s:\n", From patchwork Wed Nov 17 04:33:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12623687 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8AFD3C433EF for ; Wed, 17 Nov 2021 04:34:11 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7058E613A2 for ; Wed, 17 Nov 2021 04:34:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233101AbhKQEhH (ORCPT ); Tue, 16 Nov 2021 23:37:07 -0500 Received: from mail.kernel.org ([198.145.29.99]:41478 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232056AbhKQEhG (ORCPT ); Tue, 16 Nov 2021 23:37:06 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id C9E6061BE3; Wed, 17 Nov 2021 04:34:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1637123648; bh=ElPYJOhmFOaVjKU6lb5E3zyJMK1GdN5kG2LrGtUgg4k=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=QDdWXT0jNjy2s05Qk6jHFD7HJLYE6WIsi06wBEhxVDhKldBlqajh+1DZuFK/N/JJU SXzJmT0wgcId7SHZRR0+m54ohG7nSYORSeDZYyT1735ha6l1G/Hwd5Wv4pT2lPJisx 6un7VNa/0vKyTw56L5P7/LgvdpTL+8yl38+DdztZWBo/edLUQjHdeORksxJ3moNJ7e 2yLaWqfteg5ZkkCljrHBhaReqACLdmDMX/7uJJ0czjOCTs/PPdfj1xtUmgtCx7s1p3 gHWAV39BRs69DjHhR1z5ITiKjVM+UO+zle1CAqdckYbkvTM5dLgAb/o/xHqdoq288Y oJsdEMZcfqYuw== From: Saeed Mahameed To: "David S. 
Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Paul Blakey , Oz Shlomo , Roi Dayan , Saeed Mahameed Subject: [net-next v0 04/15] net/mlx5e: Refactor mod header management API Date: Tue, 16 Nov 2021 20:33:46 -0800 Message-Id: <20211117043357.345072-5-saeed@kernel.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211117043357.345072-1-saeed@kernel.org> References: <20211117043357.345072-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Paul Blakey For all mod hdr related functions to reside in a single self contained component (mod_hdr.c), refactor alloc() and add get_id() so that user won't rely on internal implementation, and move both to mod_hdr component. Rename the prefix to mlx5e_mod_hdr_* as other mod hdr functions. Signed-off-by: Paul Blakey Reviewed-by: Oz Shlomo Reviewed-by: Roi Dayan Signed-off-by: Saeed Mahameed --- .../ethernet/mellanox/mlx5/core/en/mod_hdr.c | 47 ++++++++++ .../ethernet/mellanox/mlx5/core/en/mod_hdr.h | 13 +++ .../mellanox/mlx5/core/en/tc/sample.c | 5 +- .../ethernet/mellanox/mlx5/core/en/tc_ct.c | 25 ++---- .../net/ethernet/mellanox/mlx5/core/en_tc.c | 90 ++++--------------- .../net/ethernet/mellanox/mlx5/core/en_tc.h | 5 -- .../mellanox/mlx5/core/esw/indir_table.c | 5 +- 7 files changed, 90 insertions(+), 100 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.c b/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.c index 7edde4d536fd..19d05fb4aab2 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.c @@ -155,3 +155,50 @@ struct mlx5_modify_hdr *mlx5e_mod_hdr_get(struct mlx5e_mod_hdr_handle *mh) return mh->modify_hdr; } +char * +mlx5e_mod_hdr_alloc(struct mlx5_core_dev *mdev, int namespace, + struct mlx5e_tc_mod_hdr_acts *mod_hdr_acts) +{ + int new_num_actions, max_hw_actions; + size_t new_sz, old_sz; + void *ret; + + if (mod_hdr_acts->num_actions < mod_hdr_acts->max_actions) + goto out; + + max_hw_actions = mlx5e_mod_hdr_max_actions(mdev, namespace); + new_num_actions = min(max_hw_actions, + mod_hdr_acts->actions ? 
+ mod_hdr_acts->max_actions * 2 : 1); + if (mod_hdr_acts->max_actions == new_num_actions) + return ERR_PTR(-ENOSPC); + + new_sz = MLX5_MH_ACT_SZ * new_num_actions; + old_sz = mod_hdr_acts->max_actions * MLX5_MH_ACT_SZ; + + ret = krealloc(mod_hdr_acts->actions, new_sz, GFP_KERNEL); + if (!ret) + return ERR_PTR(-ENOMEM); + + memset(ret + old_sz, 0, new_sz - old_sz); + mod_hdr_acts->actions = ret; + mod_hdr_acts->max_actions = new_num_actions; + +out: + return mod_hdr_acts->actions + (mod_hdr_acts->num_actions * MLX5_MH_ACT_SZ); +} + +void +mlx5e_mod_hdr_dealloc(struct mlx5e_tc_mod_hdr_acts *mod_hdr_acts) +{ + kfree(mod_hdr_acts->actions); + mod_hdr_acts->actions = NULL; + mod_hdr_acts->num_actions = 0; + mod_hdr_acts->max_actions = 0; +} + +char * +mlx5e_mod_hdr_get_item(struct mlx5e_tc_mod_hdr_acts *mod_hdr_acts, int pos) +{ + return mod_hdr_acts->actions + (pos * MLX5_MH_ACT_SZ); +} diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.h b/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.h index 33b23d8f9182..b8cd1a7a31be 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.h @@ -15,6 +15,11 @@ struct mlx5e_tc_mod_hdr_acts { void *actions; }; +char *mlx5e_mod_hdr_alloc(struct mlx5_core_dev *mdev, int namespace, + struct mlx5e_tc_mod_hdr_acts *mod_hdr_acts); +void mlx5e_mod_hdr_dealloc(struct mlx5e_tc_mod_hdr_acts *mod_hdr_acts); +char *mlx5e_mod_hdr_get_item(struct mlx5e_tc_mod_hdr_acts *mod_hdr_acts, int pos); + struct mlx5e_mod_hdr_handle * mlx5e_mod_hdr_attach(struct mlx5_core_dev *mdev, struct mod_hdr_tbl *tbl, @@ -28,4 +33,12 @@ struct mlx5_modify_hdr *mlx5e_mod_hdr_get(struct mlx5e_mod_hdr_handle *mh); void mlx5e_mod_hdr_tbl_init(struct mod_hdr_tbl *tbl); void mlx5e_mod_hdr_tbl_destroy(struct mod_hdr_tbl *tbl); +static inline int mlx5e_mod_hdr_max_actions(struct mlx5_core_dev *mdev, int namespace) +{ + if (namespace == MLX5_FLOW_NAMESPACE_FDB) /* FDB offloading */ + return MLX5_CAP_ESW_FLOWTABLE_FDB(mdev, max_modify_header_actions); + else /* namespace is MLX5_FLOW_NAMESPACE_KERNEL - NIC offloading */ + return MLX5_CAP_FLOWTABLE_NIC_RX(mdev, max_modify_header_actions); +} + #endif /* __MLX5E_EN_MOD_HDR_H__ */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c index df6888c4793c..ff4b4f8a5a9d 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c @@ -5,6 +5,7 @@ #include #include "en/mapping.h" #include "en/tc/post_act.h" +#include "en/mod_hdr.h" #include "sample.h" #include "eswitch.h" #include "en_tc.h" @@ -255,12 +256,12 @@ sample_modify_hdr_get(struct mlx5_core_dev *mdev, u32 obj_id, goto err_modify_hdr; } - dealloc_mod_hdr_actions(&mod_acts); + mlx5e_mod_hdr_dealloc(&mod_acts); return modify_hdr; err_modify_hdr: err_post_act: - dealloc_mod_hdr_actions(&mod_acts); + mlx5e_mod_hdr_dealloc(&mod_acts); err_set_regc0: return ERR_PTR(err); } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c index c1c6e74c79c4..89065fac7590 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c @@ -609,22 +609,15 @@ mlx5_tc_ct_entry_create_nat(struct mlx5_tc_ct_priv *ct_priv, struct flow_action *flow_action = &flow_rule->action; struct mlx5_core_dev *mdev = ct_priv->dev; struct flow_action_entry *act; - size_t action_size; char *modact; int err, 
i; - action_size = MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto); - flow_action_for_each(i, act, flow_action) { switch (act->id) { case FLOW_ACTION_MANGLE: { - err = alloc_mod_hdr_actions(mdev, ct_priv->ns_type, - mod_acts); - if (err) - return err; - - modact = mod_acts->actions + - mod_acts->num_actions * action_size; + modact = mlx5e_mod_hdr_alloc(mdev, ct_priv->ns_type, mod_acts); + if (IS_ERR(modact)) + return PTR_ERR(modact); err = mlx5_tc_ct_parse_mangle_to_mod_act(act, modact); if (err) @@ -706,11 +699,11 @@ mlx5_tc_ct_entry_create_mod_hdr(struct mlx5_tc_ct_priv *ct_priv, attr->modify_hdr = mlx5e_mod_hdr_get(*mh); } - dealloc_mod_hdr_actions(&mod_acts); + mlx5e_mod_hdr_dealloc(&mod_acts); return 0; err_mapping: - dealloc_mod_hdr_actions(&mod_acts); + mlx5e_mod_hdr_dealloc(&mod_acts); mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id); return err; } @@ -1445,7 +1438,7 @@ static int tc_ct_pre_ct_add_rules(struct mlx5_ct_ft *ct_ft, } pre_ct->miss_rule = rule; - dealloc_mod_hdr_actions(&pre_mod_acts); + mlx5e_mod_hdr_dealloc(&pre_mod_acts); kvfree(spec); return 0; @@ -1454,7 +1447,7 @@ static int tc_ct_pre_ct_add_rules(struct mlx5_ct_ft *ct_ft, err_flow_rule: mlx5_modify_header_dealloc(dev, pre_ct->modify_hdr); err_mapping: - dealloc_mod_hdr_actions(&pre_mod_acts); + mlx5e_mod_hdr_dealloc(&pre_mod_acts); kvfree(spec); return err; } @@ -1850,14 +1843,14 @@ __mlx5_tc_ct_flow_offload(struct mlx5_tc_ct_priv *ct_priv, } attr->ct_attr.ct_flow = ct_flow; - dealloc_mod_hdr_actions(&pre_mod_acts); + mlx5e_mod_hdr_dealloc(&pre_mod_acts); return ct_flow->pre_ct_rule; err_insert_orig: mlx5_modify_header_dealloc(priv->mdev, pre_ct_attr->modify_hdr); err_mapping: - dealloc_mod_hdr_actions(&pre_mod_acts); + mlx5e_mod_hdr_dealloc(&pre_mod_acts); mlx5_chains_put_chain_mapping(ct_priv->chains, ct_flow->chain_mapping); err_get_chain: kfree(ct_flow->pre_ct_attr); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c index 835caa1c7b74..e620100eabe0 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c @@ -71,7 +71,6 @@ #include "lag/mp.h" #define nic_chains(priv) ((priv)->fs.tc.chains) -#define MLX5_MH_ACT_SZ MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto) #define MLX5E_TC_TABLE_NUM_GROUPS 4 #define MLX5E_TC_TABLE_MAX_GROUP_SIZE BIT(18) @@ -209,12 +208,9 @@ mlx5e_tc_match_to_reg_set_and_get_id(struct mlx5_core_dev *mdev, char *modact; int err; - err = alloc_mod_hdr_actions(mdev, ns, mod_hdr_acts); - if (err) - return err; - - modact = mod_hdr_acts->actions + - (mod_hdr_acts->num_actions * MLX5_MH_ACT_SZ); + modact = mlx5e_mod_hdr_alloc(mdev, ns, mod_hdr_acts); + if (IS_ERR(modact)) + return PTR_ERR(modact); /* Firmware has 5bit length field and 0 means 32bits */ if (mlen == 32) @@ -333,7 +329,7 @@ void mlx5e_tc_match_to_reg_mod_hdr_change(struct mlx5_core_dev *mdev, int mlen = mlx5e_tc_attr_to_reg_mappings[type].mlen; char *modact; - modact = mod_hdr_acts->actions + (act_id * MLX5_MH_ACT_SZ); + modact = mlx5e_mod_hdr_get_item(mod_hdr_acts, act_id); /* Firmware has 5bit length field and 0 means 32bits */ if (mlen == 32) @@ -1076,7 +1072,7 @@ mlx5e_tc_add_nic_flow(struct mlx5e_priv *priv, if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) { err = mlx5e_attach_mod_hdr(priv, flow, parse_attr); - dealloc_mod_hdr_actions(&parse_attr->mod_hdr_acts); + mlx5e_mod_hdr_dealloc(&parse_attr->mod_hdr_acts); if (err) return err; } @@ -1623,7 +1619,7 @@ static void mlx5e_tc_del_fdb_flow(struct 
mlx5e_priv *priv, mlx5_tc_ct_match_del(get_ct_priv(priv), &flow->attr->ct_attr); if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) { - dealloc_mod_hdr_actions(&attr->parse_attr->mod_hdr_acts); + mlx5e_mod_hdr_dealloc(&attr->parse_attr->mod_hdr_acts); if (vf_tun && attr->modify_hdr) mlx5_modify_header_dealloc(priv->mdev, attr->modify_hdr); else @@ -2766,13 +2762,12 @@ static int offload_pedit_fields(struct mlx5e_priv *priv, struct netlink_ext_ack *extack) { struct pedit_headers *set_masks, *add_masks, *set_vals, *add_vals; - int i, action_size, first, last, next_z; void *headers_c, *headers_v, *action, *vals_p; u32 *s_masks_p, *a_masks_p, s_mask, a_mask; struct mlx5e_tc_mod_hdr_acts *mod_acts; - struct mlx5_fields *f; unsigned long mask, field_mask; - int err; + int i, first, last, next_z; + struct mlx5_fields *f; u8 cmd; mod_acts = &parse_attr->mod_hdr_acts; @@ -2784,8 +2779,6 @@ static int offload_pedit_fields(struct mlx5e_priv *priv, set_vals = &hdrs[0].vals; add_vals = &hdrs[1].vals; - action_size = MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto); - for (i = 0; i < ARRAY_SIZE(fields); i++) { bool skip; @@ -2853,18 +2846,16 @@ static int offload_pedit_fields(struct mlx5e_priv *priv, return -EOPNOTSUPP; } - err = alloc_mod_hdr_actions(priv->mdev, namespace, mod_acts); - if (err) { + action = mlx5e_mod_hdr_alloc(priv->mdev, namespace, mod_acts); + if (IS_ERR(action)) { NL_SET_ERR_MSG_MOD(extack, "too many pedit actions, can't offload"); mlx5_core_warn(priv->mdev, "mlx5: parsed %d pedit actions, can't do more\n", mod_acts->num_actions); - return err; + return PTR_ERR(action); } - action = mod_acts->actions + - (mod_acts->num_actions * action_size); MLX5_SET(set_action_in, action, action_type, cmd); MLX5_SET(set_action_in, action, field, f->field); @@ -2894,57 +2885,6 @@ static int offload_pedit_fields(struct mlx5e_priv *priv, return 0; } -static int mlx5e_flow_namespace_max_modify_action(struct mlx5_core_dev *mdev, - int namespace) -{ - if (namespace == MLX5_FLOW_NAMESPACE_FDB) /* FDB offloading */ - return MLX5_CAP_ESW_FLOWTABLE_FDB(mdev, max_modify_header_actions); - else /* namespace is MLX5_FLOW_NAMESPACE_KERNEL - NIC offloading */ - return MLX5_CAP_FLOWTABLE_NIC_RX(mdev, max_modify_header_actions); -} - -int alloc_mod_hdr_actions(struct mlx5_core_dev *mdev, - int namespace, - struct mlx5e_tc_mod_hdr_acts *mod_hdr_acts) -{ - int action_size, new_num_actions, max_hw_actions; - size_t new_sz, old_sz; - void *ret; - - if (mod_hdr_acts->num_actions < mod_hdr_acts->max_actions) - return 0; - - action_size = MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto); - - max_hw_actions = mlx5e_flow_namespace_max_modify_action(mdev, - namespace); - new_num_actions = min(max_hw_actions, - mod_hdr_acts->actions ? 
- mod_hdr_acts->max_actions * 2 : 1); - if (mod_hdr_acts->max_actions == new_num_actions) - return -ENOSPC; - - new_sz = action_size * new_num_actions; - old_sz = mod_hdr_acts->max_actions * action_size; - ret = krealloc(mod_hdr_acts->actions, new_sz, GFP_KERNEL); - if (!ret) - return -ENOMEM; - - memset(ret + old_sz, 0, new_sz - old_sz); - mod_hdr_acts->actions = ret; - mod_hdr_acts->max_actions = new_num_actions; - - return 0; -} - -void dealloc_mod_hdr_actions(struct mlx5e_tc_mod_hdr_acts *mod_hdr_acts) -{ - kfree(mod_hdr_acts->actions); - mod_hdr_acts->actions = NULL; - mod_hdr_acts->num_actions = 0; - mod_hdr_acts->max_actions = 0; -} - static const struct pedit_headers zero_masks = {}; static int @@ -2967,7 +2907,7 @@ parse_pedit_to_modify_hdr(struct mlx5e_priv *priv, goto out_err; } - if (!mlx5e_flow_namespace_max_modify_action(priv->mdev, namespace)) { + if (!mlx5e_mod_hdr_max_actions(priv->mdev, namespace)) { NL_SET_ERR_MSG_MOD(extack, "The pedit offload action is not supported"); goto out_err; @@ -3060,7 +3000,7 @@ static int alloc_tc_pedit_action(struct mlx5e_priv *priv, int namespace, return 0; out_dealloc_parsed_actions: - dealloc_mod_hdr_actions(&parse_attr->mod_hdr_acts); + mlx5e_mod_hdr_dealloc(&parse_attr->mod_hdr_acts); return err; } @@ -3489,7 +3429,7 @@ actions_prepare_mod_hdr_actions(struct mlx5e_priv *priv, return 0; attr->action &= ~MLX5_FLOW_CONTEXT_ACTION_MOD_HDR; - dealloc_mod_hdr_actions(&parse_attr->mod_hdr_acts); + mlx5e_mod_hdr_dealloc(&parse_attr->mod_hdr_acts); if (ns_type != MLX5_FLOW_NAMESPACE_FDB) return 0; @@ -4708,7 +4648,7 @@ mlx5e_add_nic_flow(struct mlx5e_priv *priv, err_free: flow_flag_set(flow, FAILED); - dealloc_mod_hdr_actions(&parse_attr->mod_hdr_acts); + mlx5e_mod_hdr_dealloc(&parse_attr->mod_hdr_acts); mlx5e_flow_put(priv, flow); out: return err; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h index fdb222793027..eb042f0f5a41 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h @@ -247,11 +247,6 @@ int mlx5e_tc_add_flow_mod_hdr(struct mlx5e_priv *priv, struct mlx5e_tc_flow_parse_attr *parse_attr, struct mlx5e_tc_flow *flow); -int alloc_mod_hdr_actions(struct mlx5_core_dev *mdev, - int namespace, - struct mlx5e_tc_mod_hdr_acts *mod_hdr_acts); -void dealloc_mod_hdr_actions(struct mlx5e_tc_mod_hdr_acts *mod_hdr_acts); - struct mlx5e_tc_flow; u32 mlx5e_tc_get_flow_tun_id(struct mlx5e_tc_flow *flow); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.c index 425c91814b34..c275fe028b6d 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.c @@ -14,6 +14,7 @@ #include "fs_core.h" #include "esw/indir_table.h" #include "lib/fs_chains.h" +#include "en/mod_hdr.h" #define MLX5_ESW_INDIR_TABLE_SIZE 128 #define MLX5_ESW_INDIR_TABLE_RECIRC_IDX_MAX (MLX5_ESW_INDIR_TABLE_SIZE - 2) @@ -226,7 +227,7 @@ static int mlx5_esw_indir_table_rule_get(struct mlx5_eswitch *esw, goto err_handle; } - dealloc_mod_hdr_actions(&mod_acts); + mlx5e_mod_hdr_dealloc(&mod_acts); rule->handle = handle; rule->vni = esw_attr->rx_tun_attr->vni; rule->mh = flow_act.modify_hdr; @@ -243,7 +244,7 @@ static int mlx5_esw_indir_table_rule_get(struct mlx5_eswitch *esw, mlx5_modify_header_dealloc(esw->dev, flow_act.modify_hdr); err_mod_hdr_alloc: err_mod_hdr_regc1: - dealloc_mod_hdr_actions(&mod_acts); + 
mlx5e_mod_hdr_dealloc(&mod_acts);
 err_mod_hdr_regc0:
 err_ethertype:
 	kfree(rule);

From patchwork Wed Nov 17 04:33:47 2021
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 12623691
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Paul Blakey, Saeed Mahameed, Oz Shlomo, Roi Dayan
Subject: [net-next v0 05/15] net/mlx5: CT: Allow static allocation of mod headers
Date: Tue, 16 Nov 2021 20:33:47 -0800
Message-Id: <20211117043357.345072-6-saeed@kernel.org>
In-Reply-To: <20211117043357.345072-1-saeed@kernel.org>

From: Paul Blakey

As each CT rule uses at least 4 modify header actions, each rule causes
at least 3 reallocations by the mod header actions API. Allow initial
static allocation of the mod acts array, and use it for CT rules. If the
static allocation is exceeded, go back to dynamic allocation.
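(Not part of the patch: a minimal usage sketch of the helpers this series
consolidates in en/mod_hdr.c, namely mlx5e_mod_hdr_alloc()/mlx5e_mod_hdr_dealloc()
from patch 4 plus the DECLARE_MOD_HDR_ACTS* stack-allocation macros added here.
The caller function, the slot count and the single SET action are hypothetical.)

#include <linux/err.h>
#include "en/mod_hdr.h"

/* Hypothetical caller: build modify-header actions into a small on-stack
 * array, letting mlx5e_mod_hdr_alloc() move to the heap only if the static
 * slots run out.
 */
static int example_build_mod_hdr(struct mlx5_core_dev *mdev, int ns)
{
	DECLARE_MOD_HDR_ACTS_ACTIONS(actions_arr, 4);	/* 4 static slots */
	DECLARE_MOD_HDR_ACTS(mod_acts, actions_arr);
	char *modact;

	/* Returns the next free action slot; grows the array (and clears
	 * is_static) once the static slots are exhausted.
	 */
	modact = mlx5e_mod_hdr_alloc(mdev, ns, &mod_acts);
	if (IS_ERR(modact))
		return PTR_ERR(modact);

	MLX5_SET(set_action_in, modact, action_type, MLX5_ACTION_TYPE_SET);
	mod_acts.num_actions++;

	/* ... hand mod_acts.num_actions / mod_acts.actions to
	 * mlx5_modify_header_alloc(), as the existing callers do ...
	 */

	/* Frees only if the array was reallocated onto the heap. */
	mlx5e_mod_hdr_dealloc(&mod_acts);
	return 0;
}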
Signed-off-by: Saeed Mahameed Signed-off-by: Paul Blakey Reviewed-by: Oz Shlomo Reviewed-by: Roi Dayan --- .../ethernet/mellanox/mlx5/core/en/mod_hdr.c | 17 ++++++++++++++--- .../ethernet/mellanox/mlx5/core/en/mod_hdr.h | 13 +++++++++++++ .../net/ethernet/mellanox/mlx5/core/en/tc_ct.c | 9 ++++++++- 3 files changed, 35 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.c b/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.c index 19d05fb4aab2..17325c5d6516 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.c @@ -176,11 +176,20 @@ mlx5e_mod_hdr_alloc(struct mlx5_core_dev *mdev, int namespace, new_sz = MLX5_MH_ACT_SZ * new_num_actions; old_sz = mod_hdr_acts->max_actions * MLX5_MH_ACT_SZ; - ret = krealloc(mod_hdr_acts->actions, new_sz, GFP_KERNEL); + if (mod_hdr_acts->is_static) { + ret = kzalloc(new_sz, GFP_KERNEL); + if (ret) { + memcpy(ret, mod_hdr_acts->actions, old_sz); + mod_hdr_acts->is_static = false; + } + } else { + ret = krealloc(mod_hdr_acts->actions, new_sz, GFP_KERNEL); + if (ret) + memset(ret + old_sz, 0, new_sz - old_sz); + } if (!ret) return ERR_PTR(-ENOMEM); - memset(ret + old_sz, 0, new_sz - old_sz); mod_hdr_acts->actions = ret; mod_hdr_acts->max_actions = new_num_actions; @@ -191,7 +200,9 @@ mlx5e_mod_hdr_alloc(struct mlx5_core_dev *mdev, int namespace, void mlx5e_mod_hdr_dealloc(struct mlx5e_tc_mod_hdr_acts *mod_hdr_acts) { - kfree(mod_hdr_acts->actions); + if (!mod_hdr_acts->is_static) + kfree(mod_hdr_acts->actions); + mod_hdr_acts->actions = NULL; mod_hdr_acts->num_actions = 0; mod_hdr_acts->max_actions = 0; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.h b/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.h index b8cd1a7a31be..b8dac418d0a5 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/mod_hdr.h @@ -7,14 +7,27 @@ #include #include +#define MLX5_MH_ACT_SZ MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto) + struct mlx5e_mod_hdr_handle; struct mlx5e_tc_mod_hdr_acts { int num_actions; int max_actions; + bool is_static; void *actions; }; +#define DECLARE_MOD_HDR_ACTS_ACTIONS(name, len) \ + u8 name[len][MLX5_MH_ACT_SZ] = {} + +#define DECLARE_MOD_HDR_ACTS(name, acts_arr) \ + struct mlx5e_tc_mod_hdr_acts name = { \ + .max_actions = ARRAY_SIZE(acts_arr), \ + .is_static = true, \ + .actions = acts_arr, \ + } + char *mlx5e_mod_hdr_alloc(struct mlx5_core_dev *mdev, int namespace, struct mlx5e_tc_mod_hdr_acts *mod_hdr_acts); void mlx5e_mod_hdr_dealloc(struct mlx5e_tc_mod_hdr_acts *mod_hdr_acts); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c index 89065fac7590..f89a4c7a4f71 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c @@ -36,6 +36,12 @@ #define MLX5_CT_LABELS_BITS (mlx5e_tc_attr_to_reg_mappings[LABELS_TO_REG].mlen) #define MLX5_CT_LABELS_MASK GENMASK(MLX5_CT_LABELS_BITS - 1, 0) +/* Statically allocate modify actions for + * ipv6 and port nat (5) + tuple fields (4) + nic mode zone restore (1) = 10. + * This will be increased dynamically if needed (for the ipv6 snat + dnat). 
+ */
+#define MLX5_CT_MIN_MOD_ACTS 10
+
 #define ct_dbg(fmt, args...)\
 	netdev_dbg(ct_priv->netdev, "ct_debug: " fmt "\n", ##args)
 
@@ -645,7 +651,8 @@ mlx5_tc_ct_entry_create_mod_hdr(struct mlx5_tc_ct_priv *ct_priv,
 			       struct mlx5e_mod_hdr_handle **mh,
 			       u8 zone_restore_id, bool nat)
 {
-	struct mlx5e_tc_mod_hdr_acts mod_acts = {};
+	DECLARE_MOD_HDR_ACTS_ACTIONS(actions_arr, MLX5_CT_MIN_MOD_ACTS);
+	DECLARE_MOD_HDR_ACTS(mod_acts, actions_arr);
 	struct flow_action_entry *meta;
 	u16 ct_state = 0;
 	int err;

From patchwork Wed Nov 17 04:33:48 2021
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 12623693
From: Saeed Mahameed
Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Yihao Han , Roi Dayan , Saeed Mahameed Subject: [net-next v0 06/15] net/mlx5: TC, using swap() instead of tmp variable Date: Tue, 16 Nov 2021 20:33:48 -0800 Message-Id: <20211117043357.345072-7-saeed@kernel.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211117043357.345072-1-saeed@kernel.org> References: <20211117043357.345072-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Yihao Han swap() was used instead of the tmp variable to swap values Signed-off-by: Yihao Han Reviewed-by: Roi Dayan Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c index f89a4c7a4f71..9a31f45a9d9b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c @@ -907,12 +907,9 @@ mlx5_tc_ct_shared_counter_get(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_tuple rev_tuple = entry->tuple; struct mlx5_ct_counter *shared_counter; struct mlx5_ct_entry *rev_entry; - __be16 tmp_port; /* get the reversed tuple */ - tmp_port = rev_tuple.port.src; - rev_tuple.port.src = rev_tuple.port.dst; - rev_tuple.port.dst = tmp_port; + swap(rev_tuple.port.src, rev_tuple.port.dst); if (rev_tuple.addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) { __be32 tmp_addr = rev_tuple.ip.src_v4; From patchwork Wed Nov 17 04:33:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12623697 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 70612C433EF for ; Wed, 17 Nov 2021 04:34:16 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 60D9A61423 for ; Wed, 17 Nov 2021 04:34:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233122AbhKQEhM (ORCPT ); Tue, 16 Nov 2021 23:37:12 -0500 Received: from mail.kernel.org ([198.145.29.99]:41502 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233103AbhKQEhH (ORCPT ); Tue, 16 Nov 2021 23:37:07 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 4CB7661BE2; Wed, 17 Nov 2021 04:34:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1637123649; bh=UC3JFjTqCdAL69z/iGsEJgpz+jx33KnBfQMWLIUn+sY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=IJrkZ3KFAo4UArK+/RLWvzzLtNDVv6bE6dAlLLeLqxsxRJ+xddoeMEvMkRGwQnHyr GqYRyUs1X4TX75NPIDz1aG3MRBn0L1p6a9hvJ2m7C7lSDQNJbc8wtPUmVjWmu9hry2 Na6k1SStoq19C83F/ew9PjiaaTWqtnxk9N1PMRN4dB2h3rl8IiOCkmYoDP7Sxdw+vO 0NNfrth2wKRZEqo2uPxiB+5ECBCqRTYteVQMkUwW2I6Q8aOfLCsxImqHmG2b7xSWEJ B/t5dGh3iUyLGzoD3qYX2gCyA+qervaHND6IGeruryVc+nW8tyKFGE28Rl5Fy9BCLo VThH4Q47QFRDw== From: Saeed Mahameed To: "David S. 
Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Roi Dayan , Vlad Buslov , Saeed Mahameed Subject: [net-next v0 07/15] net/mlx5e: TC, Destroy nic flow counter if exists Date: Tue, 16 Nov 2021 20:33:49 -0800 Message-Id: <20211117043357.345072-8-saeed@kernel.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211117043357.345072-1-saeed@kernel.org> References: <20211117043357.345072-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Roi Dayan Counter is only added if counter flag exists. So check the counter fag exists for deleting the counter. This is the same as in add/del fdb flow. Signed-off-by: Roi Dayan Reviewed-by: Vlad Buslov Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c index e620100eabe0..3e542b030fc1 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c @@ -1133,7 +1133,8 @@ static void mlx5e_tc_del_nic_flow(struct mlx5e_priv *priv, if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) mlx5e_detach_mod_hdr(priv, flow); - mlx5_fc_destroy(priv->mdev, attr->counter); + if (attr->action & MLX5_FLOW_CONTEXT_ACTION_COUNT) + mlx5_fc_destroy(priv->mdev, attr->counter); if (flow_flag_test(flow, HAIRPIN)) mlx5e_hairpin_flow_del(priv, flow); From patchwork Wed Nov 17 04:33:50 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12623695 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 727DCC433FE for ; Wed, 17 Nov 2021 04:34:16 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 54C81615E6 for ; Wed, 17 Nov 2021 04:34:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233125AbhKQEhM (ORCPT ); Tue, 16 Nov 2021 23:37:12 -0500 Received: from mail.kernel.org ([198.145.29.99]:41520 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233037AbhKQEhH (ORCPT ); Tue, 16 Nov 2021 23:37:07 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id BAE7561875; Wed, 17 Nov 2021 04:34:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1637123650; bh=mdqhvsW4CQOlR6VXHaUi1T12dLm9N+o8AE/ZyoJRAbA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=FSFayc/rlf2MZOciG5ifQtOJzKZzQACvqpjEOXUH/eF6JpBQ2kEyWay3SZxcdosh1 Ko6sTwCf/xUd1+RSeY6MyQ4PCIvtJLfQF/UepEt/Dqu1NKQAxApFgP6WziiyA4iSnG T2hwiNNdHiV2Ql+ebUv+A5RVXkM4QMPigwGwQijaHLk2Q3Q1xX5cXRFeTzBDZohygW CgBIqLzbKfrGzB4LB53bAuZ3qF6N5ikYqrBgkliM/eMkVfaX3ohgdT0jRvgYNHvm1R jfb2Kc2LSZhW6//hPuwZSo7FgMZkS3gGldwKgE6IdtFRqPxMJ5qKwpOLJHVrjicS+c s5ZcO06YsMF2w== From: Saeed Mahameed To: "David S. 
Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Roi Dayan , Vlad Buslov , Saeed Mahameed Subject: [net-next v0 08/15] net/mlx5e: TC, Move kfree() calls after destroying all resources Date: Tue, 16 Nov 2021 20:33:50 -0800 Message-Id: <20211117043357.345072-9-saeed@kernel.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211117043357.345072-1-saeed@kernel.org> References: <20211117043357.345072-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Roi Dayan When deleting fdb/nic flow rules first release all resources and then call the kfree() calls instead of sparse them around the function. Signed-off-by: Roi Dayan Reviewed-by: Vlad Buslov Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c index 3e542b030fc1..aa4da8d1e252 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c @@ -1128,8 +1128,6 @@ static void mlx5e_tc_del_nic_flow(struct mlx5e_priv *priv, } mutex_unlock(&priv->fs.tc.t_lock); - kvfree(attr->parse_attr); - if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) mlx5e_detach_mod_hdr(priv, flow); @@ -1139,6 +1137,7 @@ static void mlx5e_tc_del_nic_flow(struct mlx5e_priv *priv, if (flow_flag_test(flow, HAIRPIN)) mlx5e_hairpin_flow_del(priv, flow); + kvfree(attr->parse_attr); kfree(flow->attr); } @@ -1626,9 +1625,6 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv, else mlx5e_detach_mod_hdr(priv, flow); } - kfree(attr->sample_attr); - kvfree(attr->parse_attr); - kvfree(attr->esw_attr->rx_tun_attr); if (attr->action & MLX5_FLOW_CONTEXT_ACTION_COUNT) mlx5_fc_destroy(esw_attr->counter_dev, attr->counter); @@ -1642,6 +1638,9 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv, if (flow_flag_test(flow, L3_TO_L2_DECAP)) mlx5e_detach_decap(priv, flow); + kfree(attr->sample_attr); + kvfree(attr->esw_attr->rx_tun_attr); + kvfree(attr->parse_attr); kfree(flow->attr); } From patchwork Wed Nov 17 04:33:51 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12623699 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A1A8CC4332F for ; Wed, 17 Nov 2021 04:34:17 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 849DF61BFA for ; Wed, 17 Nov 2021 04:34:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233130AbhKQEhN (ORCPT ); Tue, 16 Nov 2021 23:37:13 -0500 Received: from mail.kernel.org ([198.145.29.99]:41526 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233105AbhKQEhI (ORCPT ); Tue, 16 Nov 2021 23:37:08 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 345CD615E6; Wed, 17 Nov 2021 04:34:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1637123650; bh=a8pFSV0H0TTbtm+KVo1dvE//T7pp5jOJBAraMnDjpNA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; 
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Roi Dayan, Maor Dickman, Saeed Mahameed
Subject: [net-next v0 09/15] net/mlx5e: TC, Move comment about mod header flag to correct place
Date: Tue, 16 Nov 2021 20:33:51 -0800
Message-Id: <20211117043357.345072-10-saeed@kernel.org>
In-Reply-To: <20211117043357.345072-1-saeed@kernel.org>

From: Roi Dayan

Move the comment to the correct place, where the driver actually removes
the flag, and not above the check for whether any pedit actions exist.

Signed-off-by: Roi Dayan
Reviewed-by: Maor Dickman
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index aa4da8d1e252..686bb2e08e9e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -3424,10 +3424,10 @@ actions_prepare_mod_hdr_actions(struct mlx5e_priv *priv,
 	if (err)
 		return err;
 
-	/* In case all pedit actions are skipped, remove the MOD_HDR flag. */
 	if (parse_attr->mod_hdr_acts.num_actions > 0)
 		return 0;
 
+	/* In case all pedit actions are skipped, remove the MOD_HDR flag. */
 	attr->action &= ~MLX5_FLOW_CONTEXT_ACTION_MOD_HDR;
 	mlx5e_mod_hdr_dealloc(&parse_attr->mod_hdr_acts);

From patchwork Wed Nov 17 04:33:52 2021
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 12623701
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Chris Mi, Roi Dayan, Saeed Mahameed
Subject: [net-next v0 10/15] net/mlx5e: Specify out ifindex when looking up decap route
Date: Tue, 16 Nov 2021 20:33:52 -0800
Message-Id: <20211117043357.345072-11-saeed@kernel.org>
In-Reply-To: <20211117043357.345072-1-saeed@kernel.org>

From: Chris Mi

There is a use case where the local and remote VTEPs are in the same
host. Currently, the out ifindex is not specified when looking up the
decap route for offloads, so in this case a local route is returned and
the route dev is lo.

The actual tunnel interface can be created with the parameter "dev" [1],
which specifies the physical device to use for tunnel endpoint
communication. Pass this parameter to the driver when looking up the
decap route for offloads, so that a unicast route is returned.
[1] ip link add name vxlan1 type vxlan id 100 dev enp4s0f0 remote 1.1.1.1 dstport 4789 Signed-off-by: Chris Mi Reviewed-by: Roi Dayan Signed-off-by: Saeed Mahameed --- .../ethernet/mellanox/mlx5/core/en/tc_tun.c | 23 ++++++++++--------- .../ethernet/mellanox/mlx5/core/en/tc_tun.h | 3 ++- .../mellanox/mlx5/core/en/tc_tun_encap.c | 4 ++-- 3 files changed, 16 insertions(+), 14 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c index a5e450973225..33815246fead 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c @@ -103,7 +103,7 @@ static int get_route_and_out_devs(struct mlx5e_priv *priv, } static int mlx5e_route_lookup_ipv4_get(struct mlx5e_priv *priv, - struct net_device *mirred_dev, + struct net_device *dev, struct mlx5e_tc_tun_route_attr *attr) { struct net_device *route_dev; @@ -122,13 +122,13 @@ static int mlx5e_route_lookup_ipv4_get(struct mlx5e_priv *priv, uplink_dev = mlx5_eswitch_uplink_get_proto_dev(esw, REP_ETH); attr->fl.fl4.flowi4_oif = uplink_dev->ifindex; } else { - struct mlx5e_tc_tunnel *tunnel = mlx5e_get_tc_tun(mirred_dev); + struct mlx5e_tc_tunnel *tunnel = mlx5e_get_tc_tun(dev); if (tunnel && tunnel->get_remote_ifindex) - attr->fl.fl4.flowi4_oif = tunnel->get_remote_ifindex(mirred_dev); + attr->fl.fl4.flowi4_oif = tunnel->get_remote_ifindex(dev); } - rt = ip_route_output_key(dev_net(mirred_dev), &attr->fl.fl4); + rt = ip_route_output_key(dev_net(dev), &attr->fl.fl4); if (IS_ERR(rt)) return PTR_ERR(rt); @@ -440,10 +440,10 @@ int mlx5e_tc_tun_update_header_ipv4(struct mlx5e_priv *priv, #if IS_ENABLED(CONFIG_INET) && IS_ENABLED(CONFIG_IPV6) static int mlx5e_route_lookup_ipv6_get(struct mlx5e_priv *priv, - struct net_device *mirred_dev, + struct net_device *dev, struct mlx5e_tc_tun_route_attr *attr) { - struct mlx5e_tc_tunnel *tunnel = mlx5e_get_tc_tun(mirred_dev); + struct mlx5e_tc_tunnel *tunnel = mlx5e_get_tc_tun(dev); struct net_device *route_dev; struct net_device *out_dev; struct dst_entry *dst; @@ -451,8 +451,8 @@ static int mlx5e_route_lookup_ipv6_get(struct mlx5e_priv *priv, int ret; if (tunnel && tunnel->get_remote_ifindex) - attr->fl.fl6.flowi6_oif = tunnel->get_remote_ifindex(mirred_dev); - dst = ipv6_stub->ipv6_dst_lookup_flow(dev_net(mirred_dev), NULL, &attr->fl.fl6, + attr->fl.fl6.flowi6_oif = tunnel->get_remote_ifindex(dev); + dst = ipv6_stub->ipv6_dst_lookup_flow(dev_net(dev), NULL, &attr->fl.fl6, NULL); if (IS_ERR(dst)) return PTR_ERR(dst); @@ -708,7 +708,8 @@ int mlx5e_tc_tun_update_header_ipv6(struct mlx5e_priv *priv, int mlx5e_tc_tun_route_lookup(struct mlx5e_priv *priv, struct mlx5_flow_spec *spec, - struct mlx5_flow_attr *flow_attr) + struct mlx5_flow_attr *flow_attr, + struct net_device *filter_dev) { struct mlx5_esw_flow_attr *esw_attr = flow_attr->esw_attr; struct mlx5e_tc_int_port *int_port; @@ -720,14 +721,14 @@ int mlx5e_tc_tun_route_lookup(struct mlx5e_priv *priv, /* Addresses are swapped for decap */ attr.fl.fl4.saddr = esw_attr->rx_tun_attr->dst_ip.v4; attr.fl.fl4.daddr = esw_attr->rx_tun_attr->src_ip.v4; - err = mlx5e_route_lookup_ipv4_get(priv, priv->netdev, &attr); + err = mlx5e_route_lookup_ipv4_get(priv, filter_dev, &attr); } #if IS_ENABLED(CONFIG_INET) && IS_ENABLED(CONFIG_IPV6) else if (flow_attr->tun_ip_version == 6) { /* Addresses are swapped for decap */ attr.fl.fl6.saddr = esw_attr->rx_tun_attr->dst_ip.v6; attr.fl.fl6.daddr = esw_attr->rx_tun_attr->src_ip.v6; - err = 
mlx5e_route_lookup_ipv6_get(priv, priv->netdev, &attr); + err = mlx5e_route_lookup_ipv6_get(priv, filter_dev, &attr); } #endif else diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.h index aa092eaeaec3..b38f693bbb52 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.h @@ -94,7 +94,8 @@ mlx5e_tc_tun_update_header_ipv6(struct mlx5e_priv *priv, #endif int mlx5e_tc_tun_route_lookup(struct mlx5e_priv *priv, struct mlx5_flow_spec *spec, - struct mlx5_flow_attr *attr); + struct mlx5_flow_attr *attr, + struct net_device *filter_dev); bool mlx5e_tc_tun_device_to_offload(struct mlx5e_priv *priv, struct net_device *netdev); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c index 660cca73c36c..de16bbc08679 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c @@ -1153,7 +1153,7 @@ int mlx5e_attach_decap_route(struct mlx5e_priv *priv, tbl_time_before = mlx5e_route_tbl_get_last_update(priv); tbl_time_after = tbl_time_before; - err = mlx5e_tc_tun_route_lookup(priv, &parse_attr->spec, attr); + err = mlx5e_tc_tun_route_lookup(priv, &parse_attr->spec, attr, parse_attr->filter_dev); if (err || !esw_attr->rx_tun_attr->decap_vport) goto out; @@ -1474,7 +1474,7 @@ static void mlx5e_reoffload_decap(struct mlx5e_priv *priv, parse_attr = attr->parse_attr; spec = &parse_attr->spec; - err = mlx5e_tc_tun_route_lookup(priv, spec, attr); + err = mlx5e_tc_tun_route_lookup(priv, spec, attr, parse_attr->filter_dev); if (err) { mlx5_core_warn(priv->mdev, "Failed to lookup route for flow, %d\n", err); From patchwork Wed Nov 17 04:33:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12623703 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D51C9C433F5 for ; Wed, 17 Nov 2021 04:34:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BF2FE61057 for ; Wed, 17 Nov 2021 04:34:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233145AbhKQEhR (ORCPT ); Tue, 16 Nov 2021 23:37:17 -0500 Received: from mail.kernel.org ([198.145.29.99]:41570 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233109AbhKQEhJ (ORCPT ); Tue, 16 Nov 2021 23:37:09 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 21DE361155; Wed, 17 Nov 2021 04:34:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1637123651; bh=Dr/RM5IMWN97wlWq6QEcpwKh6cRWK6NlljFkbHtJSc8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=EXxOQlJ4EqfFlDBY6tVP8oFI+0pDDnZZCa7euxyuf7rZGS1RZ239X6PfIhJUNZ707 1/MNbSPF0BCN54/2/zcdYKv8mOEiV1hV6YU05yXtQO5h1ZPGNNmtGl1ag5A+v8OIGQ Yi9uZ0CwBLh8NfYbFTaLQtl2Mb2F7jZdpRA7oPJOdTWXjUqiX3i+9+kWH1OQqdU8H7 7HoBRx7L5jYDPNgj3NlahiQUntEdDIUr4hQko6ASVgohyDvdSihOTaibIZSnw9TcTy omVK4CNBecP6zTRUSdDYrpdCTSVwSGxNSxLaqQmSgiXlMyu/d+uPxM4R4tp6YQpt52 1FOddpOsfPRnQ== From: Saeed Mahameed To: "David S. 
Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Parav Pandit , Sunil Sudhakar Rani , Mark Bloch , Saeed Mahameed Subject: [net-next v0 11/15] net/mlx5: E-switch, Remove vport enabled check Date: Tue, 16 Nov 2021 20:33:53 -0800 Message-Id: <20211117043357.345072-12-saeed@kernel.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211117043357.345072-1-saeed@kernel.org> References: <20211117043357.345072-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Parav Pandit An eswitch vport of the devlink port is always enabled before a devlink port is registered. And a eswitch vport is always disabled after a devlink port is unregistered. Hence avoid the vport enabled check in the devlink callback routine. Such check is only applicable in the legacy SR-IOV callbacks. Signed-off-by: Parav Pandit Reviewed-by: Sunil Sudhakar Rani Reviewed-by: Mark Bloch Signed-off-by: Saeed Mahameed --- .../net/ethernet/mellanox/mlx5/core/eswitch.c | 17 +++++------------ 1 file changed, 5 insertions(+), 12 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c index ec136b499204..b039f8b07d31 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c @@ -1704,7 +1704,6 @@ int mlx5_devlink_port_function_hw_addr_get(struct devlink_port *port, { struct mlx5_eswitch *esw; struct mlx5_vport *vport; - int err = -EOPNOTSUPP; u16 vport_num; esw = mlx5_devlink_eswitch_get(port->devlink); @@ -1722,13 +1721,10 @@ int mlx5_devlink_port_function_hw_addr_get(struct devlink_port *port, } mutex_lock(&esw->state_lock); - if (vport->enabled) { - ether_addr_copy(hw_addr, vport->info.mac); - *hw_addr_len = ETH_ALEN; - err = 0; - } + ether_addr_copy(hw_addr, vport->info.mac); + *hw_addr_len = ETH_ALEN; mutex_unlock(&esw->state_lock); - return err; + return 0; } int mlx5_devlink_port_function_hw_addr_set(struct devlink_port *port, @@ -1737,8 +1733,8 @@ int mlx5_devlink_port_function_hw_addr_set(struct devlink_port *port, { struct mlx5_eswitch *esw; struct mlx5_vport *vport; - int err = -EOPNOTSUPP; u16 vport_num; + int err; esw = mlx5_devlink_eswitch_get(port->devlink); if (IS_ERR(esw)) { @@ -1758,10 +1754,7 @@ int mlx5_devlink_port_function_hw_addr_set(struct devlink_port *port, } mutex_lock(&esw->state_lock); - if (vport->enabled) - err = mlx5_esw_set_vport_mac_locked(esw, vport, hw_addr); - else - NL_SET_ERR_MSG_MOD(extack, "Eswitch vport is disabled"); + err = mlx5_esw_set_vport_mac_locked(esw, vport, hw_addr); mutex_unlock(&esw->state_lock); return err; } From patchwork Wed Nov 17 04:33:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12623705 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 62482C433EF for ; Wed, 17 Nov 2021 04:34:22 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4DB9661057 for ; Wed, 17 Nov 2021 04:34:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233132AbhKQEhS (ORCPT ); Tue, 16 Nov 2021 23:37:18 -0500 Received: from mail.kernel.org 
([198.145.29.99]:41576 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233110AbhKQEhK (ORCPT ); Tue, 16 Nov 2021 23:37:10 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 9D4F6613A2; Wed, 17 Nov 2021 04:34:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1637123651; bh=A+5PETo5MR9vCScCQQmU0qbqMXgepGG2kkXw8HPUlbw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=PMpIpHo8cGkc7Q9JTZJiKfyRoXTpv3NiL7VR0m/LlvYJ72XGMNYCO4dz7AXJjZPkK xZb3JorY6ZdrsTsh2iSU6FqCeP+bFxgTLVfnhhqrTtmxuGhlQrXtPaT+mhekSM06ke AebDkiUTEJlvvAGnJx2spuywLQ7/J5PfVTMJrnzCUBJVBKio4awMRlPNRt8u2GLKuy 6HmQFGi68HnRMpBxRtl3iOTkQBBHifDvbGV/YYEjWR5KQMRyA4sxIJem20Rg+6NJbt lP4r7H4P4i0Q1/h1ecRo55uS9tqhWyecNNKxcZW8cZd3Hu6DmkLI2o5Qb6QDzamqKB VtLyn/ZSv7cUQ== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Parav Pandit , Mark Bloch , Saeed Mahameed Subject: [net-next v0 12/15] net/mlx5: E-switch, Reuse mlx5_eswitch_set_vport_mac Date: Tue, 16 Nov 2021 20:33:54 -0800 Message-Id: <20211117043357.345072-13-saeed@kernel.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211117043357.345072-1-saeed@kernel.org> References: <20211117043357.345072-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Parav Pandit mlx5_eswitch_set_vport_mac() routine already does necessary checks which are duplicated in implementation of mlx5_devlink_port_function_hw_addr_set(). Hence, reuse mlx5_eswitch_set_vport_mac() and cut down the code. Signed-off-by: Parav Pandit Reviewed-by: Mark Bloch Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 12 +----------- 1 file changed, 1 insertion(+), 11 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c index b039f8b07d31..c0526fc27ad6 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c @@ -1732,9 +1732,7 @@ int mlx5_devlink_port_function_hw_addr_set(struct devlink_port *port, struct netlink_ext_ack *extack) { struct mlx5_eswitch *esw; - struct mlx5_vport *vport; u16 vport_num; - int err; esw = mlx5_devlink_eswitch_get(port->devlink); if (IS_ERR(esw)) { @@ -1747,16 +1745,8 @@ int mlx5_devlink_port_function_hw_addr_set(struct devlink_port *port, NL_SET_ERR_MSG_MOD(extack, "Port doesn't support set hw_addr"); return -EINVAL; } - vport = mlx5_eswitch_get_vport(esw, vport_num); - if (IS_ERR(vport)) { - NL_SET_ERR_MSG_MOD(extack, "Invalid port"); - return PTR_ERR(vport); - } - mutex_lock(&esw->state_lock); - err = mlx5_esw_set_vport_mac_locked(esw, vport, hw_addr); - mutex_unlock(&esw->state_lock); - return err; + return mlx5_eswitch_set_vport_mac(esw, vport_num, hw_addr); } int mlx5_eswitch_set_vport_state(struct mlx5_eswitch *esw, From patchwork Wed Nov 17 04:33:55 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12623707 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C5D5BC433FE for ; Wed, 17 Nov 2021 04:34:23 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org 
(Postfix) with ESMTP id AA6FA61155 for ; Wed, 17 Nov 2021 04:34:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233111AbhKQEhT (ORCPT ); Tue, 16 Nov 2021 23:37:19 -0500 Received: from mail.kernel.org ([198.145.29.99]:41606 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232579AbhKQEhL (ORCPT ); Tue, 16 Nov 2021 23:37:11 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 1B969619F6; Wed, 17 Nov 2021 04:34:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1637123652; bh=MfGRTO5dC0E38da3ARs3dUzXCVX3wHeLNTcyaEMzigE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=scPLYVGMwF+T2oLfdSLCOyTQcNRTA58neLhmT5MWWM0Tg4PyzGyPN+6J0IY7pAYzE MFdFmU2rBvHVyixWqAHZ8l79lGUOxgzg5QPfC6M12BdE6fbAfDkB4GJGcC9DVXbAHi 79D2Qyv7QBVuzJICMfA0zCchY0BuGFAdawYJ9QyroYMohJN8Q0igssex2tfrqL30ck GvRTvfXbCkNr3OQXxrREh/4ZEG+a0y2Sa99uuUW1P+bOOLNP55CAvf2WoLj59SvQeY eNwfFNsmSU3j7F3Tj4J/uLKLtv0/aF5M4HRHMTwZDZW7jcbjfMV4GiRwyTyRPNovmE ER0+Db45kBhTQ== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Parav Pandit , Mark Bloch , Saeed Mahameed Subject: [net-next v0 13/15] net/mlx5: E-switch, move offloads mode callbacks to offloads file Date: Tue, 16 Nov 2021 20:33:55 -0800 Message-Id: <20211117043357.345072-14-saeed@kernel.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211117043357.345072-1-saeed@kernel.org> References: <20211117043357.345072-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Parav Pandit eswitch.c is mainly for common code between legacy and offloads mode. MAC address get and set via devlink is applicable only in offloads mode. Hence, move it to eswitch_offloads.c file. 
Signed-off-by: Parav Pandit Reviewed-by: Mark Bloch Signed-off-by: Saeed Mahameed --- .../net/ethernet/mellanox/mlx5/core/eswitch.c | 59 ------------------- .../mellanox/mlx5/core/eswitch_offloads.c | 59 +++++++++++++++++++ 2 files changed, 59 insertions(+), 59 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c index c0526fc27ad6..ec5b1641d40c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c @@ -1690,65 +1690,6 @@ bool mlx5_esw_is_sf_vport(struct mlx5_eswitch *esw, u16 vport_num) return mlx5_esw_check_port_type(esw, vport_num, MLX5_ESW_VPT_SF); } -static bool -is_port_function_supported(struct mlx5_eswitch *esw, u16 vport_num) -{ - return vport_num == MLX5_VPORT_PF || - mlx5_eswitch_is_vf_vport(esw, vport_num) || - mlx5_esw_is_sf_vport(esw, vport_num); -} - -int mlx5_devlink_port_function_hw_addr_get(struct devlink_port *port, - u8 *hw_addr, int *hw_addr_len, - struct netlink_ext_ack *extack) -{ - struct mlx5_eswitch *esw; - struct mlx5_vport *vport; - u16 vport_num; - - esw = mlx5_devlink_eswitch_get(port->devlink); - if (IS_ERR(esw)) - return PTR_ERR(esw); - - vport_num = mlx5_esw_devlink_port_index_to_vport_num(port->index); - if (!is_port_function_supported(esw, vport_num)) - return -EOPNOTSUPP; - - vport = mlx5_eswitch_get_vport(esw, vport_num); - if (IS_ERR(vport)) { - NL_SET_ERR_MSG_MOD(extack, "Invalid port"); - return PTR_ERR(vport); - } - - mutex_lock(&esw->state_lock); - ether_addr_copy(hw_addr, vport->info.mac); - *hw_addr_len = ETH_ALEN; - mutex_unlock(&esw->state_lock); - return 0; -} - -int mlx5_devlink_port_function_hw_addr_set(struct devlink_port *port, - const u8 *hw_addr, int hw_addr_len, - struct netlink_ext_ack *extack) -{ - struct mlx5_eswitch *esw; - u16 vport_num; - - esw = mlx5_devlink_eswitch_get(port->devlink); - if (IS_ERR(esw)) { - NL_SET_ERR_MSG_MOD(extack, "Eswitch doesn't support set hw_addr"); - return PTR_ERR(esw); - } - - vport_num = mlx5_esw_devlink_port_index_to_vport_num(port->index); - if (!is_port_function_supported(esw, vport_num)) { - NL_SET_ERR_MSG_MOD(extack, "Port doesn't support set hw_addr"); - return -EINVAL; - } - - return mlx5_eswitch_set_vport_mac(esw, vport_num, hw_addr); -} - int mlx5_eswitch_set_vport_state(struct mlx5_eswitch *esw, u16 vport, int link_state) { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c index f4eaa5893886..4bd502ae82b6 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c @@ -3862,3 +3862,62 @@ u32 mlx5_eswitch_get_vport_metadata_for_set(struct mlx5_eswitch *esw, return vport->metadata; } EXPORT_SYMBOL(mlx5_eswitch_get_vport_metadata_for_set); + +static bool +is_port_function_supported(struct mlx5_eswitch *esw, u16 vport_num) +{ + return vport_num == MLX5_VPORT_PF || + mlx5_eswitch_is_vf_vport(esw, vport_num) || + mlx5_esw_is_sf_vport(esw, vport_num); +} + +int mlx5_devlink_port_function_hw_addr_get(struct devlink_port *port, + u8 *hw_addr, int *hw_addr_len, + struct netlink_ext_ack *extack) +{ + struct mlx5_eswitch *esw; + struct mlx5_vport *vport; + u16 vport_num; + + esw = mlx5_devlink_eswitch_get(port->devlink); + if (IS_ERR(esw)) + return PTR_ERR(esw); + + vport_num = mlx5_esw_devlink_port_index_to_vport_num(port->index); + if (!is_port_function_supported(esw, vport_num)) + return -EOPNOTSUPP; + + 
vport = mlx5_eswitch_get_vport(esw, vport_num); + if (IS_ERR(vport)) { + NL_SET_ERR_MSG_MOD(extack, "Invalid port"); + return PTR_ERR(vport); + } + + mutex_lock(&esw->state_lock); + ether_addr_copy(hw_addr, vport->info.mac); + *hw_addr_len = ETH_ALEN; + mutex_unlock(&esw->state_lock); + return 0; +} + +int mlx5_devlink_port_function_hw_addr_set(struct devlink_port *port, + const u8 *hw_addr, int hw_addr_len, + struct netlink_ext_ack *extack) +{ + struct mlx5_eswitch *esw; + u16 vport_num; + + esw = mlx5_devlink_eswitch_get(port->devlink); + if (IS_ERR(esw)) { + NL_SET_ERR_MSG_MOD(extack, "Eswitch doesn't support set hw_addr"); + return PTR_ERR(esw); + } + + vport_num = mlx5_esw_devlink_port_index_to_vport_num(port->index); + if (!is_port_function_supported(esw, vport_num)) { + NL_SET_ERR_MSG_MOD(extack, "Port doesn't support set hw_addr"); + return -EINVAL; + } + + return mlx5_eswitch_set_vport_mac(esw, vport_num, hw_addr); +} From patchwork Wed Nov 17 04:33:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12623709 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4949AC433F5 for ; Wed, 17 Nov 2021 04:34:25 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 33418619F6 for ; Wed, 17 Nov 2021 04:34:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233139AbhKQEhV (ORCPT ); Tue, 16 Nov 2021 23:37:21 -0500 Received: from mail.kernel.org ([198.145.29.99]:41608 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233100AbhKQEhL (ORCPT ); Tue, 16 Nov 2021 23:37:11 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 8AA8461057; Wed, 17 Nov 2021 04:34:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1637123652; bh=ncQNbEfxSifNw5yovNSNN65a4vz0bo1cwHTw7TjHoUA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=V7fb0+A+BtY++LpIUm9eOT3aeb+5LyTmHD396ddVsSp/qJQIk226hgKucJhz2jv/e PEO1+FK1ToCY8oMxP5Xt+CJUislbsSfAJWtUkftlrhgev4Hcl9Dzum8Oeg7u6J5oNE 4htSKobPsdU2RCSP2qVCBUcfbmubY9QUtl6cy/Ez300C+jvs6+nNvQDLmVIVPoBDum /ZeZzKkecQYUcKmz2exZxHvF0iCGTQ/Xxhd00E4Kz16Lxjw+4Y7csOQ/cLI761lW4A ZE39lTow1PEglZSKUqB0Tpm1ZKHUM9dGF7m9xBkrasHsKXYgLPHbhU3nncrcsdwgLj 7lTx4YoIoMQOw== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Dmytro Linkin , Parav Pandit , Mark Bloch , Saeed Mahameed Subject: [net-next v0 14/15] net/mlx5: E-switch, Enable vport QoS on demand Date: Tue, 16 Nov 2021 20:33:56 -0800 Message-Id: <20211117043357.345072-15-saeed@kernel.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211117043357.345072-1-saeed@kernel.org> References: <20211117043357.345072-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Dmytro Linkin Vports' QoS is not commonly used but consume SW/HW resources, which becomes an issue on BlueField SoC systems. 
Don't enable QoS on vports by default on eswitch mode change and enable when it's going to be used by one of the top level users: - configuring TC matchall filter with police action; - setting rate with legacy NDO API; - calling devlink ops->rate_leaf_*() callbacks. Disable vport QoS on vport cleanup. Signed-off-by: Dmytro Linkin Reviewed-by: Parav Pandit Reviewed-by: Mark Bloch Signed-off-by: Saeed Mahameed --- .../ethernet/mellanox/mlx5/core/esw/legacy.c | 4 +- .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 82 +++++++++++++------ .../net/ethernet/mellanox/mlx5/core/esw/qos.h | 12 +-- .../net/ethernet/mellanox/mlx5/core/eswitch.c | 9 +- 4 files changed, 64 insertions(+), 43 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c index df277a6cddc0..2b52f7c09152 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c @@ -522,9 +522,7 @@ int mlx5_eswitch_set_vport_rate(struct mlx5_eswitch *esw, u16 vport, return PTR_ERR(evport); mutex_lock(&esw->state_lock); - err = mlx5_esw_qos_set_vport_min_rate(esw, evport, min_rate, NULL); - if (!err) - err = mlx5_esw_qos_set_vport_max_rate(esw, evport, max_rate, NULL); + err = mlx5_esw_qos_set_vport_rate(esw, evport, max_rate, min_rate); mutex_unlock(&esw->state_lock); return err; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c index c6cc67cb4f6a..304abc293086 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c @@ -204,10 +204,8 @@ static int esw_qos_normalize_groups_min_rate(struct mlx5_eswitch *esw, u32 divid return 0; } -int mlx5_esw_qos_set_vport_min_rate(struct mlx5_eswitch *esw, - struct mlx5_vport *evport, - u32 min_rate, - struct netlink_ext_ack *extack) +static int esw_qos_set_vport_min_rate(struct mlx5_eswitch *esw, struct mlx5_vport *evport, + u32 min_rate, struct netlink_ext_ack *extack) { u32 fw_max_bw_share, previous_min_rate; bool min_rate_supported; @@ -231,10 +229,8 @@ int mlx5_esw_qos_set_vport_min_rate(struct mlx5_eswitch *esw, return err; } -int mlx5_esw_qos_set_vport_max_rate(struct mlx5_eswitch *esw, - struct mlx5_vport *evport, - u32 max_rate, - struct netlink_ext_ack *extack) +static int esw_qos_set_vport_max_rate(struct mlx5_eswitch *esw, struct mlx5_vport *evport, + u32 max_rate, struct netlink_ext_ack *extack) { u32 act_max_rate = max_rate; bool max_rate_supported; @@ -605,8 +601,8 @@ void mlx5_esw_qos_destroy(struct mlx5_eswitch *esw) mutex_unlock(&esw->state_lock); } -int mlx5_esw_qos_vport_enable(struct mlx5_eswitch *esw, struct mlx5_vport *vport, - u32 max_rate, u32 bw_share) +static int esw_qos_vport_enable(struct mlx5_eswitch *esw, struct mlx5_vport *vport, + u32 max_rate, u32 bw_share) { int err; @@ -615,7 +611,7 @@ int mlx5_esw_qos_vport_enable(struct mlx5_eswitch *esw, struct mlx5_vport *vport return 0; if (vport->qos.enabled) - return -EEXIST; + return 0; vport->qos.group = esw->qos.group0; @@ -645,31 +641,55 @@ void mlx5_esw_qos_vport_disable(struct mlx5_eswitch *esw, struct mlx5_vport *vpo esw_warn(esw->dev, "E-Switch destroy TSAR vport element failed (vport=%d,err=%d)\n", vport->vport, err); - vport->qos.enabled = false; + memset(&vport->qos, 0, sizeof(vport->qos)); trace_mlx5_esw_vport_qos_destroy(vport); } +int mlx5_esw_qos_set_vport_rate(struct mlx5_eswitch *esw, struct mlx5_vport *vport, + u32 min_rate, u32 max_rate) +{ + int err; 
+ + lockdep_assert_held(&esw->state_lock); + err = esw_qos_vport_enable(esw, vport, 0, 0); + if (err) + return err; + + err = esw_qos_set_vport_min_rate(esw, vport, min_rate, NULL); + if (!err) + err = esw_qos_set_vport_max_rate(esw, vport, max_rate, NULL); + + return err; +} + int mlx5_esw_qos_modify_vport_rate(struct mlx5_eswitch *esw, u16 vport_num, u32 rate_mbps) { u32 ctx[MLX5_ST_SZ_DW(scheduling_context)] = {}; struct mlx5_vport *vport; u32 bitmask; + int err; vport = mlx5_eswitch_get_vport(esw, vport_num); if (IS_ERR(vport)) return PTR_ERR(vport); - if (!vport->qos.enabled) - return -EOPNOTSUPP; - - MLX5_SET(scheduling_context, ctx, max_average_bw, rate_mbps); - bitmask = MODIFY_SCHEDULING_ELEMENT_IN_MODIFY_BITMASK_MAX_AVERAGE_BW; + mutex_lock(&esw->state_lock); + if (!vport->qos.enabled) { + /* Eswitch QoS wasn't enabled yet. Enable it and vport QoS. */ + err = esw_qos_vport_enable(esw, vport, rate_mbps, vport->qos.bw_share); + } else { + MLX5_SET(scheduling_context, ctx, max_average_bw, rate_mbps); + + bitmask = MODIFY_SCHEDULING_ELEMENT_IN_MODIFY_BITMASK_MAX_AVERAGE_BW; + err = mlx5_modify_scheduling_element_cmd(esw->dev, + SCHEDULING_HIERARCHY_E_SWITCH, + ctx, + vport->qos.esw_tsar_ix, + bitmask); + } + mutex_unlock(&esw->state_lock); - return mlx5_modify_scheduling_element_cmd(esw->dev, - SCHEDULING_HIERARCHY_E_SWITCH, - ctx, - vport->qos.esw_tsar_ix, - bitmask); + return err; } #define MLX5_LINKSPEED_UNIT 125000 /* 1Mbps in Bps */ @@ -728,7 +748,12 @@ int mlx5_esw_devlink_rate_leaf_tx_share_set(struct devlink_rate *rate_leaf, void return err; mutex_lock(&esw->state_lock); - err = mlx5_esw_qos_set_vport_min_rate(esw, vport, tx_share, extack); + err = esw_qos_vport_enable(esw, vport, 0, 0); + if (err) + goto unlock; + + err = esw_qos_set_vport_min_rate(esw, vport, tx_share, extack); +unlock: mutex_unlock(&esw->state_lock); return err; } @@ -749,7 +774,12 @@ int mlx5_esw_devlink_rate_leaf_tx_max_set(struct devlink_rate *rate_leaf, void * return err; mutex_lock(&esw->state_lock); - err = mlx5_esw_qos_set_vport_max_rate(esw, vport, tx_max, extack); + err = esw_qos_vport_enable(esw, vport, 0, 0); + if (err) + goto unlock; + + err = esw_qos_set_vport_max_rate(esw, vport, tx_max, extack); +unlock: mutex_unlock(&esw->state_lock); return err; } @@ -846,7 +876,9 @@ int mlx5_esw_qos_vport_update_group(struct mlx5_eswitch *esw, int err; mutex_lock(&esw->state_lock); - err = esw_qos_vport_update_group(esw, vport, group, extack); + err = esw_qos_vport_enable(esw, vport, 0, 0); + if (!err) + err = esw_qos_vport_update_group(esw, vport, group, extack); mutex_unlock(&esw->state_lock); return err; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h index 28451abe2d2f..91b66c1b9881 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h @@ -6,18 +6,10 @@ #ifdef CONFIG_MLX5_ESWITCH -int mlx5_esw_qos_set_vport_min_rate(struct mlx5_eswitch *esw, - struct mlx5_vport *evport, - u32 min_rate, - struct netlink_ext_ack *extack); -int mlx5_esw_qos_set_vport_max_rate(struct mlx5_eswitch *esw, - struct mlx5_vport *evport, - u32 max_rate, - struct netlink_ext_ack *extack); +int mlx5_esw_qos_set_vport_rate(struct mlx5_eswitch *esw, struct mlx5_vport *evport, + u32 max_rate, u32 min_rate); void mlx5_esw_qos_create(struct mlx5_eswitch *esw); void mlx5_esw_qos_destroy(struct mlx5_eswitch *esw); -int mlx5_esw_qos_vport_enable(struct mlx5_eswitch *esw, struct mlx5_vport *vport, - u32 
max_rate, u32 bw_share); void mlx5_esw_qos_vport_disable(struct mlx5_eswitch *esw, struct mlx5_vport *vport); int mlx5_esw_devlink_rate_leaf_tx_share_set(struct devlink_rate *rate_leaf, void *priv, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c index ec5b1641d40c..2d188f462028 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c @@ -781,9 +781,6 @@ static int esw_vport_setup(struct mlx5_eswitch *esw, struct mlx5_vport *vport) if (err) return err; - /* Attach vport to the eswitch rate limiter */ - mlx5_esw_qos_vport_enable(esw, vport, vport->qos.max_rate, vport->qos.bw_share); - if (mlx5_esw_is_manager_vport(esw, vport_num)) return 0; @@ -1746,8 +1743,10 @@ int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw, ivi->qos = evport->info.qos; ivi->spoofchk = evport->info.spoofchk; ivi->trusted = evport->info.trusted; - ivi->min_tx_rate = evport->qos.min_rate; - ivi->max_tx_rate = evport->qos.max_rate; + if (evport->qos.enabled) { + ivi->min_tx_rate = evport->qos.min_rate; + ivi->max_tx_rate = evport->qos.max_rate; + } mutex_unlock(&esw->state_lock); return 0; From patchwork Wed Nov 17 04:33:57 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 12623711 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D50ACC433EF for ; Wed, 17 Nov 2021 04:34:25 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BFE7861057 for ; Wed, 17 Nov 2021 04:34:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233142AbhKQEhV (ORCPT ); Tue, 16 Nov 2021 23:37:21 -0500 Received: from mail.kernel.org ([198.145.29.99]:41610 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233119AbhKQEhL (ORCPT ); Tue, 16 Nov 2021 23:37:11 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 0E7A361BE2; Wed, 17 Nov 2021 04:34:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1637123653; bh=KPhPKHQqfcLkcucc8sonHd1WUicEs6SaTpGxKTs8CM8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=gZ1FunyUvtOIvCD4cR7iAoxbq5spWsxvm6aQfkKY+3lI19N0ILkHjm8Ml9SyQWpgo Xafq8zTOJv61L5sJ7oGySlQ4JbKieO+gZtwAh6dzgv05nzQlaXSnadLQZLBbk1Yin0 rgl5pBWlAD0wnrG9BqWZNp4v4YM0qA+itU+Z0I3jx4Q0Yo5GTxvRAhKBfsj1QQDl3+ CP0c4pDFQpqODPi4H5ZSeiA+ZgTBosbG4DCbtR4dXfzjIsxWqoZLtqrCqvdjLJfIwT ibjNg4sT4iY0SR5D6NZLXBGkuUmE/5TTO1ZA6vUHVJZ1GTlkBzksznVOnRr7kAUOdm Dts8g/G1yxhfA== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Dmytro Linkin , Parav Pandit , Mark Bloch , Saeed Mahameed Subject: [net-next v0 15/15] net/mlx5: E-switch, Create QoS on demand Date: Tue, 16 Nov 2021 20:33:57 -0800 Message-Id: <20211117043357.345072-16-saeed@kernel.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211117043357.345072-1-saeed@kernel.org> References: <20211117043357.345072-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Dmytro Linkin Don't create eswitch QoS (root TSAR) on switch mode change. 
Create it on first child TSAR object creation - vport or rate group. Keep track root TSAR references and release root TSAR with last object deletion. No need to check for QoS is enabled when installing tc matchall filter. Remove related helper function due to no users of it. Signed-off-by: Dmytro Linkin Reviewed-by: Parav Pandit Reviewed-by: Mark Bloch Signed-off-by: Saeed Mahameed --- .../net/ethernet/mellanox/mlx5/core/en_tc.c | 6 - .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 152 ++++++++++++------ .../net/ethernet/mellanox/mlx5/core/esw/qos.h | 2 - .../net/ethernet/mellanox/mlx5/core/eswitch.c | 9 +- .../net/ethernet/mellanox/mlx5/core/eswitch.h | 11 +- 5 files changed, 111 insertions(+), 69 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c index 686bb2e08e9e..55e384abd364 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c @@ -4948,14 +4948,8 @@ static int scan_tc_matchall_fdb_actions(struct mlx5e_priv *priv, int mlx5e_tc_configure_matchall(struct mlx5e_priv *priv, struct tc_cls_matchall_offload *ma) { - struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; struct netlink_ext_ack *extack = ma->common.extack; - if (!mlx5_esw_qos_enabled(esw)) { - NL_SET_ERR_MSG_MOD(extack, "QoS is not supported on this device"); - return -EOPNOTSUPP; - } - if (ma->common.prio != 1) { NL_SET_ERR_MSG_MOD(extack, "only priority 1 is supported"); return -EINVAL; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c index 304abc293086..ff0a07a91992 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c @@ -428,16 +428,13 @@ static int esw_qos_vport_update_group(struct mlx5_eswitch *esw, } static struct mlx5_esw_rate_group * -esw_qos_create_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack) +__esw_qos_create_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack) { u32 tsar_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {}; struct mlx5_esw_rate_group *group; u32 divider; int err; - if (!MLX5_CAP_QOS(esw->dev, log_esw_max_sched_depth)) - return ERR_PTR(-EOPNOTSUPP); - group = kzalloc(sizeof(*group), GFP_KERNEL); if (!group) return ERR_PTR(-ENOMEM); @@ -478,9 +475,32 @@ esw_qos_create_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *exta return ERR_PTR(err); } -static int esw_qos_destroy_rate_group(struct mlx5_eswitch *esw, - struct mlx5_esw_rate_group *group, - struct netlink_ext_ack *extack) +static int esw_qos_get(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack); +static void esw_qos_put(struct mlx5_eswitch *esw); + +static struct mlx5_esw_rate_group * +esw_qos_create_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack) +{ + struct mlx5_esw_rate_group *group; + int err; + + if (!MLX5_CAP_QOS(esw->dev, log_esw_max_sched_depth)) + return ERR_PTR(-EOPNOTSUPP); + + err = esw_qos_get(esw, extack); + if (err) + return ERR_PTR(err); + + group = __esw_qos_create_rate_group(esw, extack); + if (IS_ERR(group)) + esw_qos_put(esw); + + return group; +} + +static int __esw_qos_destroy_rate_group(struct mlx5_eswitch *esw, + struct mlx5_esw_rate_group *group, + struct netlink_ext_ack *extack) { u32 divider; int err; @@ -499,7 +519,21 @@ static int esw_qos_destroy_rate_group(struct mlx5_eswitch *esw, NL_SET_ERR_MSG_MOD(extack, "E-Switch destroy TSAR_ID failed"); trace_mlx5_esw_group_qos_destroy(esw->dev, 
group, group->tsar_ix); + kfree(group); + + return err; +} + +static int esw_qos_destroy_rate_group(struct mlx5_eswitch *esw, + struct mlx5_esw_rate_group *group, + struct netlink_ext_ack *extack) +{ + int err; + + err = __esw_qos_destroy_rate_group(esw, group, extack); + esw_qos_put(esw); + return err; } @@ -522,7 +556,7 @@ static bool esw_qos_element_type_supported(struct mlx5_core_dev *dev, int type) return false; } -void mlx5_esw_qos_create(struct mlx5_eswitch *esw) +static int esw_qos_create(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack) { u32 tsar_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {}; struct mlx5_core_dev *dev = esw->dev; @@ -530,14 +564,10 @@ void mlx5_esw_qos_create(struct mlx5_eswitch *esw) int err; if (!MLX5_CAP_GEN(dev, qos) || !MLX5_CAP_QOS(dev, esw_scheduling)) - return; + return -EOPNOTSUPP; if (!esw_qos_element_type_supported(dev, SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR)) - return; - - mutex_lock(&esw->state_lock); - if (esw->qos.enabled) - goto unlock; + return -EOPNOTSUPP; MLX5_SET(scheduling_context, tsar_ctx, element_type, SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR); @@ -551,75 +581,93 @@ void mlx5_esw_qos_create(struct mlx5_eswitch *esw) &esw->qos.root_tsar_ix); if (err) { esw_warn(dev, "E-Switch create root TSAR failed (%d)\n", err); - goto unlock; + return err; } INIT_LIST_HEAD(&esw->qos.groups); if (MLX5_CAP_QOS(dev, log_esw_max_sched_depth)) { - esw->qos.group0 = esw_qos_create_rate_group(esw, NULL); + esw->qos.group0 = __esw_qos_create_rate_group(esw, extack); if (IS_ERR(esw->qos.group0)) { esw_warn(dev, "E-Switch create rate group 0 failed (%ld)\n", PTR_ERR(esw->qos.group0)); goto err_group0; } } - esw->qos.enabled = true; -unlock: - mutex_unlock(&esw->state_lock); - return; + refcount_set(&esw->qos.refcnt, 1); + + return 0; err_group0: - err = mlx5_destroy_scheduling_element_cmd(esw->dev, - SCHEDULING_HIERARCHY_E_SWITCH, - esw->qos.root_tsar_ix); - if (err) - esw_warn(esw->dev, "E-Switch destroy root TSAR failed (%d)\n", err); - mutex_unlock(&esw->state_lock); + if (mlx5_destroy_scheduling_element_cmd(esw->dev, SCHEDULING_HIERARCHY_E_SWITCH, + esw->qos.root_tsar_ix)) + esw_warn(esw->dev, "E-Switch destroy root TSAR failed.\n"); + + return err; } -void mlx5_esw_qos_destroy(struct mlx5_eswitch *esw) +static void esw_qos_destroy(struct mlx5_eswitch *esw) { - struct devlink *devlink = priv_to_devlink(esw->dev); int err; - devlink_rate_nodes_destroy(devlink); - mutex_lock(&esw->state_lock); - if (!esw->qos.enabled) - goto unlock; - if (esw->qos.group0) - esw_qos_destroy_rate_group(esw, esw->qos.group0, NULL); + __esw_qos_destroy_rate_group(esw, esw->qos.group0, NULL); err = mlx5_destroy_scheduling_element_cmd(esw->dev, SCHEDULING_HIERARCHY_E_SWITCH, esw->qos.root_tsar_ix); if (err) esw_warn(esw->dev, "E-Switch destroy root TSAR failed (%d)\n", err); +} - esw->qos.enabled = false; -unlock: - mutex_unlock(&esw->state_lock); +static int esw_qos_get(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack) +{ + int err = 0; + + lockdep_assert_held(&esw->state_lock); + + if (!refcount_inc_not_zero(&esw->qos.refcnt)) { + /* esw_qos_create() set refcount to 1 only on success. + * No need to decrement on failure. 
+ */ + err = esw_qos_create(esw, extack); + } + + return err; +} + +static void esw_qos_put(struct mlx5_eswitch *esw) +{ + lockdep_assert_held(&esw->state_lock); + if (refcount_dec_and_test(&esw->qos.refcnt)) + esw_qos_destroy(esw); } static int esw_qos_vport_enable(struct mlx5_eswitch *esw, struct mlx5_vport *vport, - u32 max_rate, u32 bw_share) + u32 max_rate, u32 bw_share, struct netlink_ext_ack *extack) { int err; lockdep_assert_held(&esw->state_lock); - if (!esw->qos.enabled) - return 0; - if (vport->qos.enabled) return 0; + err = esw_qos_get(esw, extack); + if (err) + return err; + vport->qos.group = esw->qos.group0; err = esw_qos_vport_create_sched_element(esw, vport, max_rate, bw_share); - if (!err) { - vport->qos.enabled = true; - trace_mlx5_esw_vport_qos_create(vport, bw_share, max_rate); - } + if (err) + goto err_out; + + vport->qos.enabled = true; + trace_mlx5_esw_vport_qos_create(vport, bw_share, max_rate); + + return 0; + +err_out: + esw_qos_put(esw); return err; } @@ -629,7 +677,7 @@ void mlx5_esw_qos_vport_disable(struct mlx5_eswitch *esw, struct mlx5_vport *vpo int err; lockdep_assert_held(&esw->state_lock); - if (!esw->qos.enabled || !vport->qos.enabled) + if (!vport->qos.enabled) return; WARN(vport->qos.group && vport->qos.group != esw->qos.group0, "Disabling QoS on port before detaching it from group"); @@ -643,6 +691,8 @@ void mlx5_esw_qos_vport_disable(struct mlx5_eswitch *esw, struct mlx5_vport *vpo memset(&vport->qos, 0, sizeof(vport->qos)); trace_mlx5_esw_vport_qos_destroy(vport); + + esw_qos_put(esw); } int mlx5_esw_qos_set_vport_rate(struct mlx5_eswitch *esw, struct mlx5_vport *vport, @@ -651,7 +701,7 @@ int mlx5_esw_qos_set_vport_rate(struct mlx5_eswitch *esw, struct mlx5_vport *vpo int err; lockdep_assert_held(&esw->state_lock); - err = esw_qos_vport_enable(esw, vport, 0, 0); + err = esw_qos_vport_enable(esw, vport, 0, 0, NULL); if (err) return err; @@ -676,7 +726,7 @@ int mlx5_esw_qos_modify_vport_rate(struct mlx5_eswitch *esw, u16 vport_num, u32 mutex_lock(&esw->state_lock); if (!vport->qos.enabled) { /* Eswitch QoS wasn't enabled yet. Enable it and vport QoS. 
*/ - err = esw_qos_vport_enable(esw, vport, rate_mbps, vport->qos.bw_share); + err = esw_qos_vport_enable(esw, vport, rate_mbps, vport->qos.bw_share, NULL); } else { MLX5_SET(scheduling_context, ctx, max_average_bw, rate_mbps); @@ -748,7 +798,7 @@ int mlx5_esw_devlink_rate_leaf_tx_share_set(struct devlink_rate *rate_leaf, void return err; mutex_lock(&esw->state_lock); - err = esw_qos_vport_enable(esw, vport, 0, 0); + err = esw_qos_vport_enable(esw, vport, 0, 0, extack); if (err) goto unlock; @@ -774,7 +824,7 @@ int mlx5_esw_devlink_rate_leaf_tx_max_set(struct devlink_rate *rate_leaf, void * return err; mutex_lock(&esw->state_lock); - err = esw_qos_vport_enable(esw, vport, 0, 0); + err = esw_qos_vport_enable(esw, vport, 0, 0, extack); if (err) goto unlock; @@ -876,7 +926,7 @@ int mlx5_esw_qos_vport_update_group(struct mlx5_eswitch *esw, int err; mutex_lock(&esw->state_lock); - err = esw_qos_vport_enable(esw, vport, 0, 0); + err = esw_qos_vport_enable(esw, vport, 0, 0, extack); if (!err) err = esw_qos_vport_update_group(esw, vport, group, extack); mutex_unlock(&esw->state_lock); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h index 91b66c1b9881..0141e9d52037 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.h @@ -8,8 +8,6 @@ int mlx5_esw_qos_set_vport_rate(struct mlx5_eswitch *esw, struct mlx5_vport *evport, u32 max_rate, u32 min_rate); -void mlx5_esw_qos_create(struct mlx5_eswitch *esw); -void mlx5_esw_qos_destroy(struct mlx5_eswitch *esw); void mlx5_esw_qos_vport_disable(struct mlx5_eswitch *esw, struct mlx5_vport *vport); int mlx5_esw_devlink_rate_leaf_tx_share_set(struct devlink_rate *rate_leaf, void *priv, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c index 2d188f462028..46532dd42b43 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c @@ -1257,8 +1257,6 @@ int mlx5_eswitch_enable_locked(struct mlx5_eswitch *esw, int mode, int num_vfs) mlx5_eswitch_update_num_of_vfs(esw, num_vfs); - mlx5_esw_qos_create(esw); - esw->mode = mode; if (mode == MLX5_ESWITCH_LEGACY) { @@ -1287,7 +1285,6 @@ int mlx5_eswitch_enable_locked(struct mlx5_eswitch *esw, int mode, int num_vfs) if (mode == MLX5_ESWITCH_OFFLOADS) mlx5_rescan_drivers(esw->dev); - mlx5_esw_qos_destroy(esw); mlx5_esw_acls_ns_cleanup(esw); return err; } @@ -1327,6 +1324,7 @@ int mlx5_eswitch_enable(struct mlx5_eswitch *esw, int num_vfs) void mlx5_eswitch_disable_locked(struct mlx5_eswitch *esw, bool clear_vf) { + struct devlink *devlink = priv_to_devlink(esw->dev); int old_mode; lockdep_assert_held_write(&esw->mode_lock); @@ -1356,7 +1354,8 @@ void mlx5_eswitch_disable_locked(struct mlx5_eswitch *esw, bool clear_vf) if (old_mode == MLX5_ESWITCH_OFFLOADS) mlx5_rescan_drivers(esw->dev); - mlx5_esw_qos_destroy(esw); + devlink_rate_nodes_destroy(devlink); + mlx5_esw_acls_ns_cleanup(esw); if (clear_vf) @@ -1565,6 +1564,7 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev) lockdep_register_key(&esw->mode_lock_key); init_rwsem(&esw->mode_lock); lockdep_set_class(&esw->mode_lock, &esw->mode_lock_key); + refcount_set(&esw->qos.refcnt, 0); esw->enabled_vports = 0; esw->mode = MLX5_ESWITCH_NONE; @@ -1598,6 +1598,7 @@ void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw) esw->dev->priv.eswitch = NULL; destroy_workqueue(esw->work_queue); + WARN_ON(refcount_read(&esw->qos.refcnt)); 
lockdep_unregister_key(&esw->mode_lock_key); mutex_destroy(&esw->state_lock); WARN_ON(!xa_empty(&esw->offloads.vhca_map)); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h index 42f8ee2e5d9f..513f741d16c7 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h @@ -308,10 +308,14 @@ struct mlx5_eswitch { atomic64_t user_count; struct { - bool enabled; u32 root_tsar_ix; struct mlx5_esw_rate_group *group0; struct list_head groups; /* Protected by esw->state_lock */ + + /* Protected by esw->state_lock. + * Initially 0, meaning no QoS users and QoS is disabled. + */ + refcount_t refcnt; } qos; struct mlx5_esw_bridge_offloads *br_offloads; @@ -516,11 +520,6 @@ int mlx5_eswitch_del_vlan_action(struct mlx5_eswitch *esw, int __mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw, u16 vport, u16 vlan, u8 qos, u8 set_flags); -static inline bool mlx5_esw_qos_enabled(struct mlx5_eswitch *esw) -{ - return esw->qos.enabled; -} - static inline bool mlx5_eswitch_vlan_actions_supported(struct mlx5_core_dev *dev, u8 vlan_depth) {