From patchwork Thu Apr 13 12:29:26 2023
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13210264
X-Patchwork-Delegate: kuba@kernel.org
From: Leon Romanovsky
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Leon Romanovsky, Steffen Klassert, Herbert Xu, netdev@vger.kernel.org,
    Saeed Mahameed, Raed Salem, Emeel Hakim, Simon Horman
Subject: [PATCH net-next v1 08/10] net/mlx5: Allow blocking encap changes in eswitch
Date: Thu, 13 Apr 2023 15:29:26 +0300
Message-Id: <47dc63412a5c0b8b60ff4127a54d709845b4e4de.1681388425.git.leonro@nvidia.com>
X-Mailer: git-send-email 2.39.2
X-Mailing-List: netdev@vger.kernel.org

From: Leon Romanovsky

The existing eswitch encap option enables header encapsulation. Unfortunately,
currently available hardware isn't able to perform double encapsulation, which
can happen when IPsec packet offload tunnel mode is used together with encap
mode set to BASIC. As a solution to this misconfiguration, provide an option
to block encap changes; it will be used by the IPsec packet offload code.
Reviewed-by: Emeel Hakim
Signed-off-by: Leon Romanovsky
Reviewed-by: Simon Horman
---
 .../net/ethernet/mellanox/mlx5/core/eswitch.h | 14 ++++++
 .../mellanox/mlx5/core/eswitch_offloads.c     | 48 +++++++++++++++++++
 2 files changed, 62 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 19e9a77c4633..e9d68fdf68f5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -263,6 +263,7 @@ struct mlx5_esw_offload {
 	const struct mlx5_eswitch_rep_ops *rep_ops[NUM_REP_TYPES];
 	u8 inline_mode;
 	atomic64_t num_flows;
+	u64 num_block_encap;
 	enum devlink_eswitch_encap_mode encap;
 	struct ida vport_metadata_ida;
 	unsigned int host_number; /* ECPF supports one external host */
@@ -748,6 +749,9 @@ void mlx5_eswitch_offloads_destroy_single_fdb(struct mlx5_eswitch *master_esw,
 					       struct mlx5_eswitch *slave_esw);
 int mlx5_eswitch_reload_reps(struct mlx5_eswitch *esw);
 
+bool mlx5_eswitch_block_encap(struct mlx5_core_dev *dev);
+void mlx5_eswitch_unblock_encap(struct mlx5_core_dev *dev);
+
 static inline int mlx5_eswitch_num_vfs(struct mlx5_eswitch *esw)
 {
 	if (mlx5_esw_allowed(esw))
@@ -761,6 +765,7 @@ mlx5_eswitch_get_slow_fdb(struct mlx5_eswitch *esw)
 {
 	return esw->fdb_table.offloads.slow_fdb;
 }
+
 #else /* CONFIG_MLX5_ESWITCH */
 /* eswitch API stubs */
 static inline int mlx5_eswitch_init(struct mlx5_core_dev *dev) { return 0; }
@@ -805,6 +810,15 @@ mlx5_eswitch_reload_reps(struct mlx5_eswitch *esw)
 {
 	return 0;
 }
+
+static inline bool mlx5_eswitch_block_encap(struct mlx5_core_dev *dev)
+{
+	return true;
+}
+
+static inline void mlx5_eswitch_unblock_encap(struct mlx5_core_dev *dev)
+{
+}
 #endif /* CONFIG_MLX5_ESWITCH */
 
 #endif /* __MLX5_ESWITCH_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 48036dfddd5e..b6e2709c1371 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -3586,6 +3586,47 @@ int mlx5_devlink_eswitch_inline_mode_get(struct devlink *devlink, u8 *mode)
 	return err;
 }
 
+bool mlx5_eswitch_block_encap(struct mlx5_core_dev *dev)
+{
+	struct devlink *devlink = priv_to_devlink(dev);
+	struct mlx5_eswitch *esw;
+
+	devl_lock(devlink);
+	esw = mlx5_devlink_eswitch_get(devlink);
+	if (IS_ERR(esw)) {
+		devl_unlock(devlink);
+		/* Failure means no eswitch => not possible to change encap */
+		return true;
+	}
+
+	down_write(&esw->mode_lock);
+	if (esw->mode != MLX5_ESWITCH_LEGACY &&
+	    esw->offloads.encap != DEVLINK_ESWITCH_ENCAP_MODE_NONE) {
+		up_write(&esw->mode_lock);
+		devl_unlock(devlink);
+		return false;
+	}
+
+	esw->offloads.num_block_encap++;
+	up_write(&esw->mode_lock);
+	devl_unlock(devlink);
+	return true;
+}
+
+void mlx5_eswitch_unblock_encap(struct mlx5_core_dev *dev)
+{
+	struct devlink *devlink = priv_to_devlink(dev);
+	struct mlx5_eswitch *esw;
+
+	esw = mlx5_devlink_eswitch_get(devlink);
+	if (IS_ERR(esw))
+		return;
+
+	down_write(&esw->mode_lock);
+	esw->offloads.num_block_encap--;
+	up_write(&esw->mode_lock);
+}
+
 int mlx5_devlink_eswitch_encap_mode_set(struct devlink *devlink,
 					enum devlink_eswitch_encap_mode encap,
 					struct netlink_ext_ack *extack)
@@ -3627,6 +3668,13 @@ int mlx5_devlink_eswitch_encap_mode_set(struct devlink *devlink,
 		goto unlock;
 	}
 
+	if (esw->offloads.num_block_encap) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "Can't set encapsulation when IPsec SA and/or policies are configured");
+		err = -EOPNOTSUPP;
+		goto unlock;
+	}
+
 	esw_destroy_offloads_fdb_tables(esw);
 
 	esw->offloads.encap = encap;
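
For reference, a minimal usage sketch of the new helpers follows. It is not
part of this patch: the caller and setup-helper names below
(mlx5e_ipsec_tunnel_sa_add, mlx5e_ipsec_create_sa_ft) are hypothetical
placeholders for the IPsec packet offload code; only mlx5_eswitch_block_encap()
and mlx5_eswitch_unblock_encap() come from the diff above.

/* Hypothetical caller in the IPsec packet offload path (names are illustrative). */
static int mlx5e_ipsec_tunnel_sa_add(struct mlx5_core_dev *mdev)
{
	int err;

	/* mlx5_eswitch_block_encap() returns false only when the eswitch is
	 * not in legacy mode and encap is already enabled, i.e. offloading a
	 * tunnel-mode SA would require double encapsulation.
	 */
	if (!mlx5_eswitch_block_encap(mdev))
		return -EOPNOTSUPP;

	err = mlx5e_ipsec_create_sa_ft(mdev);	/* placeholder for real SA setup */
	if (err)
		mlx5_eswitch_unblock_encap(mdev);	/* drop the block on failure */

	return err;
}

Because num_block_encap is a counter rather than a flag, each offloaded tunnel
SA or policy can hold its own reference, and devlink encap mode changes keep
failing with -EOPNOTSUPP until the last holder calls
mlx5_eswitch_unblock_encap().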