From patchwork Wed Sep 20 06:35:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13392122 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 32E265245 for ; Wed, 20 Sep 2023 06:35:59 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id E33B8C433CD; Wed, 20 Sep 2023 06:35:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695191759; bh=9ARnDZRP9AUyU5H6IoANuR91u5yR9WEuuO5J8EgCXHU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=GQOGyaVi5Hk2XDXJ/WEcDQoFDfdhoTxd4lkL3j6/xqaopWlhgW3O36DyRBUVqBTYP iztQn+/+fE4TEbe+DPLy2KoWgrb/fxbRwiBtpxsifHILMQCDvRi/02qyZhdSF39HMJ XV90HY980HDAutA0RVrE7cFegnPBfIBrdSnSE8bbBgLR0U/dhe7u7HCqnecVCfQqE2 gRPtzNLXY5YY/82xVjVt3TzNyUBx7x5dgHQ1oi3FA1K03WGONX2aukybIFgDmwORXy UoUBlQIPEYMjV0Uq619ExlZVlzV7dzqDbsOr6Tp/iWM53wPiNrAvBs9NxMSeDHu9YS U1TrWi4wTPapQ== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Jiri Pirko , Shay Drory Subject: [net-next 01/15] net/mlx5: Call mlx5_sf_id_erase() once in mlx5_sf_dealloc() Date: Tue, 19 Sep 2023 23:35:38 -0700 Message-ID: <20230920063552.296978-2-saeed@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230920063552.296978-1-saeed@kernel.org> References: <20230920063552.296978-1-saeed@kernel.org> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Jiri Pirko Before every call of mlx5_sf_dealloc(), there is a call to mlx5_sf_id_erase(). So move it to the beginning of mlx5_sf_dealloc(). Also remove redundant mlx5_sf_id_erase() call from mlx5_sf_free() as it is called only from mlx5_sf_dealloc(). 
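The refactor is the usual "hoist the step every caller duplicates into the callee" move, so call sites cannot forget it. A minimal sketch of the resulting shape, with names mirroring the driver but bodies reduced to the call structure only (illustrative, not the actual driver code):

static void sf_dealloc(struct sf_table *table, struct sf *sf)
{
        /* Previously done separately by every caller before sf_dealloc(). */
        sf_id_erase(table, sf);

        /* ... free now, or defer the free, depending on sf->hw_state ... */
}

Call sites then shrink to a single sf_dealloc(table, sf), and mlx5_sf_free() no longer needs its own erase since it is only reached from sf_dealloc().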
Signed-off-by: Jiri Pirko Reviewed-by: Shay Drory Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c index 964a5b1876f3..519033d70e05 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c @@ -112,7 +112,6 @@ mlx5_sf_alloc(struct mlx5_sf_table *table, struct mlx5_eswitch *esw, static void mlx5_sf_free(struct mlx5_sf_table *table, struct mlx5_sf *sf) { - mlx5_sf_id_erase(table, sf); mlx5_sf_hw_table_sf_free(table->dev, sf->controller, sf->id); trace_mlx5_sf_free(table->dev, sf->port_index, sf->controller, sf->hw_fn_id); kfree(sf); @@ -362,6 +361,8 @@ int mlx5_devlink_sf_port_new(struct devlink *devlink, static void mlx5_sf_dealloc(struct mlx5_sf_table *table, struct mlx5_sf *sf) { + mlx5_sf_id_erase(table, sf); + if (sf->hw_state == MLX5_VHCA_STATE_ALLOCATED) { mlx5_sf_free(table, sf); } else if (mlx5_sf_is_active(sf)) { @@ -402,7 +403,6 @@ int mlx5_devlink_sf_port_del(struct devlink *devlink, } mlx5_eswitch_unload_sf_vport(esw, sf->hw_fn_id); - mlx5_sf_id_erase(table, sf); mutex_lock(&table->sf_state_lock); mlx5_sf_dealloc(table, sf); @@ -474,7 +474,6 @@ static void mlx5_sf_deactivate_all(struct mlx5_sf_table *table) */ xa_for_each(&table->port_indices, index, sf) { mlx5_eswitch_unload_sf_vport(esw, sf->hw_fn_id); - mlx5_sf_id_erase(table, sf); mlx5_sf_dealloc(table, sf); } } From patchwork Wed Sep 20 06:35:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13392123 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7D0515671 for ; Wed, 20 Sep 2023 06:36:00 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id E5F8EC433A9; Wed, 20 Sep 2023 06:35:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695191760; bh=Xo5uKJbtf58bWVDg5AwhD2u1TTEtQPhbmvUD+Jb157o=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=D2MH9lnpTGezkb1scYZ/+xXPy64wErk++I1aoUMPQ40ZgLcb0VacUm1IrueNeaMsL ijpiQ+xPuRiNGNJFegj9CKwTiZGIQrEZ2JMtrKa+i8aSCJYH4F9y2jfRLOnQEN3bdA TYdRf6NgNYEJLW+XXVQE0w2suIbFsayim7iL2w2JUIq0WPY+V5ALUghbf+koopoylT 9CpSldhL5uBrIfpp04VGfFBdWgZwRaHAi5i2/Ef44mr/cX28oZhJ32NwyBh6Z0yB4W 5umvTTUw+ChSVgYXMijbfuD5Fggl4W8++l9ystdRgoSEyBBe3VuHG7l7Zy2c+MbfwJ slCcVh5/uQwaA== From: Saeed Mahameed To: "David S. 
Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Jiri Pirko , Shay Drory Subject: [net-next 02/15] net/mlx5: Use devlink port pointer to get the pointer of container SF struct Date: Tue, 19 Sep 2023 23:35:39 -0700 Message-ID: <20230920063552.296978-3-saeed@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230920063552.296978-1-saeed@kernel.org> References: <20230920063552.296978-1-saeed@kernel.org> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Jiri Pirko Benefit from the fact that struct devlink_port is eventually embedded in struct mlx5_sf and use container_of() macro to get it instead of the xarray lookup in every devlink port op. Signed-off-by: Jiri Pirko Reviewed-by: Shay Drory Signed-off-by: Saeed Mahameed --- .../ethernet/mellanox/mlx5/core/sf/devlink.c | 44 +++++-------------- 1 file changed, 12 insertions(+), 32 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c index 519033d70e05..b4a373d2ba15 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c @@ -20,6 +20,13 @@ struct mlx5_sf { u16 hw_state; }; +static void *mlx5_sf_by_dl_port(struct devlink_port *dl_port) +{ + struct mlx5_devlink_port *mlx5_dl_port = mlx5_devlink_port_get(dl_port); + + return container_of(mlx5_dl_port, struct mlx5_sf, dl_port); +} + struct mlx5_sf_table { struct mlx5_core_dev *dev; /* To refer from notifier context. */ struct xarray port_indices; /* port index based lookup. */ @@ -31,12 +38,6 @@ struct mlx5_sf_table { struct notifier_block mdev_nb; }; -static struct mlx5_sf * -mlx5_sf_lookup_by_index(struct mlx5_sf_table *table, unsigned int port_index) -{ - return xa_load(&table->port_indices, port_index); -} - static struct mlx5_sf * mlx5_sf_lookup_by_function_id(struct mlx5_sf_table *table, unsigned int fn_id) { @@ -172,26 +173,19 @@ int mlx5_devlink_sf_port_fn_state_get(struct devlink_port *dl_port, struct netlink_ext_ack *extack) { struct mlx5_core_dev *dev = devlink_priv(dl_port->devlink); + struct mlx5_sf *sf = mlx5_sf_by_dl_port(dl_port); struct mlx5_sf_table *table; - struct mlx5_sf *sf; - int err = 0; table = mlx5_sf_table_try_get(dev); if (!table) return -EOPNOTSUPP; - sf = mlx5_sf_lookup_by_index(table, dl_port->index); - if (!sf) { - err = -EOPNOTSUPP; - goto sf_err; - } mutex_lock(&table->sf_state_lock); *state = mlx5_sf_to_devlink_state(sf->hw_state); *opstate = mlx5_sf_to_devlink_opstate(sf->hw_state); mutex_unlock(&table->sf_state_lock); -sf_err: mlx5_sf_table_put(table); - return err; + return 0; } static int mlx5_sf_activate(struct mlx5_core_dev *dev, struct mlx5_sf *sf, @@ -257,8 +251,8 @@ int mlx5_devlink_sf_port_fn_state_set(struct devlink_port *dl_port, struct netlink_ext_ack *extack) { struct mlx5_core_dev *dev = devlink_priv(dl_port->devlink); + struct mlx5_sf *sf = mlx5_sf_by_dl_port(dl_port); struct mlx5_sf_table *table; - struct mlx5_sf *sf; int err; table = mlx5_sf_table_try_get(dev); @@ -267,14 +261,7 @@ int mlx5_devlink_sf_port_fn_state_set(struct devlink_port *dl_port, "Port state set is only supported in eswitch switchdev mode or SF ports are disabled."); return -EOPNOTSUPP; } - sf = mlx5_sf_lookup_by_index(table, dl_port->index); - if (!sf) { - err = -ENODEV; - goto out; - } - err = mlx5_sf_state_set(dev, table, sf, state, extack); -out: 
mlx5_sf_table_put(table); return err; } @@ -385,10 +372,9 @@ int mlx5_devlink_sf_port_del(struct devlink *devlink, struct netlink_ext_ack *extack) { struct mlx5_core_dev *dev = devlink_priv(devlink); + struct mlx5_sf *sf = mlx5_sf_by_dl_port(dl_port); struct mlx5_eswitch *esw = dev->priv.eswitch; struct mlx5_sf_table *table; - struct mlx5_sf *sf; - int err = 0; table = mlx5_sf_table_try_get(dev); if (!table) { @@ -396,20 +382,14 @@ int mlx5_devlink_sf_port_del(struct devlink *devlink, "Port del is only supported in eswitch switchdev mode or SF ports are disabled."); return -EOPNOTSUPP; } - sf = mlx5_sf_lookup_by_index(table, dl_port->index); - if (!sf) { - err = -ENODEV; - goto sf_err; - } mlx5_eswitch_unload_sf_vport(esw, sf->hw_fn_id); mutex_lock(&table->sf_state_lock); mlx5_sf_dealloc(table, sf); mutex_unlock(&table->sf_state_lock); -sf_err: mlx5_sf_table_put(table); - return err; + return 0; } static bool mlx5_sf_state_update_check(const struct mlx5_sf *sf, u8 new_state) From patchwork Wed Sep 20 06:35:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13392124 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 40DAB747B for ; Wed, 20 Sep 2023 06:36:01 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 00B79C433C7; Wed, 20 Sep 2023 06:36:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695191761; bh=yNqS0lHPUy3893jAU/ldlUiTtu911ksJ68gIhhPieuw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=MZMwuxF7IzPrVXT6TeAeCm/YEVk7Vf9fdNiTmc2Hd/B8BPErHriUT0IGa7yC5HTaT wdYoOoihwOXgFssbgjbkPUeqUAvHYB/sS8qCKYgu93hX5m1zlgLC3Lb2j5WrR+kwO8 TKyrpCtweCVzyftAe0rEvl8WTRlcjDKaV/UuNRpDN59XANk8AQRzwDYTnapNQqXHku w5kevgApRhRey3Gf5wdgkFkhsS3oPhE0VeajBprs5vWRpi47PlALfdo7W1oyH/vw9A UgdM6j9RuA1mLrJW0rpGv1c/falf6rpqA8NGzKFeYtCYekcGxSwU9cBv3VqJmqsGTb pOeCcRxwTr/6g== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Jiri Pirko , Shay Drory Subject: [net-next 03/15] net/mlx5: Convert SF port_indices xarray to function_ids xarray Date: Tue, 19 Sep 2023 23:35:40 -0700 Message-ID: <20230920063552.296978-4-saeed@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230920063552.296978-1-saeed@kernel.org> References: <20230920063552.296978-1-saeed@kernel.org> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Jiri Pirko No need to lookup for sf by a port index. Convert the xarray to have function id as an index and optimize the remaining function id based lookup. 
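The gain from re-keying the xarray is that a lookup becomes a direct xa_load() on the function id instead of a linear xa_for_each() scan. A small self-contained sketch of that keying pattern using the kernel xarray API (illustrative entry type and helpers, not the driver structs):

#include <linux/xarray.h>

struct entry {
        u16 hw_fn_id;
        /* ... */
};

static DEFINE_XARRAY(function_ids);        /* keyed by hw_fn_id */

static int entry_insert(struct entry *e)
{
        /* xa_insert() returns -EBUSY if the id is already present. */
        return xa_insert(&function_ids, e->hw_fn_id, e, GFP_KERNEL);
}

static struct entry *entry_lookup(u16 fn_id)
{
        /* Direct load replaces the old xa_for_each() linear search. */
        return xa_load(&function_ids, fn_id);
}

static void entry_erase(struct entry *e)
{
        xa_erase(&function_ids, e->hw_fn_id);
}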
Signed-off-by: Jiri Pirko Reviewed-by: Shay Drory Signed-off-by: Saeed Mahameed --- .../ethernet/mellanox/mlx5/core/sf/devlink.c | 29 +++++++------------ 1 file changed, 11 insertions(+), 18 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c index b4a373d2ba15..78cdfe595a01 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c @@ -29,7 +29,7 @@ static void *mlx5_sf_by_dl_port(struct devlink_port *dl_port) struct mlx5_sf_table { struct mlx5_core_dev *dev; /* To refer from notifier context. */ - struct xarray port_indices; /* port index based lookup. */ + struct xarray function_ids; /* function id based lookup. */ refcount_t refcount; struct completion disable_complete; struct mutex sf_state_lock; /* Serializes sf state among user cmds & vhca event handler. */ @@ -41,24 +41,17 @@ struct mlx5_sf_table { static struct mlx5_sf * mlx5_sf_lookup_by_function_id(struct mlx5_sf_table *table, unsigned int fn_id) { - unsigned long index; - struct mlx5_sf *sf; - - xa_for_each(&table->port_indices, index, sf) { - if (sf->hw_fn_id == fn_id) - return sf; - } - return NULL; + return xa_load(&table->function_ids, fn_id); } -static int mlx5_sf_id_insert(struct mlx5_sf_table *table, struct mlx5_sf *sf) +static int mlx5_sf_function_id_insert(struct mlx5_sf_table *table, struct mlx5_sf *sf) { - return xa_insert(&table->port_indices, sf->port_index, sf, GFP_KERNEL); + return xa_insert(&table->function_ids, sf->hw_fn_id, sf, GFP_KERNEL); } -static void mlx5_sf_id_erase(struct mlx5_sf_table *table, struct mlx5_sf *sf) +static void mlx5_sf_function_id_erase(struct mlx5_sf_table *table, struct mlx5_sf *sf) { - xa_erase(&table->port_indices, sf->port_index); + xa_erase(&table->function_ids, sf->hw_fn_id); } static struct mlx5_sf * @@ -95,7 +88,7 @@ mlx5_sf_alloc(struct mlx5_sf_table *table, struct mlx5_eswitch *esw, sf->hw_state = MLX5_VHCA_STATE_ALLOCATED; sf->controller = controller; - err = mlx5_sf_id_insert(table, sf); + err = mlx5_sf_function_id_insert(table, sf); if (err) goto insert_err; @@ -348,7 +341,7 @@ int mlx5_devlink_sf_port_new(struct devlink *devlink, static void mlx5_sf_dealloc(struct mlx5_sf_table *table, struct mlx5_sf *sf) { - mlx5_sf_id_erase(table, sf); + mlx5_sf_function_id_erase(table, sf); if (sf->hw_state == MLX5_VHCA_STATE_ALLOCATED) { mlx5_sf_free(table, sf); @@ -452,7 +445,7 @@ static void mlx5_sf_deactivate_all(struct mlx5_sf_table *table) /* At this point, no new user commands can start and no vhca event can * arrive. It is safe to destroy all user created SFs. 
*/ - xa_for_each(&table->port_indices, index, sf) { + xa_for_each(&table->function_ids, index, sf) { mlx5_eswitch_unload_sf_vport(esw, sf->hw_fn_id); mlx5_sf_dealloc(table, sf); } @@ -540,7 +533,7 @@ int mlx5_sf_table_init(struct mlx5_core_dev *dev) mutex_init(&table->sf_state_lock); table->dev = dev; - xa_init(&table->port_indices); + xa_init(&table->function_ids); dev->priv.sf_table = table; refcount_set(&table->refcount, 0); table->esw_nb.notifier_call = mlx5_sf_esw_event; @@ -579,6 +572,6 @@ void mlx5_sf_table_cleanup(struct mlx5_core_dev *dev) mlx5_esw_event_notifier_unregister(dev->priv.eswitch, &table->esw_nb); WARN_ON(refcount_read(&table->refcount)); mutex_destroy(&table->sf_state_lock); - WARN_ON(!xa_empty(&table->port_indices)); + WARN_ON(!xa_empty(&table->function_ids)); kfree(table); } From patchwork Wed Sep 20 06:35:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13392125 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3EE9E8475 for ; Wed, 20 Sep 2023 06:36:02 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id F1145C433C8; Wed, 20 Sep 2023 06:36:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695191762; bh=cLBLe7sysdezFUApsta72Mo5zskijgbOxtYj0dZXZec=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=iy9+mUIVSgJFDmbJ1UnfsEeDULR63ANI3dvfeV40G2F7JAC+NZQTOjOBkdlbffe4+ 6HcivxyGgQgmzV5k2VJjgTfK2LApZc+nOlGXxw/lO/8iGzHptsT0kplJrPOO+w9p5G /QNMlYuxJFIjaClsigIw120ZH727iV8W5u+zMekWCVZQpFZQqgpJXmbShxmSK0eHF/ PhWA7BhiGLt8SIxBfXXusQPrf9IQk2XoHIOqjwinY5sPDqI+pmImPIgKd7xI2Pq6JE 9jVxj6ITWwhXjMhoJ17GUu3eYj3UchzHcywHA6DBW1Nny7x5hhhv1Z1xlFBVdORzRn Twz/QGcEFlesQ== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Jiri Pirko , Shay Drory Subject: [net-next 04/15] net/mlx5: Move state lock taking into mlx5_sf_dealloc() Date: Tue, 19 Sep 2023 23:35:41 -0700 Message-ID: <20230920063552.296978-5-saeed@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230920063552.296978-1-saeed@kernel.org> References: <20230920063552.296978-1-saeed@kernel.org> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Jiri Pirko Instead of taking lock and calling mlx5_sf_dealloc(), move the lock taking into mlx5_sf_dealloc(). The other caller of mlx5_sf_dealloc() does not need it now, but will need it after a follow-up patch removing the table reference counting. 
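Pushing the lock into mlx5_sf_dealloc() keeps the locking rule next to the state it protects, so every present and future caller takes it automatically. The shape is simply (reduced sketch, not the driver code):

static void sf_dealloc(struct sf_table *table, struct sf *sf)
{
        mutex_lock(&table->sf_state_lock);
        /* erase the function id, then free or defer-free based on hw_state */
        mutex_unlock(&table->sf_state_lock);
}

This only works because no caller enters the function with sf_state_lock already held; the old lock/unlock pair around the call in the port-del path is dropped in the same patch.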
Signed-off-by: Jiri Pirko Reviewed-by: Shay Drory Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c index 78cdfe595a01..bed3fe8759d2 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c @@ -341,6 +341,8 @@ int mlx5_devlink_sf_port_new(struct devlink *devlink, static void mlx5_sf_dealloc(struct mlx5_sf_table *table, struct mlx5_sf *sf) { + mutex_lock(&table->sf_state_lock); + mlx5_sf_function_id_erase(table, sf); if (sf->hw_state == MLX5_VHCA_STATE_ALLOCATED) { @@ -358,6 +360,8 @@ static void mlx5_sf_dealloc(struct mlx5_sf_table *table, struct mlx5_sf *sf) mlx5_sf_hw_table_sf_deferred_free(table->dev, sf->controller, sf->id); kfree(sf); } + + mutex_unlock(&table->sf_state_lock); } int mlx5_devlink_sf_port_del(struct devlink *devlink, @@ -377,10 +381,7 @@ int mlx5_devlink_sf_port_del(struct devlink *devlink, } mlx5_eswitch_unload_sf_vport(esw, sf->hw_fn_id); - - mutex_lock(&table->sf_state_lock); mlx5_sf_dealloc(table, sf); - mutex_unlock(&table->sf_state_lock); mlx5_sf_table_put(table); return 0; } From patchwork Wed Sep 20 06:35:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13392126 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 98E5A8475 for ; Wed, 20 Sep 2023 06:36:03 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0DFEBC433CA; Wed, 20 Sep 2023 06:36:03 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695191763; bh=KlvOUU0P6aIhdXDuJ8/ABMzAhvqzUWvtyCx8rBoO9y8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=dAHz5G5eppnwLyRbS4/KLyT/7jKzFB2Z06bdfZ4pkwSo2W+X/pVZcLtR3Q6jkNO5/ Hb9ExHmzNvtrkJ9ZaUDpJilGff3gYonYhRaqYDk7i0sDsGA92r9kVKBVXfE7fFy3Aa L0X9lQJVfMvhRHMwn4C8glgpSbcot1ZPNT8eATlE9d5A0z3mDMjRKRnzyj0Y/5Ionf APzsfxvPPKiZAa2yVVzdv2+QSNWmxvgF9r1Fwwaand3Ihm2cO5jvfsTAHw3Q9biyIC KMDE5WqBiZG7Xu/6vzDGXn6vyKQlzeTonXk7VIUfz+dcFHUX6GUYNJb5RS4fu/Qnx6 Kmk1uvzFiIO8Q== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Jiri Pirko , Shay Drory Subject: [net-next 05/15] net/mlx5: Rename mlx5_sf_deactivate_all() to mlx5_sf_del_all() Date: Tue, 19 Sep 2023 23:35:42 -0700 Message-ID: <20230920063552.296978-6-saeed@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230920063552.296978-1-saeed@kernel.org> References: <20230920063552.296978-1-saeed@kernel.org> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Jiri Pirko The function does not do deactivation, but it deletes all SFs instead. Rename accordingly. 
Signed-off-by: Jiri Pirko Reviewed-by: Shay Drory Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c index bed3fe8759d2..454185ef04f3 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c @@ -437,7 +437,7 @@ static void mlx5_sf_table_enable(struct mlx5_sf_table *table) refcount_set(&table->refcount, 1); } -static void mlx5_sf_deactivate_all(struct mlx5_sf_table *table) +static void mlx5_sf_del_all(struct mlx5_sf_table *table) { struct mlx5_eswitch *esw = table->dev->priv.eswitch; unsigned long index; @@ -463,7 +463,7 @@ static void mlx5_sf_table_disable(struct mlx5_sf_table *table) mlx5_sf_table_put(table); wait_for_completion(&table->disable_complete); - mlx5_sf_deactivate_all(table); + mlx5_sf_del_all(table); } static int mlx5_sf_esw_event(struct notifier_block *nb, unsigned long event, void *data) From patchwork Wed Sep 20 06:35:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13392127 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 59F638829 for ; Wed, 20 Sep 2023 06:36:04 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 19B34C433C7; Wed, 20 Sep 2023 06:36:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695191764; bh=CFAZHzZlvqK29fZgZPM3jTFFiD8PtosX4Hn5rBBtQtM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=GKO0HbL0sVd9+vUjXiYnxNghzqKz640643dzmWDAue6y7KJx2YRMXCyzXYx6GyejT cbW4n2DFa52TeTwpGEWir30sUB0HuQkc9Q6rEaJWA5riF/mp3ARG1WsILSGvVnQtnK 0GtCfn9XM0adkvHWrVXZWTOHfi4s2dyB8dW+m00rvTNXrAL+fmEv3Wpm7rs6+MB4aQ yaFBLEkp/Aq5ub5bzGPaWN1g/kvcPNHjoNVGpBq+yJFeaXxPdfxDuQxdLs48yVmIMA 1xrAzv25VIDsm0+/YExMpBXkbtmfbirHupP5+B/7VOm6Ep415iolTThjSjzyHDqgY7 AD8K8QPqvdcYg== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Jiri Pirko , Shay Drory Subject: [net-next 06/15] net/mlx5: Push common deletion code into mlx5_sf_del() Date: Tue, 19 Sep 2023 23:35:43 -0700 Message-ID: <20230920063552.296978-7-saeed@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230920063552.296978-1-saeed@kernel.org> References: <20230920063552.296978-1-saeed@kernel.org> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Jiri Pirko Don't call the same functions for SF deletion on multiple places. Instead, introduce a helper mlx5_sf_del() and move the code there. 
Signed-off-by: Jiri Pirko Reviewed-by: Shay Drory Signed-off-by: Saeed Mahameed --- .../ethernet/mellanox/mlx5/core/sf/devlink.c | 19 +++++++++++-------- 1 file changed, 11 insertions(+), 8 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c index 454185ef04f3..c8a043b2a8e0 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c @@ -364,13 +364,20 @@ static void mlx5_sf_dealloc(struct mlx5_sf_table *table, struct mlx5_sf *sf) mutex_unlock(&table->sf_state_lock); } +static void mlx5_sf_del(struct mlx5_sf_table *table, struct mlx5_sf *sf) +{ + struct mlx5_eswitch *esw = table->dev->priv.eswitch; + + mlx5_eswitch_unload_sf_vport(esw, sf->hw_fn_id); + mlx5_sf_dealloc(table, sf); +} + int mlx5_devlink_sf_port_del(struct devlink *devlink, struct devlink_port *dl_port, struct netlink_ext_ack *extack) { struct mlx5_core_dev *dev = devlink_priv(devlink); struct mlx5_sf *sf = mlx5_sf_by_dl_port(dl_port); - struct mlx5_eswitch *esw = dev->priv.eswitch; struct mlx5_sf_table *table; table = mlx5_sf_table_try_get(dev); @@ -380,8 +387,7 @@ int mlx5_devlink_sf_port_del(struct devlink *devlink, return -EOPNOTSUPP; } - mlx5_eswitch_unload_sf_vport(esw, sf->hw_fn_id); - mlx5_sf_dealloc(table, sf); + mlx5_sf_del(table, sf); mlx5_sf_table_put(table); return 0; } @@ -439,17 +445,14 @@ static void mlx5_sf_table_enable(struct mlx5_sf_table *table) static void mlx5_sf_del_all(struct mlx5_sf_table *table) { - struct mlx5_eswitch *esw = table->dev->priv.eswitch; unsigned long index; struct mlx5_sf *sf; /* At this point, no new user commands can start and no vhca event can * arrive. It is safe to destroy all user created SFs. */ - xa_for_each(&table->function_ids, index, sf) { - mlx5_eswitch_unload_sf_vport(esw, sf->hw_fn_id); - mlx5_sf_dealloc(table, sf); - } + xa_for_each(&table->function_ids, index, sf) + mlx5_sf_del(table, sf); } static void mlx5_sf_table_disable(struct mlx5_sf_table *table) From patchwork Wed Sep 20 06:35:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13392128 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B64108C0A for ; Wed, 20 Sep 2023 06:36:05 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 18C10C433CB; Wed, 20 Sep 2023 06:36:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695191765; bh=DOxJwd7omEDtKXyM56jAYrjnDX4Zj3wpEdjtqI9iL5A=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=JMKCFnfNcs6btEh2o4s6h27PpwXbeWBtegAWFm2AxlkxIxsAUv8xS7sWbUCQbMI74 BPgr5Yx5Ju1coEASzjEtfmtgH2YCV7ixZ3DEcltHSR3d5izzRHjsFr03fvHtYK22Xc YcZmTC0S/u1iWYTQOcGQoUJZmwjswgwbQttmMLpDg7hu0BYpuiSV3xgICSnNFWrjjQ xMg+nAS/dlo+vSMM3oL2nHqtKs9ngCBhcdpB/UIQPtrwEiC3j5wgq5/8heZuPc4dzq MsD8AUxXKVcAKBj+BXSm5lG0PErtqT+c6GxOrRbkGMkX78qGq7jz3vDY9x5nWiC5Qz S86NpKcCB/Mzg== From: Saeed Mahameed To: "David S. 
Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Jiri Pirko , Shay Drory Subject: [net-next 07/15] net/mlx5: Remove SF table reference counting Date: Tue, 19 Sep 2023 23:35:44 -0700 Message-ID: <20230920063552.296978-8-saeed@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230920063552.296978-1-saeed@kernel.org> References: <20230920063552.296978-1-saeed@kernel.org> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Jiri Pirko Historically, the SF table reference counting was present in order to protect parallel executions of devlink ops. However, since currently this is protected with devlink instance lock, the SF table reference counting is no longer needed. Remove it entirely. Signed-off-by: Jiri Pirko Reviewed-by: Shay Drory Signed-off-by: Saeed Mahameed --- .../ethernet/mellanox/mlx5/core/sf/devlink.c | 120 ++++-------------- 1 file changed, 23 insertions(+), 97 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c index c8a043b2a8e0..6c11e075cab0 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c @@ -30,8 +30,6 @@ static void *mlx5_sf_by_dl_port(struct devlink_port *dl_port) struct mlx5_sf_table { struct mlx5_core_dev *dev; /* To refer from notifier context. */ struct xarray function_ids; /* function id based lookup. */ - refcount_t refcount; - struct completion disable_complete; struct mutex sf_state_lock; /* Serializes sf state among user cmds & vhca event handler. */ struct notifier_block esw_nb; struct notifier_block vhca_nb; @@ -111,22 +109,6 @@ static void mlx5_sf_free(struct mlx5_sf_table *table, struct mlx5_sf *sf) kfree(sf); } -static struct mlx5_sf_table *mlx5_sf_table_try_get(struct mlx5_core_dev *dev) -{ - struct mlx5_sf_table *table = dev->priv.sf_table; - - if (!table) - return NULL; - - return refcount_inc_not_zero(&table->refcount) ? 
table : NULL; -} - -static void mlx5_sf_table_put(struct mlx5_sf_table *table) -{ - if (refcount_dec_and_test(&table->refcount)) - complete(&table->disable_complete); -} - static enum devlink_port_fn_state mlx5_sf_to_devlink_state(u8 hw_state) { switch (hw_state) { @@ -166,18 +148,13 @@ int mlx5_devlink_sf_port_fn_state_get(struct devlink_port *dl_port, struct netlink_ext_ack *extack) { struct mlx5_core_dev *dev = devlink_priv(dl_port->devlink); + struct mlx5_sf_table *table = dev->priv.sf_table; struct mlx5_sf *sf = mlx5_sf_by_dl_port(dl_port); - struct mlx5_sf_table *table; - - table = mlx5_sf_table_try_get(dev); - if (!table) - return -EOPNOTSUPP; mutex_lock(&table->sf_state_lock); *state = mlx5_sf_to_devlink_state(sf->hw_state); *opstate = mlx5_sf_to_devlink_opstate(sf->hw_state); mutex_unlock(&table->sf_state_lock); - mlx5_sf_table_put(table); return 0; } @@ -244,19 +221,10 @@ int mlx5_devlink_sf_port_fn_state_set(struct devlink_port *dl_port, struct netlink_ext_ack *extack) { struct mlx5_core_dev *dev = devlink_priv(dl_port->devlink); + struct mlx5_sf_table *table = dev->priv.sf_table; struct mlx5_sf *sf = mlx5_sf_by_dl_port(dl_port); - struct mlx5_sf_table *table; - int err; - table = mlx5_sf_table_try_get(dev); - if (!table) { - NL_SET_ERR_MSG_MOD(extack, - "Port state set is only supported in eswitch switchdev mode or SF ports are disabled."); - return -EOPNOTSUPP; - } - err = mlx5_sf_state_set(dev, table, sf, state, extack); - mlx5_sf_table_put(table); - return err; + return mlx5_sf_state_set(dev, table, sf, state, extack); } static int mlx5_sf_add(struct mlx5_core_dev *dev, struct mlx5_sf_table *table, @@ -315,28 +283,37 @@ mlx5_sf_new_check_attr(struct mlx5_core_dev *dev, const struct devlink_port_new_ return 0; } +static bool mlx5_sf_table_supported(const struct mlx5_core_dev *dev) +{ + return dev->priv.eswitch && MLX5_ESWITCH_MANAGER(dev) && + mlx5_sf_hw_table_supported(dev); +} + int mlx5_devlink_sf_port_new(struct devlink *devlink, const struct devlink_port_new_attrs *new_attr, struct netlink_ext_ack *extack, struct devlink_port **dl_port) { struct mlx5_core_dev *dev = devlink_priv(devlink); - struct mlx5_sf_table *table; + struct mlx5_sf_table *table = dev->priv.sf_table; int err; err = mlx5_sf_new_check_attr(dev, new_attr, extack); if (err) return err; - table = mlx5_sf_table_try_get(dev); - if (!table) { + if (!mlx5_sf_table_supported(dev)) { + NL_SET_ERR_MSG_MOD(extack, "SF ports are not supported."); + return -EOPNOTSUPP; + } + + if (!is_mdev_switchdev_mode(dev)) { NL_SET_ERR_MSG_MOD(extack, - "Port add is only supported in eswitch switchdev mode or SF ports are disabled."); + "SF ports are only supported in eswitch switchdev mode."); return -EOPNOTSUPP; } - err = mlx5_sf_add(dev, table, new_attr, extack, dl_port); - mlx5_sf_table_put(table); - return err; + + return mlx5_sf_add(dev, table, new_attr, extack, dl_port); } static void mlx5_sf_dealloc(struct mlx5_sf_table *table, struct mlx5_sf *sf) @@ -377,18 +354,10 @@ int mlx5_devlink_sf_port_del(struct devlink *devlink, struct netlink_ext_ack *extack) { struct mlx5_core_dev *dev = devlink_priv(devlink); + struct mlx5_sf_table *table = dev->priv.sf_table; struct mlx5_sf *sf = mlx5_sf_by_dl_port(dl_port); - struct mlx5_sf_table *table; - - table = mlx5_sf_table_try_get(dev); - if (!table) { - NL_SET_ERR_MSG_MOD(extack, - "Port del is only supported in eswitch switchdev mode or SF ports are disabled."); - return -EOPNOTSUPP; - } mlx5_sf_del(table, sf); - mlx5_sf_table_put(table); return 0; } @@ -414,14 +383,10 @@ static 
int mlx5_sf_vhca_event(struct notifier_block *nb, unsigned long opcode, v bool update = false; struct mlx5_sf *sf; - table = mlx5_sf_table_try_get(table->dev); - if (!table) - return 0; - mutex_lock(&table->sf_state_lock); sf = mlx5_sf_lookup_by_function_id(table, event->function_id); if (!sf) - goto sf_err; + goto unlock; /* When driver is attached or detached to a function, an event * notifies such state change. @@ -431,55 +396,28 @@ static int mlx5_sf_vhca_event(struct notifier_block *nb, unsigned long opcode, v sf->hw_state = event->new_vhca_state; trace_mlx5_sf_update_state(table->dev, sf->port_index, sf->controller, sf->hw_fn_id, sf->hw_state); -sf_err: +unlock: mutex_unlock(&table->sf_state_lock); - mlx5_sf_table_put(table); return 0; } -static void mlx5_sf_table_enable(struct mlx5_sf_table *table) -{ - init_completion(&table->disable_complete); - refcount_set(&table->refcount, 1); -} - static void mlx5_sf_del_all(struct mlx5_sf_table *table) { unsigned long index; struct mlx5_sf *sf; - /* At this point, no new user commands can start and no vhca event can - * arrive. It is safe to destroy all user created SFs. - */ xa_for_each(&table->function_ids, index, sf) mlx5_sf_del(table, sf); } -static void mlx5_sf_table_disable(struct mlx5_sf_table *table) -{ - if (!refcount_read(&table->refcount)) - return; - - /* Balances with refcount_set; drop the reference so that new user cmd cannot start - * and new vhca event handler cannot run. - */ - mlx5_sf_table_put(table); - wait_for_completion(&table->disable_complete); - - mlx5_sf_del_all(table); -} - static int mlx5_sf_esw_event(struct notifier_block *nb, unsigned long event, void *data) { struct mlx5_sf_table *table = container_of(nb, struct mlx5_sf_table, esw_nb); const struct mlx5_esw_event_info *mode = data; switch (mode->new_mode) { - case MLX5_ESWITCH_OFFLOADS: - mlx5_sf_table_enable(table); - break; case MLX5_ESWITCH_LEGACY: - mlx5_sf_table_disable(table); + mlx5_sf_del_all(table); break; default: break; @@ -498,9 +436,6 @@ static int mlx5_sf_mdev_event(struct notifier_block *nb, unsigned long event, vo if (event != MLX5_DRIVER_EVENT_SF_PEER_DEVLINK) return NOTIFY_DONE; - table = mlx5_sf_table_try_get(table->dev); - if (!table) - return NOTIFY_DONE; mutex_lock(&table->sf_state_lock); sf = mlx5_sf_lookup_by_function_id(table, event_ctx->fn_id); @@ -513,16 +448,9 @@ static int mlx5_sf_mdev_event(struct notifier_block *nb, unsigned long event, vo ret = NOTIFY_OK; out: mutex_unlock(&table->sf_state_lock); - mlx5_sf_table_put(table); return ret; } -static bool mlx5_sf_table_supported(const struct mlx5_core_dev *dev) -{ - return dev->priv.eswitch && MLX5_ESWITCH_MANAGER(dev) && - mlx5_sf_hw_table_supported(dev); -} - int mlx5_sf_table_init(struct mlx5_core_dev *dev) { struct mlx5_sf_table *table; @@ -539,7 +467,6 @@ int mlx5_sf_table_init(struct mlx5_core_dev *dev) table->dev = dev; xa_init(&table->function_ids); dev->priv.sf_table = table; - refcount_set(&table->refcount, 0); table->esw_nb.notifier_call = mlx5_sf_esw_event; err = mlx5_esw_event_notifier_register(dev->priv.eswitch, &table->esw_nb); if (err) @@ -574,7 +501,6 @@ void mlx5_sf_table_cleanup(struct mlx5_core_dev *dev) mlx5_blocking_notifier_unregister(dev, &table->mdev_nb); mlx5_vhca_event_notifier_unregister(table->dev, &table->vhca_nb); mlx5_esw_event_notifier_unregister(dev->priv.eswitch, &table->esw_nb); - WARN_ON(refcount_read(&table->refcount)); mutex_destroy(&table->sf_state_lock); WARN_ON(!xa_empty(&table->function_ids)); kfree(table); From patchwork Wed Sep 20 06:35:45 
2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13392129 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4C5228C19 for ; Wed, 20 Sep 2023 06:36:06 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0E3AEC433CD; Wed, 20 Sep 2023 06:36:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695191766; bh=+xsryfgaPixIgPNogVTJCa8i5jkYie0rwQbaXU9Xexg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ZuBdjc3KbeX0jC6mvZwqaTskS3gajwvNDfNX3dh7T8YSBWm93LnLIIBENSkFsw2ka Cl8I7mlOfM20r9SPn7nf2CxODu6XUi/wGu20exhDi3oUcau7xnDSQ0iODQ7kdst05K gR2jRtZSGrS7HW2AfaHR6Jozq4/HIhlroHZXrSUddCFzhNufbYdYIj6+S3xl6X/RN9 nMams/eh8ByCfBN3AiGXyzPlWYJLZtvx6kRPkapfT3er8mOvLf5jpU8+mipHtDyCrj S1iuwgYqlPantKgqJwFubwnsXTDhSlEk5jZXesA7pjq+l84Uihnlvi1EiXOJaQb0QO gySPz/Ncv0F8g== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Jiri Pirko , Shay Drory Subject: [net-next 08/15] net/mlx5: Remove redundant max_sfs check and field from struct mlx5_sf_dev_table Date: Tue, 19 Sep 2023 23:35:45 -0700 Message-ID: <20230920063552.296978-9-saeed@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230920063552.296978-1-saeed@kernel.org> References: <20230920063552.296978-1-saeed@kernel.org> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Jiri Pirko table->max_sfs is initialized in mlx5_sf_dev_table_create() and only used to check for 0 in mlx5_sf_dev_add(). mlx5_sf_dev_add() is called either from mlx5_sf_dev_state_change_handler() or mlx5_sf_dev_add_active_work(). Both ensure max SF count is not 0, using mlx5_sf_max_functions() helper before calling mlx5_sf_dev_add(). So remove the redundant check and no longer used max_sfs field. 
Signed-off-by: Jiri Pirko Reviewed-by: Shay Drory Signed-off-by: Saeed Mahameed --- .../net/ethernet/mellanox/mlx5/core/sf/dev/dev.c | 15 --------------- 1 file changed, 15 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/dev.c b/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/dev.c index 05e148db9889..0f9b280514b8 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/dev.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/dev.c @@ -14,7 +14,6 @@ struct mlx5_sf_dev_table { struct xarray devices; - unsigned int max_sfs; phys_addr_t base_address; u64 sf_bar_length; struct notifier_block nb; @@ -110,12 +109,6 @@ static void mlx5_sf_dev_add(struct mlx5_core_dev *dev, u16 sf_index, u16 fn_id, sf_dev->parent_mdev = dev; sf_dev->fn_id = fn_id; - if (!table->max_sfs) { - mlx5_adev_idx_free(id); - kfree(sf_dev); - err = -EOPNOTSUPP; - goto add_err; - } sf_dev->bar_base_addr = table->base_address + (sf_index * table->sf_bar_length); trace_mlx5_sf_dev_add(dev, sf_dev, id); @@ -296,7 +289,6 @@ static void mlx5_sf_dev_destroy_active_work(struct mlx5_sf_dev_table *table) void mlx5_sf_dev_table_create(struct mlx5_core_dev *dev) { struct mlx5_sf_dev_table *table; - unsigned int max_sfs; int err; if (!mlx5_sf_dev_supported(dev)) @@ -310,13 +302,8 @@ void mlx5_sf_dev_table_create(struct mlx5_core_dev *dev) table->nb.notifier_call = mlx5_sf_dev_state_change_handler; table->dev = dev; - if (MLX5_CAP_GEN(dev, max_num_sf)) - max_sfs = MLX5_CAP_GEN(dev, max_num_sf); - else - max_sfs = 1 << MLX5_CAP_GEN(dev, log_max_sf); table->sf_bar_length = 1 << (MLX5_CAP_GEN(dev, log_min_sf_size) + 12); table->base_address = pci_resource_start(dev->pdev, 2); - table->max_sfs = max_sfs; xa_init(&table->devices); mutex_init(&table->table_lock); dev->priv.sf_dev_table = table; @@ -332,7 +319,6 @@ void mlx5_sf_dev_table_create(struct mlx5_core_dev *dev) err = mlx5_sf_dev_vhca_arm_all(table); if (err) goto arm_err; - mlx5_core_dbg(dev, "SF DEV: max sf devices=%d\n", max_sfs); return; arm_err: @@ -340,7 +326,6 @@ void mlx5_sf_dev_table_create(struct mlx5_core_dev *dev) add_active_err: mlx5_vhca_event_notifier_unregister(dev, &table->nb); vhca_err: - table->max_sfs = 0; kfree(table); dev->priv.sf_dev_table = NULL; table_err: From patchwork Wed Sep 20 06:35:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13392130 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5E4008F72 for ; Wed, 20 Sep 2023 06:36:07 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 18140C433C7; Wed, 20 Sep 2023 06:36:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695191767; bh=/PbfMOZhJMNV00rw/gz9c/Tmp0rtYfzGzyUD04wIurw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=spMCw0CoETkIyZVLQNgMPIteHd5D88TkkkY2yy4lNmN0NUs3Oet2MvZXmcUFhDhf2 hLZW8SwTKTOVmYSbOWHzBOqvgVtkzC7cLDc2mx8Fi6o4kfYOzrNkV1hMuoDz98Dhfj 68U7QzQ2ZX3NWNZLL1wSUxb+ntlJrrnDKmC74CaTwuidruXBkmfk5TOo3tlEmpNToK amNY8OiS1oCzjkWGWM/6xy6UNOI4pLURIjKZFjTuVhEeOpVYR2netkpg7q+Z9ZRqtv XMGYm5TcIePD4MIX4u79jPoPc14yYx3iPaMtUIunGR3rFeCmeTKYlCM8EdBjWAZ3gM bfWFvjHWgTZzQ== From: Saeed Mahameed To: "David S. 
Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Jianbo Liu , Parav Pandit , Dragos Tatulea Subject: [net-next 09/15] net/mlx5e: Consider aggregated port speed during rate configuration Date: Tue, 19 Sep 2023 23:35:46 -0700 Message-ID: <20230920063552.296978-10-saeed@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230920063552.296978-1-saeed@kernel.org> References: <20230920063552.296978-1-saeed@kernel.org> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Jianbo Liu When LAG is configured, functions (PF,VF,SF) can utilize the maximum aggregated link speed for transmission. Currently the aggregated link speed is not considered. Hence, improve it to use the aggregated link speed by referring to the physical port's upper bonding device when LAG is configured. Signed-off-by: Jianbo Liu Reviewed-by: Parav Pandit Reviewed-by: Dragos Tatulea Signed-off-by: Saeed Mahameed --- .../net/ethernet/mellanox/mlx5/core/esw/qos.c | 84 ++++++++++++++++--- 1 file changed, 72 insertions(+), 12 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c index 1887a24ee414..f76c8f0562e9 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c @@ -2,6 +2,7 @@ /* Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */ #include "eswitch.h" +#include "lib/mlx5.h" #include "esw/qos.h" #include "en/port.h" #define CREATE_TRACE_POINTS @@ -701,6 +702,70 @@ int mlx5_esw_qos_set_vport_rate(struct mlx5_eswitch *esw, struct mlx5_vport *vpo return err; } +static u32 mlx5_esw_qos_lag_link_speed_get_locked(struct mlx5_core_dev *mdev) +{ + struct ethtool_link_ksettings lksettings; + struct net_device *slave, *master; + u32 speed = SPEED_UNKNOWN; + + /* Lock ensures a stable reference to master and slave netdevice + * while port speed of master is queried. 
+ */ + ASSERT_RTNL(); + + slave = mlx5_uplink_netdev_get(mdev); + if (!slave) + goto out; + + master = netdev_master_upper_dev_get(slave); + if (master && !__ethtool_get_link_ksettings(master, &lksettings)) + speed = lksettings.base.speed; + +out: + return speed; +} + +static int mlx5_esw_qos_max_link_speed_get(struct mlx5_core_dev *mdev, u32 *link_speed_max, + bool hold_rtnl_lock, struct netlink_ext_ack *extack) +{ + int err; + + if (!mlx5_lag_is_active(mdev)) + goto skip_lag; + + if (hold_rtnl_lock) + rtnl_lock(); + + *link_speed_max = mlx5_esw_qos_lag_link_speed_get_locked(mdev); + + if (hold_rtnl_lock) + rtnl_unlock(); + + if (*link_speed_max != (u32)SPEED_UNKNOWN) + return 0; + +skip_lag: + err = mlx5_port_max_linkspeed(mdev, link_speed_max); + if (err) + NL_SET_ERR_MSG_MOD(extack, "Failed to get link maximum speed"); + + return err; +} + +static int mlx5_esw_qos_link_speed_verify(struct mlx5_core_dev *mdev, + const char *name, u32 link_speed_max, + u64 value, struct netlink_ext_ack *extack) +{ + if (value > link_speed_max) { + pr_err("%s rate value %lluMbps exceed link maximum speed %u.\n", + name, value, link_speed_max); + NL_SET_ERR_MSG_MOD(extack, "TX rate value exceed link maximum speed"); + return -EINVAL; + } + + return 0; +} + int mlx5_esw_qos_modify_vport_rate(struct mlx5_eswitch *esw, u16 vport_num, u32 rate_mbps) { u32 ctx[MLX5_ST_SZ_DW(scheduling_context)] = {}; @@ -744,12 +809,6 @@ static int esw_qos_devlink_rate_to_mbps(struct mlx5_core_dev *mdev, const char * u64 value; int err; - err = mlx5_port_max_linkspeed(mdev, &link_speed_max); - if (err) { - NL_SET_ERR_MSG_MOD(extack, "Failed to get link maximum speed"); - return err; - } - value = div_u64_rem(*rate, MLX5_LINKSPEED_UNIT, &remainder); if (remainder) { pr_err("%s rate value %lluBps not in link speed units of 1Mbps.\n", @@ -758,12 +817,13 @@ static int esw_qos_devlink_rate_to_mbps(struct mlx5_core_dev *mdev, const char * return -EINVAL; } - if (value > link_speed_max) { - pr_err("%s rate value %lluMbps exceed link maximum speed %u.\n", - name, value, link_speed_max); - NL_SET_ERR_MSG_MOD(extack, "TX rate value exceed link maximum speed"); - return -EINVAL; - } + err = mlx5_esw_qos_max_link_speed_get(mdev, &link_speed_max, true, extack); + if (err) + return err; + + err = mlx5_esw_qos_link_speed_verify(mdev, name, link_speed_max, value, extack); + if (err) + return err; *rate = value; return 0; From patchwork Wed Sep 20 06:35:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13392131 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5A3BDC13D for ; Wed, 20 Sep 2023 06:36:08 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 18A06C433CC; Wed, 20 Sep 2023 06:36:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695191768; bh=5cNcDJukxUqhZtOP1tOQ5D7Z3jgZBxscbObP9pX8/Q0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=TH5uBpz62nn9IA4Ytp0MG5kTUmBcPg96UjMUGIzIQigze9y8Mr3JMmdJReF1vgQ/g VjJvE5hmhK0sZOyjWhp26sna0LH5ad9K8fYLOqM2W1cgM0YVk4c1dTXWGW8YQdIHzs EZiPChExEAvEjZU+Kt7+vF+QxH4jqGrT6I1xAP2x27rESNO0gVy95bEwPmEv+yURNY qsWwTZrIeD7PHo2a5CnU64GC1R0b1hd2RHzJ96LTMZsuC8LDIE64E5pjzjwi/MwRD3 
NSKyz0MaX28h9vHtGu0E1iJ24N3vKTmuENHbDZeAK/zK9pxNY+rMFtlhtdngb8ox9W YhOrWlOAq7CEg== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Jianbo Liu , Parav Pandit , Dragos Tatulea Subject: [net-next 10/15] net/mlx5e: Check police action rate for matchall filter Date: Tue, 19 Sep 2023 23:35:47 -0700 Message-ID: <20230920063552.296978-11-saeed@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230920063552.296978-1-saeed@kernel.org> References: <20230920063552.296978-1-saeed@kernel.org> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Jianbo Liu As matchall filter uses TSAR (Transmit Scheduling Arbiter) for rate limit, the rate of police action should not be over the port's max link speed, or the maximum aggregated speed of both ports if LAG is configured. Signed-off-by: Jianbo Liu Reviewed-by: Parav Pandit Reviewed-by: Dragos Tatulea Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c index f76c8f0562e9..d2ebe56c3977 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c @@ -770,6 +770,7 @@ int mlx5_esw_qos_modify_vport_rate(struct mlx5_eswitch *esw, u16 vport_num, u32 { u32 ctx[MLX5_ST_SZ_DW(scheduling_context)] = {}; struct mlx5_vport *vport; + u32 link_speed_max; u32 bitmask; int err; @@ -777,6 +778,17 @@ int mlx5_esw_qos_modify_vport_rate(struct mlx5_eswitch *esw, u16 vport_num, u32 if (IS_ERR(vport)) return PTR_ERR(vport); + if (rate_mbps) { + err = mlx5_esw_qos_max_link_speed_get(esw->dev, &link_speed_max, false, NULL); + if (err) + return err; + + err = mlx5_esw_qos_link_speed_verify(esw->dev, "Police", + link_speed_max, rate_mbps, NULL); + if (err) + return err; + } + mutex_lock(&esw->state_lock); if (!vport->qos.enabled) { /* Eswitch QoS wasn't enabled yet. Enable it and vport QoS. */ From patchwork Wed Sep 20 06:35:48 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13392132 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5D658C13D for ; Wed, 20 Sep 2023 06:36:09 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 20FBAC433C9; Wed, 20 Sep 2023 06:36:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695191769; bh=tNng2k+y2H0WRG39MPaMpRplIFtogBg1r85RRsvfOrY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=D1vM/DXMn7U9q+e1G1G6st/RRVR+Ntmmu2P7ZAQyTuX5VJH9g3l9u+nXkymgc9fzz Kx5zHPWQXdYYHfOXRcEZa8/IPwMBu5tto6BNYgsoAlRD/8Ia2gnKJ8Iz4jSUybUXJf JVym+uLMuuidXGWUoJ4J3/6yw1mYsn5sKi1wR6p/sYXfvO9ntAaoTP2Xjv3/xPfMCb zc+Irjbr5XnSFLV/pJ3+zYtlb7CzG67xZ/ZixQQMy7x/ZWiQBM7qNId4Tnq8Vt+h40 QEBPKzWr34XTKuXey6Zpuqib1NRWw+DyriGYm2IH6xGD1qWrJssslkh/WDiliNDlST 6BR+2zxIcYF5g== From: Saeed Mahameed To: "David S. 
Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Erez Shitrit , Moshe Shemesh , Vlad Buslov Subject: [net-next 11/15] net/mlx5: Bridge, Enable mcast in smfs steering mode Date: Tue, 19 Sep 2023 23:35:48 -0700 Message-ID: <20230920063552.296978-12-saeed@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230920063552.296978-1-saeed@kernel.org> References: <20230920063552.296978-1-saeed@kernel.org> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Erez Shitrit In order to have mcast offloads the driver needs the following: It should know if that mcast comes from wire port, in addition the flow should not be marked as any specific source, that way it will give the flexibility for the driver not to be depended on the way iterator implemented in the FW. Signed-off-by: Erez Shitrit Reviewed-by: Moshe Shemesh Reviewed-by: Vlad Buslov Signed-off-by: Saeed Mahameed --- .../ethernet/mellanox/mlx5/core/esw/bridge_mcast.c | 11 ++--------- include/linux/mlx5/fs.h | 1 + 2 files changed, 3 insertions(+), 9 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c index 7a01714b3780..a7ed87e9d842 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c @@ -78,6 +78,8 @@ mlx5_esw_bridge_mdb_flow_create(u16 esw_owner_vhca_id, struct mlx5_esw_bridge_md xa_for_each(&entry->ports, idx, port) { dests[i].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE; dests[i].ft = port->mcast.ft; + if (port->vport_num == MLX5_VPORT_UPLINK) + dests[i].ft->flags |= MLX5_FLOW_TABLE_UPLINK_VPORT; i++; } @@ -585,10 +587,6 @@ mlx5_esw_bridge_mcast_vlan_flow_create(u16 vlan_proto, struct mlx5_esw_bridge_po if (!rule_spec) return ERR_PTR(-ENOMEM); - if (MLX5_CAP_ESW_FLOWTABLE(bridge->br_offloads->esw->dev, flow_source) && - port->vport_num == MLX5_VPORT_UPLINK) - rule_spec->flow_context.flow_source = - MLX5_FLOW_CONTEXT_FLOW_SOURCE_LOCAL_VPORT; rule_spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS; flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT; @@ -660,11 +658,6 @@ mlx5_esw_bridge_mcast_fwd_flow_create(struct mlx5_esw_bridge_port *port) if (!rule_spec) return ERR_PTR(-ENOMEM); - if (MLX5_CAP_ESW_FLOWTABLE(bridge->br_offloads->esw->dev, flow_source) && - port->vport_num == MLX5_VPORT_UPLINK) - rule_spec->flow_context.flow_source = - MLX5_FLOW_CONTEXT_FLOW_SOURCE_LOCAL_VPORT; - if (MLX5_CAP_ESW(bridge->br_offloads->esw->dev, merged_eswitch)) { dest.vport.flags = MLX5_FLOW_DEST_VPORT_VHCA_ID; dest.vport.vhca_id = port->esw_owner_vhca_id; diff --git a/include/linux/mlx5/fs.h b/include/linux/mlx5/fs.h index 1e00c2436377..6f7725238abc 100644 --- a/include/linux/mlx5/fs.h +++ b/include/linux/mlx5/fs.h @@ -67,6 +67,7 @@ enum { MLX5_FLOW_TABLE_TERMINATION = BIT(2), MLX5_FLOW_TABLE_UNMANAGED = BIT(3), MLX5_FLOW_TABLE_OTHER_VPORT = BIT(4), + MLX5_FLOW_TABLE_UPLINK_VPORT = BIT(5), }; #define LEFTOVERS_RULE_NUM 2 From patchwork Wed Sep 20 06:35:49 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13392133 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher 
ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 59B2AC2CE for ; Wed, 20 Sep 2023 06:36:10 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1A88DC433CA; Wed, 20 Sep 2023 06:36:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695191770; bh=xbIDfbwrx5ffOD4J8e33bqUjoA7tA9ioZ92tFFReuIg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=q/swOmPA00jRYaHe4Lxe+dMHyJ4mbMfs+ZHPBiXvG6NqrwWSPPa4ekF6ZgFm6xS/L oxAva8n2NPB+2d22BPmc0LyQ1piQMSpvJgs08mgY9wEdaU7odYRIm2FPWLdFu+f1em zZgz0hytdswnZM6K0JMKVvHaVrbjqqNIR14MaLbZ4i3W/3sHnQyX7MCAKu8P0bkoB8 vppz2PvqlJ2SMagiRPLeUr46m+/fAHpGrOjwLpm5TykeR7bEtEB0M6CTzTkMbN0yq/ PeUac9AYxdbsNOFjXVVTI+tnRLHYdkrPo7exAOhFnJIkJHGyXg8yioGoRmwFOW+8b8 ini4hxQ3pZe0A== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Erez Shitrit , Moshe Shemesh , Yevgeny Kliteynik Subject: [net-next 12/15] net/mlx5: DR, Add check for multi destination FTE Date: Tue, 19 Sep 2023 23:35:49 -0700 Message-ID: <20230920063552.296978-13-saeed@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230920063552.296978-1-saeed@kernel.org> References: <20230920063552.296978-1-saeed@kernel.org> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Erez Shitrit The driver should not allow rule that forward to more than one FT in TX flow unless there is a specific support from the FW. Signed-off-by: Erez Shitrit Reviewed-by: Moshe Shemesh Reviewed-by: Yevgeny Kliteynik Signed-off-by: Saeed Mahameed --- .../mellanox/mlx5/core/steering/dr_action.c | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c index 5b83da08692d..7179542e9164 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c @@ -55,6 +55,12 @@ static const char *dr_action_id_to_str(enum mlx5dr_action_type action_id) return action_type_to_str[action_id]; } +static bool mlx5dr_action_supp_fwd_fdb_multi_ft(struct mlx5_core_dev *dev) +{ + return (MLX5_CAP_ESW_FLOWTABLE(dev, fdb_multi_path_any_table_limit_regc) || + MLX5_CAP_ESW_FLOWTABLE(dev, fdb_multi_path_any_table)); +} + static const enum dr_action_valid_state next_action_state[DR_ACTION_DOMAIN_MAX][DR_ACTION_STATE_MAX][DR_ACTION_TYP_MAX] = { [DR_ACTION_DOMAIN_NIC_INGRESS] = { @@ -1167,6 +1173,7 @@ mlx5dr_action_create_mult_dest_tbl(struct mlx5dr_domain *dmn, struct mlx5dr_action **ref_actions; struct mlx5dr_action *action; bool reformat_req = false; + u16 num_dst_ft = 0; u32 num_of_ref = 0; u32 ref_act_cnt; int ret; @@ -1210,6 +1217,12 @@ mlx5dr_action_create_mult_dest_tbl(struct mlx5dr_domain *dmn, break; case DR_ACTION_TYP_FT: + if (num_dst_ft && + !mlx5dr_action_supp_fwd_fdb_multi_ft(dmn->mdev)) { + mlx5dr_dbg(dmn, "multiple FT destinations not supported\n"); + goto free_ref_actions; + } + num_dst_ft++; hw_dests[i].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE; if (dest_action->dest_tbl->is_fw_tbl) hw_dests[i].ft_id = dest_action->dest_tbl->fw_tbl.id; From patchwork Wed Sep 20 06:35:50 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed 
X-Patchwork-Id: 13392134 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4FE6EC8C0 for ; Wed, 20 Sep 2023 06:36:11 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 11D40C433C8; Wed, 20 Sep 2023 06:36:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695191771; bh=CzpjofPmz2uCktoqCJll9eKy6rDJBBUqcabXJbReep0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=N6DrhVu1cnWzgspbVpAKVDvj7VjN3isFhIeEe5luyYqGZ91dHKp4KnbDH4mtEsWtB mAEdMeeuSFCsy6Sh1n6yu3NtVHihY7J4p+YneSQEUqhKGvOSToMgNz6rP50sscUKfv pK/8WH+Q0s0YpZzqOSVlEesvOB1mnuJfkyiDH+Pyrj4/3Hh7SGV4TOkVYkVtwP/GVC OhdEw55zieMnDL07FIpSQmBki7wua2UZAgMB2pnBuD9nH2YBC3eHwmzWly+AOZ9STi 6QXWzdlagtz4twirpPtobaixnH/RItVDbONUazWuPxaAvkIbR8wQ1NGtkrk0grG7jw zH5izhAvyfl3w== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Erez Shitrit , Moshe Shemesh , Yevgeny Kliteynik Subject: [net-next 13/15] net/mlx5: DR, Handle multi destination action in the right order Date: Tue, 19 Sep 2023 23:35:50 -0700 Message-ID: <20230920063552.296978-14-saeed@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230920063552.296978-1-saeed@kernel.org> References: <20230920063552.296978-1-saeed@kernel.org> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Erez Shitrit Whenever we have several destinations of flow-table type, the one that goes to the wire needs to be the last one. We rely on the FW iterator: the FW handles the first destinations in the RX path and the last destination in the TX path, and a packet can only be directed to the wire from the TX path, not from the RX path. The code now checks whether an FT destination is directed to the wire and, if so, puts it as the last destination.
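To make the ordering rule concrete, here is a minimal, self-contained sketch of the reordering step (the struct and helper below are simplified stand-ins chosen for illustration, not the driver's types; the actual change operates on the hw_dests[] array in mlx5dr_action_create_mult_dest_tbl(), as the diff below shows):

#include <stdbool.h>

/* Keep the wire-bound flow-table destination last: the FW iterator
 * handles the last destination on the TX side and all earlier ones on
 * the RX side, and only TX can deliver the packet to the wire.
 */
struct dest {
	unsigned int ft_id;
	bool is_wire_ft;	/* set when the FT targets the uplink vport */
};

static void put_wire_ft_last(struct dest *dests, unsigned int n)
{
	unsigned int i, wire = n;
	struct dest tmp;

	for (i = 0; i < n; i++)
		if (dests[i].is_wire_ft)
			wire = i;	/* remember the last wire-bound FT */

	if (wire != n && wire != n - 1) {	/* swap it into the last slot */
		tmp = dests[wire];
		dests[wire] = dests[n - 1];
		dests[n - 1] = tmp;
	}
}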
Signed-off-by: Erez Shitrit Reviewed-by: Moshe Shemesh Reviewed-by: Yevgeny Kliteynik Signed-off-by: Saeed Mahameed --- .../mellanox/mlx5/core/steering/dr_action.c | 22 +++++++++++++++++-- .../mellanox/mlx5/core/steering/dr_types.h | 1 + .../mellanox/mlx5/core/steering/fs_dr.c | 9 +++++++- 3 files changed, 29 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c index 7179542e9164..6ea88a581804 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c @@ -1169,13 +1169,16 @@ mlx5dr_action_create_mult_dest_tbl(struct mlx5dr_domain *dmn, bool ignore_flow_level, u32 flow_source) { + struct mlx5dr_cmd_flow_destination_hw_info tmp_hw_dest; struct mlx5dr_cmd_flow_destination_hw_info *hw_dests; struct mlx5dr_action **ref_actions; struct mlx5dr_action *action; bool reformat_req = false; + bool is_ft_wire = false; u16 num_dst_ft = 0; u32 num_of_ref = 0; u32 ref_act_cnt; + u16 last_dest; int ret; int i; @@ -1224,10 +1227,15 @@ mlx5dr_action_create_mult_dest_tbl(struct mlx5dr_domain *dmn, } num_dst_ft++; hw_dests[i].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE; - if (dest_action->dest_tbl->is_fw_tbl) + if (dest_action->dest_tbl->is_fw_tbl) { hw_dests[i].ft_id = dest_action->dest_tbl->fw_tbl.id; - else + } else { hw_dests[i].ft_id = dest_action->dest_tbl->tbl->table_id; + if (dest_action->dest_tbl->is_wire_ft) { + is_ft_wire = true; + last_dest = i; + } + } break; default: @@ -1236,6 +1244,16 @@ mlx5dr_action_create_mult_dest_tbl(struct mlx5dr_domain *dmn, } } + /* In multidest, the FW does the iterator in the RX except of the last + * one that done in the TX. + * So, if one of the ft target is wire, put it at the end of the dest list. + */ + if (is_ft_wire && num_dst_ft > 1) { + tmp_hw_dest = hw_dests[last_dest]; + hw_dests[last_dest] = hw_dests[num_of_dests - 1]; + hw_dests[num_of_dests - 1] = tmp_hw_dest; + } + action = dr_action_create_generic(DR_ACTION_TYP_FT); if (!action) goto free_ref_actions; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h index 6c59de3e28f6..55dc7383477c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h @@ -1064,6 +1064,7 @@ struct mlx5dr_action_sampler { struct mlx5dr_action_dest_tbl { u8 is_fw_tbl:1; + u8 is_wire_ft:1; union { struct mlx5dr_table *tbl; struct { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c index 14f6df88b1f9..50c2554c9ccf 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c @@ -209,10 +209,17 @@ static struct mlx5dr_action *create_ft_action(struct mlx5dr_domain *domain, struct mlx5_flow_rule *dst) { struct mlx5_flow_table *dest_ft = dst->dest_attr.ft; + struct mlx5dr_action *tbl_action; if (mlx5dr_is_fw_table(dest_ft)) return mlx5dr_action_create_dest_flow_fw_table(domain, dest_ft); - return mlx5dr_action_create_dest_table(dest_ft->fs_dr_table.dr_table); + + tbl_action = mlx5dr_action_create_dest_table(dest_ft->fs_dr_table.dr_table); + if (tbl_action) + tbl_action->dest_tbl->is_wire_ft = + dest_ft->flags & MLX5_FLOW_TABLE_UPLINK_VPORT ? 
1 : 0; + + return tbl_action; } static struct mlx5dr_action *create_range_action(struct mlx5dr_domain *domain, From patchwork Wed Sep 20 06:35:51 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13392135 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4748EC8DD for ; Wed, 20 Sep 2023 06:36:12 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 05B02C43397; Wed, 20 Sep 2023 06:36:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695191772; bh=1MpMA1ZY2Tu59wjimrOZ0yw6YgENDvqiPLirA02/6oE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=e+8QoPuK2W+mpZ2YwUfkAbN1osPgZrr6RfisUu9lKgzg6ph2UeDmyMpJ65Ajw2Ehf BWX6j5RIiYI5WXV/UFqVp1jOg4MIV2G+QpQ+YA64YDAbY6BjUe9faT0tgfxXeV3Awu 6utU05E2SCSxprDty/v0aD6D29mpMpbMcp7w7D22IFYJLtJnMQSejWeIdyzVsldh1S JmevMY0E4V7aKMW7YnX45opYA5p4HfYbjsS86JzY0XX6rlBDOwtJSCOsOMzFpXlpe7 lwR5+qVp1iEuu/o0HkwYBlY/jy3yc4eJ4ZUtxmq/73d7CLH7YoGEuJkXe6mbpAcSz7 dyT6n5AWTvr9Q== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Moshe Shemesh , Shay Drory Subject: [net-next 14/15] net/mlx5: Add a health error syndrome for pci data poisoned Date: Tue, 19 Sep 2023 23:35:51 -0700 Message-ID: <20230920063552.296978-15-saeed@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230920063552.296978-1-saeed@kernel.org> References: <20230920063552.296978-1-saeed@kernel.org> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Moshe Shemesh Add new health error syndrome to indicate that pci data poisoned error has been received while fetching device ICM data. 
Signed-off-by: Moshe Shemesh Reviewed-by: Shay Drory Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/health.c | 2 ++ include/linux/mlx5/mlx5_ifc.h | 1 + 2 files changed, 3 insertions(+) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c index 2fb2598b775e..1c220048ae9a 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/health.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c @@ -365,6 +365,8 @@ static const char *hsynd_str(u8 synd) return "FFSER error"; case MLX5_INITIAL_SEG_HEALTH_SYNDROME_HIGH_TEMP_ERR: return "High temperature"; + case MLX5_INITIAL_SEG_HEALTH_SYNDROME_ICM_PCI_POISONED_ERR: + return "ICM fetch PCI data poisoned error"; default: return "unrecognized error"; } diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h index dd8421d021cf..b23d8ff286a1 100644 --- a/include/linux/mlx5/mlx5_ifc.h +++ b/include/linux/mlx5/mlx5_ifc.h @@ -10574,6 +10574,7 @@ enum { MLX5_INITIAL_SEG_HEALTH_SYNDROME_EQ_INV = 0xe, MLX5_INITIAL_SEG_HEALTH_SYNDROME_FFSER_ERR = 0xf, MLX5_INITIAL_SEG_HEALTH_SYNDROME_HIGH_TEMP_ERR = 0x10, + MLX5_INITIAL_SEG_HEALTH_SYNDROME_ICM_PCI_POISONED_ERR = 0x12, }; struct mlx5_ifc_initial_seg_bits { From patchwork Wed Sep 20 06:35:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13392136 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4BCA0D312 for ; Wed, 20 Sep 2023 06:36:13 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 09A3BC433B8; Wed, 20 Sep 2023 06:36:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1695191773; bh=NUzQPZmUn05uhEhKit7421WnEOnmH6bNUKX3oHb+GSo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=HLQTu96y0RpqbzGEnwQbNBipowrcBpGbRMP2cHQJJkZ3YDUH7jumpboJq/VA28djU A9xW4l65mwlI1IM2xelv9Y8+MG/R1hKrzWad4f167GBWBLSQHqPAuUWHYtKYXUTaKm PuK3P7kCigbzNM6LPyl6wg6ygshjJRsmQe3bWyjxLPLOPV3e8jtyU2QnuUjD3u+uO+ NRgUE8wmkJTQ4ryItl9GnopZRoRyKX7Tnta44nWu3iLzaSxdAWBfHcOiNCmn5Jtrzk GTxqsqx7Fwf7QtklvpK1IhdlhoUldWkkWqWgHHW+4RNvu4phxQUqwDmz1p0lZBvV71 mTm9U0CsH7beA== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Shay Drory , Mark Bloch Subject: [net-next 15/15] net/mlx5: Enable 4 ports multiport E-switch Date: Tue, 19 Sep 2023 23:35:52 -0700 Message-ID: <20230920063552.296978-16-saeed@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230920063552.296978-1-saeed@kernel.org> References: <20230920063552.296978-1-saeed@kernel.org> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Shay Drory enable_mpesw() assumed that only 2 ports are available. Fix this by removing that assumption and looping over the existing lag ports, so that multi-port E-switch is enabled on cards with more than 2 ports.
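As a rough illustration of the shape of this change (the types and helper below are simplified stand-ins, not the driver's structures), both the per-port work and its error unwind become loops over the port count instead of touching two hardcoded devices, as the diff below shows:

#include <stdio.h>

struct lag { int nports; };

/* Stand-in for the per-port operation (mlx5_eswitch_reload_reps() in the driver). */
static int reload_reps(struct lag *ldev, int port)
{
	printf("reload reps on port %d of %d\n", port, ldev->nports);
	return 0;
}

static int enable_ports(struct lag *ldev)
{
	int err, i;

	for (i = 0; i < ldev->nports; i++) {
		err = reload_reps(ldev, i);
		if (err)
			goto err_unwind;
	}
	return 0;

err_unwind:
	/* The unwind also walks every port rather than naming dev0 and dev1. */
	for (i = 0; i < ldev->nports; i++)
		reload_reps(ldev, i);
	return err;
}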
Signed-off-by: Shay Drory Reviewed-by: Mark Bloch Signed-off-by: Saeed Mahameed --- .../ethernet/mellanox/mlx5/core/lag/mpesw.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c index 4bf15391525c..0857eebf4f07 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c @@ -65,12 +65,12 @@ static int mlx5_mpesw_metadata_set(struct mlx5_lag *ldev) return err; } -#define MLX5_LAG_MPESW_OFFLOADS_SUPPORTED_PORTS 2 +#define MLX5_LAG_MPESW_OFFLOADS_SUPPORTED_PORTS 4 static int enable_mpesw(struct mlx5_lag *ldev) { struct mlx5_core_dev *dev0 = ldev->pf[MLX5_LAG_P1].dev; - struct mlx5_core_dev *dev1 = ldev->pf[MLX5_LAG_P2].dev; int err; + int i; if (ldev->mode != MLX5_LAG_MODE_NONE) return -EINVAL; @@ -98,11 +98,11 @@ static int enable_mpesw(struct mlx5_lag *ldev) dev0->priv.flags &= ~MLX5_PRIV_FLAGS_DISABLE_IB_ADEV; mlx5_rescan_drivers_locked(dev0); - err = mlx5_eswitch_reload_reps(dev0->priv.eswitch); - if (!err) - err = mlx5_eswitch_reload_reps(dev1->priv.eswitch); - if (err) - goto err_rescan_drivers; + for (i = 0; i < ldev->ports; i++) { + err = mlx5_eswitch_reload_reps(ldev->pf[i].dev->priv.eswitch); + if (err) + goto err_rescan_drivers; + } return 0; @@ -112,8 +112,8 @@ static int enable_mpesw(struct mlx5_lag *ldev) mlx5_deactivate_lag(ldev); err_add_devices: mlx5_lag_add_devices(ldev); - mlx5_eswitch_reload_reps(dev0->priv.eswitch); - mlx5_eswitch_reload_reps(dev1->priv.eswitch); + for (i = 0; i < ldev->ports; i++) + mlx5_eswitch_reload_reps(ldev->pf[i].dev->priv.eswitch); mlx5_mpesw_metadata_cleanup(ldev); return err; }