From patchwork Tue Nov 28 12:29:45 2023
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13471091
From: Leon Romanovsky
To: Jason Gunthorpe
Cc: Shun Hao, Eric Dumazet, Jakub Kicinski, linux-rdma@vger.kernel.org,
    netdev@vger.kernel.org, Paolo Abeni, Saeed Mahameed
Subject: [PATCH mlx5-next 1/5] net/mlx5: Introduce indirect-sw-encap icm properties
Date: Tue, 28 Nov 2023 14:29:45 +0200
Message-ID: <79470c08ce852496b03d777749074efd937d5bf6.1701172481.git.leon@kernel.org>

From: Shun Hao

Add new fields to the device memory capabilities, in order to support
creation of the new SW encap ICM memory type.
Signed-off-by: Shun Hao
Signed-off-by: Leon Romanovsky
---
 include/linux/mlx5/mlx5_ifc.h | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 6f3631425f38..02b25dc36143 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -1193,7 +1193,8 @@ struct mlx5_ifc_device_mem_cap_bits {
 	u8	log_sw_icm_alloc_granularity[0x6];
 	u8	log_steering_sw_icm_size[0x8];
 
-	u8	reserved_at_120[0x18];
+	u8	log_indirect_encap_sw_icm_size[0x8];
+	u8	reserved_at_128[0x10];
 	u8	log_header_modify_pattern_sw_icm_size[0x8];
 
 	u8	header_modify_sw_icm_start_address[0x40];
@@ -1204,7 +1205,11 @@ struct mlx5_ifc_device_mem_cap_bits {
 
 	u8	memic_operations[0x20];
 
-	u8	reserved_at_220[0x5e0];
+	u8	reserved_at_220[0x20];
+
+	u8	indirect_encap_sw_icm_start_address[0x40];
+
+	u8	reserved_at_280[0x580];
 };
 
 struct mlx5_ifc_device_event_cap_bits {
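For readers following the series: the two new capability fields are read with
the standard MLX5_CAP_DEV_MEM()/MLX5_CAP64_DEV_MEM() accessors, as the next
patch does. A minimal sketch (the helper name is illustrative and not part of
the series):

	/* Illustrative helper (not in the series): does the device expose
	 * the SW-encap ICM area described by the new capability fields? */
	static bool mlx5_has_indirect_encap_icm(struct mlx5_core_dev *dev)
	{
		return MLX5_CAP_DEV_MEM(dev, log_indirect_encap_sw_icm_size) &&
		       MLX5_CAP64_DEV_MEM(dev, indirect_encap_sw_icm_start_address);
	}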
From patchwork Tue Nov 28 12:29:46 2023
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13471096
X-Patchwork-Delegate: kuba@kernel.org
From: Leon Romanovsky
To: Jason Gunthorpe
Cc: Shun Hao, Eric Dumazet, Jakub Kicinski, linux-rdma@vger.kernel.org,
    netdev@vger.kernel.org, Paolo Abeni, Saeed Mahameed
Subject: [PATCH mlx5-next 2/5] net/mlx5: Manage ICM type of SW encap
Date: Tue, 28 Nov 2023 14:29:46 +0200
Message-ID: <37dc4fd78dfa3374ff53aa602f038a2ec76eb069.1701172481.git.leon@kernel.org>

From: Shun Hao

Support allocating and deallocating the new SW encap ICM type of memory.
The new ICM type is used for encap contexts that are managed by SW
instead of FW, which increases the maximum number of encap contexts and
speeds up their allocation.

Signed-off-by: Shun Hao
Signed-off-by: Leon Romanovsky
---
 .../net/ethernet/mellanox/mlx5/core/lib/dm.c | 38 ++++++++++++++++++-
 1 file changed, 37 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/dm.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/dm.c
index 9482e51ac82a..7c5516b0a844 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/dm.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/dm.c
@@ -13,11 +13,13 @@ struct mlx5_dm {
 	unsigned long *steering_sw_icm_alloc_blocks;
 	unsigned long *header_modify_sw_icm_alloc_blocks;
 	unsigned long *header_modify_pattern_sw_icm_alloc_blocks;
+	unsigned long *header_encap_sw_icm_alloc_blocks;
 };
 
 struct mlx5_dm *mlx5_dm_create(struct mlx5_core_dev *dev)
 {
 	u64 header_modify_pattern_icm_blocks = 0;
+	u64 header_sw_encap_icm_blocks = 0;
 	u64 header_modify_icm_blocks = 0;
 	u64 steering_icm_blocks = 0;
 	struct mlx5_dm *dm;
@@ -54,6 +56,17 @@ struct mlx5_dm *mlx5_dm_create(struct mlx5_core_dev *dev)
 		goto err_modify_hdr;
 	}
 
+	if (MLX5_CAP_DEV_MEM(dev, log_indirect_encap_sw_icm_size)) {
+		header_sw_encap_icm_blocks =
+			BIT(MLX5_CAP_DEV_MEM(dev, log_indirect_encap_sw_icm_size) -
+			    MLX5_LOG_SW_ICM_BLOCK_SIZE(dev));
+
+		dm->header_encap_sw_icm_alloc_blocks =
+			bitmap_zalloc(header_sw_encap_icm_blocks, GFP_KERNEL);
+		if (!dm->header_encap_sw_icm_alloc_blocks)
+			goto err_pattern;
+	}
+
 	support_v2 = MLX5_CAP_FLOWTABLE_NIC_RX(dev, sw_owner_v2) &&
 		     MLX5_CAP_FLOWTABLE_NIC_TX(dev, sw_owner_v2) &&
 		     MLX5_CAP64_DEV_MEM(dev, header_modify_pattern_sw_icm_start_address);
@@ -66,11 +79,14 @@ struct mlx5_dm *mlx5_dm_create(struct mlx5_core_dev *dev)
 		dm->header_modify_pattern_sw_icm_alloc_blocks =
 			bitmap_zalloc(header_modify_pattern_icm_blocks, GFP_KERNEL);
 		if (!dm->header_modify_pattern_sw_icm_alloc_blocks)
-			goto err_pattern;
+			goto err_sw_encap;
 	}
 
 	return dm;
 
+err_sw_encap:
+	bitmap_free(dm->header_encap_sw_icm_alloc_blocks);
+
 err_pattern:
 	bitmap_free(dm->header_modify_sw_icm_alloc_blocks);
 
@@ -105,6 +121,14 @@ void mlx5_dm_cleanup(struct mlx5_core_dev *dev)
 		bitmap_free(dm->header_modify_sw_icm_alloc_blocks);
 	}
 
+	if (dm->header_encap_sw_icm_alloc_blocks) {
+		WARN_ON(!bitmap_empty(dm->header_encap_sw_icm_alloc_blocks,
+				      BIT(MLX5_CAP_DEV_MEM(dev,
+							   log_indirect_encap_sw_icm_size) -
+					  MLX5_LOG_SW_ICM_BLOCK_SIZE(dev))));
+		bitmap_free(dm->header_encap_sw_icm_alloc_blocks);
+	}
+
 	if (dm->header_modify_pattern_sw_icm_alloc_blocks) {
 		WARN_ON(!bitmap_empty(dm->header_modify_pattern_sw_icm_alloc_blocks,
 				      BIT(MLX5_CAP_DEV_MEM(dev,
@@ -164,6 +188,13 @@ int mlx5_dm_sw_icm_alloc(struct mlx5_core_dev *dev, enum mlx5_sw_icm_type type,
 						   log_header_modify_pattern_sw_icm_size);
 		block_map = dm->header_modify_pattern_sw_icm_alloc_blocks;
 		break;
+	case MLX5_SW_ICM_TYPE_SW_ENCAP:
+		icm_start_addr = MLX5_CAP64_DEV_MEM(dev,
+						    indirect_encap_sw_icm_start_address);
+		log_icm_size = MLX5_CAP_DEV_MEM(dev,
+						log_indirect_encap_sw_icm_size);
+		block_map = dm->header_encap_sw_icm_alloc_blocks;
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -242,6 +273,11 @@ int mlx5_dm_sw_icm_dealloc(struct mlx5_core_dev *dev, enum mlx5_sw_icm_type type,
 						     header_modify_pattern_sw_icm_start_address);
 		block_map = dm->header_modify_pattern_sw_icm_alloc_blocks;
 		break;
+	case MLX5_SW_ICM_TYPE_SW_ENCAP:
+		icm_start_addr = MLX5_CAP64_DEV_MEM(dev,
+						    indirect_encap_sw_icm_start_address);
+		block_map = dm->header_encap_sw_icm_alloc_blocks;
+		break;
 	default:
 		return -EINVAL;
 	}
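With this patch applied, a core-driver consumer can allocate and free SW encap
ICM through the existing helpers. A rough sketch follows; the parameter list
is assumed to follow the mlx5_dm_sw_icm_alloc()/mlx5_dm_sw_icm_dealloc()
prototypes declared in include/linux/mlx5/driver.h (length, log alignment,
uid, address, object id), so treat the exact signature as an assumption:

	/* Sketch (not part of the patch): grab one SW ICM block of the new
	 * type and release it again. */
	phys_addr_t addr;
	u32 obj_id;
	int err;

	err = mlx5_dm_sw_icm_alloc(dev, MLX5_SW_ICM_TYPE_SW_ENCAP,
				   BIT(MLX5_LOG_SW_ICM_BLOCK_SIZE(dev)),
				   0, 0, &addr, &obj_id);
	if (!err)
		mlx5_dm_sw_icm_dealloc(dev, MLX5_SW_ICM_TYPE_SW_ENCAP,
				       BIT(MLX5_LOG_SW_ICM_BLOCK_SIZE(dev)),
				       0, addr, obj_id);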
From patchwork Tue Nov 28 12:29:47 2023
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13471093
From: Leon Romanovsky
To: Jason Gunthorpe
Cc: Shun Hao, Eric Dumazet, Jakub Kicinski, linux-rdma@vger.kernel.org,
    netdev@vger.kernel.org, Paolo Abeni, Saeed Mahameed
Subject: [PATCH mlx5-next 3/5] RDMA/mlx5: Support handling of SW encap ICM area
Date: Tue, 28 Nov 2023 14:29:47 +0200

From: Shun Hao

Add a new UAPI type for this ICM area. Users can now allocate and
deallocate SW encap ICM memory, which stores encap header data managed
by SW.
Signed-off-by: Shun Hao
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/dm.c           | 5 +++++
 drivers/infiniband/hw/mlx5/mr.c           | 1 +
 include/linux/mlx5/driver.h               | 1 +
 include/uapi/rdma/mlx5_user_ioctl_verbs.h | 1 +
 4 files changed, 8 insertions(+)

diff --git a/drivers/infiniband/hw/mlx5/dm.c b/drivers/infiniband/hw/mlx5/dm.c
index 3669c90b2dad..b4c97fb62abf 100644
--- a/drivers/infiniband/hw/mlx5/dm.c
+++ b/drivers/infiniband/hw/mlx5/dm.c
@@ -341,6 +341,8 @@ static enum mlx5_sw_icm_type get_icm_type(int uapi_type)
 		return MLX5_SW_ICM_TYPE_HEADER_MODIFY;
 	case MLX5_IB_UAPI_DM_TYPE_HEADER_MODIFY_PATTERN_SW_ICM:
 		return MLX5_SW_ICM_TYPE_HEADER_MODIFY_PATTERN;
+	case MLX5_IB_UAPI_DM_TYPE_ENCAP_SW_ICM:
+		return MLX5_SW_ICM_TYPE_SW_ENCAP;
 	case MLX5_IB_UAPI_DM_TYPE_STEERING_SW_ICM:
 	default:
 		return MLX5_SW_ICM_TYPE_STEERING;
@@ -364,6 +366,7 @@ static struct ib_dm *handle_alloc_dm_sw_icm(struct ib_ucontext *ctx,
 	switch (type) {
 	case MLX5_IB_UAPI_DM_TYPE_STEERING_SW_ICM:
 	case MLX5_IB_UAPI_DM_TYPE_HEADER_MODIFY_SW_ICM:
+	case MLX5_IB_UAPI_DM_TYPE_ENCAP_SW_ICM:
 		if (!(MLX5_CAP_FLOWTABLE_NIC_RX(dev, sw_owner) ||
 		      MLX5_CAP_FLOWTABLE_NIC_TX(dev, sw_owner) ||
 		      MLX5_CAP_FLOWTABLE_NIC_RX(dev, sw_owner_v2) ||
@@ -438,6 +441,7 @@ struct ib_dm *mlx5_ib_alloc_dm(struct ib_device *ibdev,
 	case MLX5_IB_UAPI_DM_TYPE_STEERING_SW_ICM:
 	case MLX5_IB_UAPI_DM_TYPE_HEADER_MODIFY_SW_ICM:
 	case MLX5_IB_UAPI_DM_TYPE_HEADER_MODIFY_PATTERN_SW_ICM:
+	case MLX5_IB_UAPI_DM_TYPE_ENCAP_SW_ICM:
 		return handle_alloc_dm_sw_icm(context, attr, attrs, type);
 	default:
 		return ERR_PTR(-EOPNOTSUPP);
@@ -491,6 +495,7 @@ static int mlx5_ib_dealloc_dm(struct ib_dm *ibdm,
 	case MLX5_IB_UAPI_DM_TYPE_STEERING_SW_ICM:
 	case MLX5_IB_UAPI_DM_TYPE_HEADER_MODIFY_SW_ICM:
 	case MLX5_IB_UAPI_DM_TYPE_HEADER_MODIFY_PATTERN_SW_ICM:
+	case MLX5_IB_UAPI_DM_TYPE_ENCAP_SW_ICM:
 		return mlx5_dm_icm_dealloc(ctx, to_icm(ibdm));
 	default:
 		return -EOPNOTSUPP;
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 7b7891116b89..12bca6ca4760 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -1349,6 +1349,7 @@ struct ib_mr *mlx5_ib_reg_dm_mr(struct ib_pd *pd, struct ib_dm *dm,
 	case MLX5_IB_UAPI_DM_TYPE_STEERING_SW_ICM:
 	case MLX5_IB_UAPI_DM_TYPE_HEADER_MODIFY_SW_ICM:
 	case MLX5_IB_UAPI_DM_TYPE_HEADER_MODIFY_PATTERN_SW_ICM:
+	case MLX5_IB_UAPI_DM_TYPE_ENCAP_SW_ICM:
 		if (attr->access_flags & ~MLX5_IB_DM_SW_ICM_ALLOWED_ACCESS)
 			return ERR_PTR(-EINVAL);
diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
index 4c576d19d927..cfcfa06bae18 100644
--- a/include/linux/mlx5/driver.h
+++ b/include/linux/mlx5/driver.h
@@ -688,6 +688,7 @@ enum mlx5_sw_icm_type {
 	MLX5_SW_ICM_TYPE_STEERING,
 	MLX5_SW_ICM_TYPE_HEADER_MODIFY,
 	MLX5_SW_ICM_TYPE_HEADER_MODIFY_PATTERN,
+	MLX5_SW_ICM_TYPE_SW_ENCAP,
 };
 
 #define MLX5_MAX_RESERVED_GIDS 8
diff --git a/include/uapi/rdma/mlx5_user_ioctl_verbs.h b/include/uapi/rdma/mlx5_user_ioctl_verbs.h
index 7af9e09ea556..3189c7f08d17 100644
--- a/include/uapi/rdma/mlx5_user_ioctl_verbs.h
+++ b/include/uapi/rdma/mlx5_user_ioctl_verbs.h
@@ -64,6 +64,7 @@ enum mlx5_ib_uapi_dm_type {
 	MLX5_IB_UAPI_DM_TYPE_STEERING_SW_ICM,
 	MLX5_IB_UAPI_DM_TYPE_HEADER_MODIFY_SW_ICM,
 	MLX5_IB_UAPI_DM_TYPE_HEADER_MODIFY_PATTERN_SW_ICM,
+	MLX5_IB_UAPI_DM_TYPE_ENCAP_SW_ICM,
 };
 
 enum mlx5_ib_uapi_devx_create_event_channel_flags {
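From userspace, the new UAPI value would eventually be reachable through
rdma-core's mlx5dv_alloc_dm(). The sketch below is hypothetical:
MLX5DV_DM_TYPE_ENCAP_SW_ICM is an assumed provider-side name mirroring the
kernel enum and is not defined by this series.

	/* Hypothetical userspace usage; the MLX5DV_DM_TYPE_ENCAP_SW_ICM
	 * constant is assumed, not part of this series. */
	struct ibv_alloc_dm_attr dm_attr = { .length = 4096 };
	struct mlx5dv_alloc_dm_attr mlx5_attr = {
		.type = MLX5DV_DM_TYPE_ENCAP_SW_ICM,
	};
	struct ibv_dm *dm = mlx5dv_alloc_dm(ctx, &dm_attr, &mlx5_attr);

	if (dm) {
		/* register an MR over it, write encap headers, ... */
		ibv_free_dm(dm);
	}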
From patchwork Tue Nov 28 12:29:48 2023
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13471094
X-Patchwork-Delegate: kuba@kernel.org
From: Leon Romanovsky
To: Jason Gunthorpe
Cc: Mark Bloch, Eric Dumazet, Jakub Kicinski, linux-rdma@vger.kernel.org,
    netdev@vger.kernel.org, Paolo Abeni, Saeed Mahameed, Shun Hao
Subject: [PATCH mlx5-next 4/5] net/mlx5: E-Switch, expose eswitch manager vport
Date: Tue, 28 Nov 2023 14:29:48 +0200
Message-ID: <8bb596d1f6dbd77fb611c33349bbaef08ad10052.1701172481.git.leon@kernel.org>

From: Mark Bloch

Expose the ability to query the eswitch manager vport number. The next
patch will use it to report the correct register c0 value to users.

Signed-off-by: Mark Bloch
Signed-off-by: Leon Romanovsky
---
 drivers/net/ethernet/mellanox/mlx5/core/eswitch.h | 7 -------
 include/linux/mlx5/eswitch.h                      | 8 ++++++++
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 37ab66e7b403..60a9a6cba0b1 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -616,13 +616,6 @@ static inline bool mlx5_esw_allowed(const struct mlx5_eswitch *esw)
 	return esw && MLX5_ESWITCH_MANAGER(esw->dev);
 }
 
-/* The returned number is valid only when the dev is eswitch manager. */
-static inline u16 mlx5_eswitch_manager_vport(struct mlx5_core_dev *dev)
-{
-	return mlx5_core_is_ecpf_esw_manager(dev) ?
-			MLX5_VPORT_ECPF : MLX5_VPORT_PF;
-}
-
 static inline bool
 mlx5_esw_is_manager_vport(const struct mlx5_eswitch *esw, u16 vport_num)
 {
diff --git a/include/linux/mlx5/eswitch.h b/include/linux/mlx5/eswitch.h
index 950d2431a53c..df73a2ccc9af 100644
--- a/include/linux/mlx5/eswitch.h
+++ b/include/linux/mlx5/eswitch.h
@@ -7,6 +7,7 @@
 #define _MLX5_ESWITCH_
 
 #include <linux/mlx5/driver.h>
+#include <linux/mlx5/vport.h>
 #include <net/devlink.h>
 
 #define MLX5_ESWITCH_MANAGER(mdev) MLX5_CAP_GEN(mdev, eswitch_manager)
@@ -210,4 +211,11 @@ static inline bool is_mdev_switchdev_mode(struct mlx5_core_dev *dev)
 	return mlx5_eswitch_mode(dev) == MLX5_ESWITCH_OFFLOADS;
 }
 
+/* The returned number is valid only when the dev is eswitch manager. */
+static inline u16 mlx5_eswitch_manager_vport(struct mlx5_core_dev *dev)
+{
+	return mlx5_core_is_ecpf_esw_manager(dev) ?
+			MLX5_VPORT_ECPF : MLX5_VPORT_PF;
+}
+
 #endif
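A consumer of the now-public helper is expected to look roughly like the
usage in the next patch; a minimal sketch, assuming the existing
MLX5_ESWITCH_MANAGER() guard and a hypothetical consumer function:

	/* Sketch: only an eswitch manager has a meaningful answer here. */
	if (MLX5_ESWITCH_MANAGER(mdev)) {
		u16 vport = mlx5_eswitch_manager_vport(mdev);

		use_manager_vport(vport);	/* hypothetical consumer */
	}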
From patchwork Tue Nov 28 12:29:49 2023
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13471095
From: Leon Romanovsky
To: Jason Gunthorpe
Cc: Mark Bloch, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    linux-rdma@vger.kernel.org, Michael Guralnik, netdev@vger.kernel.org,
    Paolo Abeni, Saeed Mahameed, Shun Hao
Subject: [PATCH mlx5-next 5/5] RDMA/mlx5: Expose register c0 for RDMA device
Date: Tue, 28 Nov 2023 14:29:49 +0200
Message-ID: <3c7e589ae069616414630ab88c1d283d79754496.1701172481.git.leon@kernel.org>

From: Mark Bloch

This patch improves matching of egress traffic sent by the local device.
When applicable, all egress traffic from the local vport is now tagged
with the provided value. This is particularly useful for FDB steering.

The primary use case is transmitting traffic from the hypervisor to a VF.
To achieve this, one opens an SQ on the hypervisor and creates an FDB
rule that matches on the eswitch manager vport and the SQN of that SQ.
The SQN is available from the opened SQ, and the eswitch manager vport
match can be replaced with the register c0 value exposed by this patch.
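For context, the FDB rule described above would match on register c0 roughly
as follows. This is a sketch against the existing mlx5 flow-spec helpers, not
code from this series; spec, esw and vport are assumed to be prepared by the
caller.

	/* Sketch: match reg_c0 (vport metadata) instead of source vport. */
	void *misc2;

	misc2 = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
			     misc_parameters_2);
	MLX5_SET(fte_match_set_misc2, misc2, metadata_reg_c_0,
		 mlx5_eswitch_get_vport_metadata_mask());

	misc2 = MLX5_ADDR_OF(fte_match_param, spec->match_value,
			     misc_parameters_2);
	MLX5_SET(fte_match_set_misc2, misc2, metadata_reg_c_0,
		 mlx5_eswitch_get_vport_metadata_for_match(esw, vport));

	spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS_2;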
Signed-off-by: Mark Bloch
Reviewed-by: Michael Guralnik
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/main.c | 24 ++++++++++++++++++++++++
 include/uapi/rdma/mlx5-abi.h      |  2 ++
 2 files changed, 26 insertions(+)

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 10c4b5e0aab5..5de19bfccbed 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -849,6 +849,17 @@ static int mlx5_query_node_desc(struct mlx5_ib_dev *dev, char *node_desc)
 				    MLX5_REG_NODE_DESC, 0, 0);
 }
 
+static void fill_esw_mgr_reg_c0(struct mlx5_core_dev *mdev,
+				struct mlx5_ib_query_device_resp *resp)
+{
+	struct mlx5_eswitch *esw = mdev->priv.eswitch;
+	u16 vport = mlx5_eswitch_manager_vport(mdev);
+
+	resp->reg_c0.value = mlx5_eswitch_get_vport_metadata_for_match(esw,
+									vport);
+	resp->reg_c0.mask = mlx5_eswitch_get_vport_metadata_mask();
+}
+
 static int mlx5_ib_query_device(struct ib_device *ibdev,
 				struct ib_device_attr *props,
 				struct ib_udata *uhw)
@@ -1240,6 +1251,19 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
 			MLX5_CAP_GEN(mdev, log_max_dci_errored_streams);
 	}
 
+	if (offsetofend(typeof(resp), reserved) <= uhw_outlen)
+		resp.response_length += sizeof(resp.reserved);
+
+	if (offsetofend(typeof(resp), reg_c0) <= uhw_outlen) {
+		struct mlx5_eswitch *esw = mdev->priv.eswitch;
+
+		resp.response_length += sizeof(resp.reg_c0);
+
+		if (mlx5_eswitch_mode(mdev) == MLX5_ESWITCH_OFFLOADS &&
+		    mlx5_eswitch_vport_match_metadata_enabled(esw))
+			fill_esw_mgr_reg_c0(mdev, &resp);
+	}
+
 	if (uhw_outlen) {
 		err = ib_copy_to_udata(uhw, &resp, resp.response_length);
diff --git a/include/uapi/rdma/mlx5-abi.h b/include/uapi/rdma/mlx5-abi.h
index a96b7d2770e1..d4f6a36dffb0 100644
--- a/include/uapi/rdma/mlx5-abi.h
+++ b/include/uapi/rdma/mlx5-abi.h
@@ -37,6 +37,7 @@
 #include <linux/types.h>
 #include <linux/if_ether.h>	/* For ETH_ALEN. */
 #include <rdma/ib_user_ioctl_verbs.h>
+#include <rdma/mlx5_user_ioctl_verbs.h>
 
 enum {
 	MLX5_QP_FLAG_SIGNATURE		= 1 << 0,
@@ -275,6 +276,7 @@ struct mlx5_ib_query_device_resp {
 	__u32	tunnel_offloads_caps; /* enum mlx5_ib_tunnel_offloads */
 	struct	mlx5_ib_dci_streams_caps dci_streams_caps;
 	__u16	reserved;
+	struct	mlx5_ib_uapi_reg reg_c0;
 };
 
 enum mlx5_ib_create_cq_flags {
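On the consuming side, a user of the extended query_device response should
trust reg_c0 only when the kernel reported enough response bytes. A hedged
userspace-style sketch; use_reg_c0() is a hypothetical consumer:

	/* Sketch: read the new field only if the response is long enough
	 * to contain it and the mask is non-zero. */
	struct mlx5_ib_query_device_resp resp = {};

	/* ... extended query_device command fills resp ... */

	if (resp.response_length >=
	    offsetof(struct mlx5_ib_query_device_resp, reg_c0) +
	    sizeof(resp.reg_c0) &&
	    resp.reg_c0.mask)
		use_reg_c0(resp.reg_c0.value, resp.reg_c0.mask);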