From patchwork Mon Apr 10 13:07:50 2023
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13206353
From: Leon Romanovsky
To: Jason Gunthorpe
Cc: Avihai Horon, Eric Dumazet, Jakub Kicinski, linux-rdma@vger.kernel.org,
    Meir Lichtinger, Michael Guralnik, netdev@vger.kernel.org, Paolo Abeni,
    Saeed Mahameed, Shay Drory
Subject: [PATCH mlx5-next 1/4] RDMA/mlx5: Remove pcie_relaxed_ordering_enabled() check for RO write
Date: Mon, 10 Apr 2023 16:07:50 +0300
Message-Id: <7e8f55e31572c1702d69cae015a395d3a824a38a.1681131553.git.leon@kernel.org>

From: Avihai Horon

The pcie_relaxed_ordering_enabled() check was added to avoid a syndrome
when creating an MKey with relaxed ordering (RO) enabled while the
driver's relaxed_ordering_{read,write} HCA capabilities are out of sync
with FW. While this can happen with relaxed_ordering_read, it can't
happen with relaxed_ordering_write: that capability is set if the
device supports RO write, regardless of RO in PCI config space, and
thus can't change during runtime.

Therefore, drop the pcie_relaxed_ordering_enabled() check for
relaxed_ordering_write while keeping it for relaxed_ordering_read.
Doing so also allows the use of RO write in VFs and VMs (where RO in
PCI config space is not reported/emulated properly).

Signed-off-by: Avihai Horon
Reviewed-by: Shay Drory
Signed-off-by: Leon Romanovsky
Reviewed-by: Jacob Keller
---
 drivers/infiniband/hw/mlx5/mr.c                     | 6 +++---
 drivers/net/ethernet/mellanox/mlx5/core/en/params.c | 3 +--
 drivers/net/ethernet/mellanox/mlx5/core/en_common.c | 2 +-
 3 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 347000d30cec..bb8f318bd5a5 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -69,11 +69,11 @@ static void set_mkc_access_pd_addr_fields(void *mkc, int acc, u64 start_addr,
 	MLX5_SET(mkc, mkc, lw, !!(acc & IB_ACCESS_LOCAL_WRITE));
 	MLX5_SET(mkc, mkc, lr, 1);
 
-	if ((acc & IB_ACCESS_RELAXED_ORDERING) &&
-	    pcie_relaxed_ordering_enabled(dev->mdev->pdev)) {
+	if (acc & IB_ACCESS_RELAXED_ORDERING) {
 		if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_write))
 			MLX5_SET(mkc, mkc, relaxed_ordering_write, 1);
-		if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read))
+		if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read) &&
+		    pcie_relaxed_ordering_enabled(dev->mdev->pdev))
 			MLX5_SET(mkc, mkc, relaxed_ordering_read, 1);
 	}
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
index a21bd1179477..d840a59aec88 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
@@ -867,8 +867,7 @@ static void mlx5e_build_rx_cq_param(struct mlx5_core_dev *mdev,
 static u8 rq_end_pad_mode(struct mlx5_core_dev *mdev, struct mlx5e_params *params)
 {
 	bool lro_en = params->packet_merge.type == MLX5E_PACKET_MERGE_LRO;
-	bool ro = pcie_relaxed_ordering_enabled(mdev->pdev) &&
-		  MLX5_CAP_GEN(mdev, relaxed_ordering_write);
+	bool ro = MLX5_CAP_GEN(mdev, relaxed_ordering_write);
 
 	return ro && lro_en ?
 		MLX5_WQ_END_PAD_MODE_NONE : MLX5_WQ_END_PAD_MODE_ALIGN;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
index 4c9a3210600c..993af4c12d90 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
@@ -44,7 +44,7 @@ void mlx5e_mkey_set_relaxed_ordering(struct mlx5_core_dev *mdev, void *mkc)
 	bool ro_read = MLX5_CAP_GEN(mdev, relaxed_ordering_read);
 
 	MLX5_SET(mkc, mkc, relaxed_ordering_read, ro_pci_enable && ro_read);
-	MLX5_SET(mkc, mkc, relaxed_ordering_write, ro_pci_enable && ro_write);
+	MLX5_SET(mkc, mkc, relaxed_ordering_write, ro_write);
 }
 
 int mlx5e_create_mkey(struct mlx5_core_dev *mdev, u32 pdn, u32 *mkey)
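Condensing the three hunks above: after this patch, the decision for the
mkey relaxed-ordering bits can be sketched as the standalone C helper
below. The ro_caps struct and set_mkey_ro() are hypothetical stand-ins
for the driver's MLX5_CAP_GEN() and pcie_relaxed_ordering_enabled()
plumbing, not code from the patch:

#include <stdbool.h>

/* Hypothetical snapshot of the inputs the driver consults. */
struct ro_caps {
	bool cap_ro_write;   /* HCA cap relaxed_ordering_write: static at runtime */
	bool cap_ro_read;    /* HCA cap relaxed_ordering_read */
	bool pci_ro_enabled; /* pcie_relaxed_ordering_enabled(): can change */
};

static void set_mkey_ro(const struct ro_caps *c, bool ro_requested,
			bool *ro_write, bool *ro_read)
{
	/* RO write now depends only on the static HCA capability... */
	*ro_write = ro_requested && c->cap_ro_write;
	/* ...while RO read still requires RO in PCI config space. */
	*ro_read = ro_requested && c->cap_ro_read && c->pci_ro_enabled;
}

The asymmetry is the point of the patch: the write capability is static
for the lifetime of the device, while the read side still tracks a PCI
config space bit that can be toggled at runtime.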
From patchwork Mon Apr 10 13:07:51 2023
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13206356
From: Leon Romanovsky
To: Jason Gunthorpe
Cc: Avihai Horon, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    linux-rdma@vger.kernel.org, Meir Lichtinger, Michael Guralnik,
    netdev@vger.kernel.org, Paolo Abeni, Saeed Mahameed, Shay Drory
Subject: [PATCH rdma-next 2/4] RDMA/mlx5: Check pcie_relaxed_ordering_enabled() in UMR
Date: Mon, 10 Apr 2023 16:07:51 +0300
Message-Id: <8d39eb8317e7bed1a354311a20ae707788fd94ed.1681131553.git.leon@kernel.org>

From: Avihai Horon

The relaxed_ordering_read HCA capability is set if both the device
supports relaxed ordering (RO) read and RO is set in PCI config space.
RO in PCI config space can change during runtime. This changes the
value of the relaxed_ordering_read HCA capability in FW, but the
driver will not see it, since it queries the capabilities only once.
This can lead to the following scenario:

1. RO in PCI config space is enabled.
2. User creates an MKey without RO.
3. RO in PCI config space is disabled. As a result, the
   relaxed_ordering_read HCA capability is turned off in FW but
   remains on in the driver's copy of the capabilities.
4. User requests to reconfigure the MKey with RO via UMR.
5. The driver tries to reconfigure the MKey with RO read although it
   shouldn't (as the relaxed_ordering_read HCA capability is really
   off).

To fix this, check pcie_relaxed_ordering_enabled() before setting RO
read in UMR.

Fixes: 896ec9735336 ("RDMA/mlx5: Set mkey relaxed ordering by UMR with ConnectX-7")
Signed-off-by: Avihai Horon
Reviewed-by: Shay Drory
Signed-off-by: Leon Romanovsky
Reviewed-by: Jacob Keller
---
 drivers/infiniband/hw/mlx5/umr.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/umr.c b/drivers/infiniband/hw/mlx5/umr.c
index 55f4e048d947..c9e176e8ced4 100644
--- a/drivers/infiniband/hw/mlx5/umr.c
+++ b/drivers/infiniband/hw/mlx5/umr.c
@@ -380,6 +380,9 @@ static void mlx5r_umr_set_access_flags(struct mlx5_ib_dev *dev,
 				       struct mlx5_mkey_seg *seg,
 				       unsigned int access_flags)
 {
+	bool ro_read = (access_flags & IB_ACCESS_RELAXED_ORDERING) &&
+		       pcie_relaxed_ordering_enabled(dev->mdev->pdev);
+
 	MLX5_SET(mkc, seg, a, !!(access_flags & IB_ACCESS_REMOTE_ATOMIC));
 	MLX5_SET(mkc, seg, rw, !!(access_flags & IB_ACCESS_REMOTE_WRITE));
 	MLX5_SET(mkc, seg, rr, !!(access_flags & IB_ACCESS_REMOTE_READ));
@@ -387,8 +390,7 @@ static void mlx5r_umr_set_access_flags(struct mlx5_ib_dev *dev,
 	MLX5_SET(mkc, seg, lr, 1);
 	MLX5_SET(mkc, seg, relaxed_ordering_write,
 		 !!(access_flags & IB_ACCESS_RELAXED_ORDERING));
-	MLX5_SET(mkc, seg, relaxed_ordering_read,
-		 !!(access_flags & IB_ACCESS_RELAXED_ORDERING));
+	MLX5_SET(mkc, seg, relaxed_ordering_read, ro_read);
 }
 
 int mlx5r_umr_rereg_pd_access(struct mlx5_ib_mr *mr, struct ib_pd *pd,
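The five-step scenario above reduces to a cached capability snapshot
going stale while the live PCI state moves on. A minimal sketch of the
fixed gate, with a hypothetical helper name (in the patch itself the
check sits inline in mlx5r_umr_set_access_flags()):

#include <stdbool.h>

/* pci_ro_now stands for the live pcie_relaxed_ordering_enabled() result,
 * queried at UMR time instead of trusting the one-time capability query. */
static bool umr_ro_read(bool ro_requested, bool pci_ro_now)
{
	return ro_requested && pci_ro_now;
}

Consulting the PCI state at reconfigure time means the UMR path can no
longer enable RO read while the capability is really off in FW.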
From patchwork Mon Apr 10 13:07:52 2023
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13206354
From: Leon Romanovsky
To: Jason Gunthorpe
Cc: Avihai Horon, Eric Dumazet, Jakub Kicinski, linux-rdma@vger.kernel.org,
    Meir Lichtinger, Michael Guralnik, netdev@vger.kernel.org, Paolo Abeni,
    Saeed Mahameed, Shay Drory
Subject: [PATCH mlx5-next 3/4] net/mlx5: Update relaxed ordering read HCA capabilities
Date: Mon, 10 Apr 2023 16:07:52 +0300

From: Avihai Horon

Rename the existing HCA capability relaxed_ordering_read to
relaxed_ordering_read_pci_enabled. This is in accordance with a recent
PRM change to better describe the capability, as it is set only if both
the device supports relaxed ordering (RO) read and RO is enabled in PCI
config space.

In addition, add a new HCA capability relaxed_ordering_read, which is
set if the device supports RO read, regardless of RO in PCI config
space. This will be used in the following patch to allow RO in VFs and
VMs.

Signed-off-by: Avihai Horon
Reviewed-by: Shay Drory
Signed-off-by: Leon Romanovsky
Reviewed-by: Jacob Keller
---
 drivers/infiniband/hw/mlx5/mr.c                     | 5 +++--
 drivers/infiniband/hw/mlx5/umr.h                    | 2 +-
 drivers/net/ethernet/mellanox/mlx5/core/en_common.c | 2 +-
 include/linux/mlx5/mlx5_ifc.h                       | 5 +++--
 4 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index bb8f318bd5a5..a7f0119cc959 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -72,7 +72,8 @@ static void set_mkc_access_pd_addr_fields(void *mkc, int acc, u64 start_addr,
 	if (acc & IB_ACCESS_RELAXED_ORDERING) {
 		if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_write))
 			MLX5_SET(mkc, mkc, relaxed_ordering_write, 1);
-		if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read) &&
+		if (MLX5_CAP_GEN(dev->mdev,
+				 relaxed_ordering_read_pci_enabled) &&
 		    pcie_relaxed_ordering_enabled(dev->mdev->pdev))
 			MLX5_SET(mkc, mkc, relaxed_ordering_read, 1);
 	}
@@ -793,7 +794,7 @@ static int get_unchangeable_access_flags(struct mlx5_ib_dev *dev,
 		ret |= IB_ACCESS_RELAXED_ORDERING;
 
 	if ((access_flags & IB_ACCESS_RELAXED_ORDERING) &&
-	    MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read) &&
+	    MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read_pci_enabled) &&
 	    !MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read_umr))
 		ret |= IB_ACCESS_RELAXED_ORDERING;
 
diff --git a/drivers/infiniband/hw/mlx5/umr.h b/drivers/infiniband/hw/mlx5/umr.h
index c9d0021381a2..e12ecd7e079c 100644
--- a/drivers/infiniband/hw/mlx5/umr.h
+++ b/drivers/infiniband/hw/mlx5/umr.h
@@ -62,7 +62,7 @@ static inline bool mlx5r_umr_can_reconfig(struct mlx5_ib_dev *dev,
 		return false;
 
 	if ((diffs & IB_ACCESS_RELAXED_ORDERING) &&
-	    MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read) &&
+	    MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read_pci_enabled) &&
 	    !MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read_umr))
 		return false;
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
index 993af4c12d90..3c765a1f91a5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
@@ -41,7 +41,7 @@ void mlx5e_mkey_set_relaxed_ordering(struct mlx5_core_dev *mdev, void *mkc)
 {
 	bool ro_pci_enable = pcie_relaxed_ordering_enabled(mdev->pdev);
 	bool ro_write = MLX5_CAP_GEN(mdev, relaxed_ordering_write);
-	bool ro_read = MLX5_CAP_GEN(mdev, relaxed_ordering_read);
+	bool ro_read = MLX5_CAP_GEN(mdev, relaxed_ordering_read_pci_enabled);
 
 	MLX5_SET(mkc, mkc, relaxed_ordering_read, ro_pci_enable && ro_read);
 	MLX5_SET(mkc, mkc, relaxed_ordering_write, ro_write);
diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index e4306cd87cd7..b54339a1b1c6 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -1511,7 +1511,7 @@ struct mlx5_ifc_cmd_hca_cap_bits {
 	u8         log_max_eq_sz[0x8];
 	u8         relaxed_ordering_write[0x1];
-	u8         relaxed_ordering_read[0x1];
+	u8         relaxed_ordering_read_pci_enabled[0x1];
 	u8         log_max_mkey[0x6];
 	u8         reserved_at_f0[0x6];
 	u8         terminate_scatter_list_mkey[0x1];
@@ -1727,7 +1727,8 @@ struct mlx5_ifc_cmd_hca_cap_bits {
 	u8         reserved_at_320[0x3];
 	u8         log_max_transport_domain[0x5];
-	u8         reserved_at_328[0x3];
+	u8         reserved_at_328[0x2];
+	u8         relaxed_ordering_read[0x1];
 	u8         log_max_pd[0x5];
 	u8         reserved_at_330[0x9];
 	u8         q_counter_aggregation[0x1];
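After the rename, the two read-RO capability bits carry distinct
semantics; the hypothetical snapshot struct below spells out the
distinction (field names follow the mlx5_ifc.h hunks above):

#include <stdbool.h>

/* Hypothetical snapshot of the two HCA capability bits; in the driver
 * both are read with MLX5_CAP_GEN(). */
struct ro_read_caps {
	/* Old bit, renamed: set only if the device supports RO read AND
	 * RO is currently enabled in PCI config space, so it can go stale
	 * behind the driver's back. */
	bool relaxed_ordering_read_pci_enabled;
	/* New bit: set if the device supports RO read, independent of PCI
	 * config space, so it is stable for the lifetime of the device. */
	bool relaxed_ordering_read;
};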
From patchwork Mon Apr 10 13:07:53 2023
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13206355
From: Leon Romanovsky
To: Jason Gunthorpe
Cc: Avihai Horon, Aya Levin, Eric Dumazet, Jakub Kicinski,
    linux-rdma@vger.kernel.org, Meir Lichtinger, Michael Guralnik,
    netdev@vger.kernel.org, Paolo Abeni, Saeed Mahameed, Shay Drory
Subject: [PATCH mlx5-next 4/4] RDMA/mlx5: Allow relaxed ordering read in VFs and VMs
Date: Mon, 10 Apr 2023 16:07:53 +0300

From: Avihai Horon

According to the PCIe spec, the Enable Relaxed Ordering value in a VF's
PCI config space is wired to 0, and the PF's relaxed ordering (RO)
setting should be applied to the VF. In QEMU (and maybe others), when
assigning VFs, the RO bit in PCI config space is not emulated properly
and is always set to 0. Therefore, pcie_relaxed_ordering_enabled()
always returns 0 for VFs and VMs, and thus MKeys can't be created with
RO read even if the PF supports it.

The pcie_relaxed_ordering_enabled() check was added to avoid a syndrome
when creating an MKey with relaxed ordering (RO) enabled while the
driver's relaxed_ordering_read_pci_enabled HCA capability is out of
sync with FW. With the new relaxed_ordering_read capability this can't
happen, as it is set regardless of the RO value in PCI config space and
thus can't change during runtime.

Hence, to allow RO read in VFs and VMs, use the new HCA capability
relaxed_ordering_read without checking pcie_relaxed_ordering_enabled().
The old capability checks are kept for backward compatibility with
older FWs.

Allowing RO in VFs and VMs is valuable since it can greatly improve
performance on some setups. For example, testing throughput of a VF on
an AMD EPYC 7763 and ConnectX-6 Dx setup showed roughly a 60%
performance improvement.

Signed-off-by: Avihai Horon
Reviewed-by: Shay Drory
Reviewed-by: Aya Levin
Signed-off-by: Leon Romanovsky
Reviewed-by: Jacob Keller
---
 drivers/infiniband/hw/mlx5/mr.c                     | 11 +++++++----
 drivers/infiniband/hw/mlx5/umr.c                    |  3 ++-
 drivers/infiniband/hw/mlx5/umr.h                    |  3 ++-
 drivers/net/ethernet/mellanox/mlx5/core/en_common.c |  7 ++++---
 4 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index a7f0119cc959..1ce48e485c5b 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -72,9 +72,11 @@ static void set_mkc_access_pd_addr_fields(void *mkc, int acc, u64 start_addr,
 	if (acc & IB_ACCESS_RELAXED_ORDERING) {
 		if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_write))
 			MLX5_SET(mkc, mkc, relaxed_ordering_write, 1);
-		if (MLX5_CAP_GEN(dev->mdev,
-				 relaxed_ordering_read_pci_enabled) &&
-		    pcie_relaxed_ordering_enabled(dev->mdev->pdev))
+
+		if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read) ||
+		    (MLX5_CAP_GEN(dev->mdev,
+				  relaxed_ordering_read_pci_enabled) &&
+		     pcie_relaxed_ordering_enabled(dev->mdev->pdev)))
 			MLX5_SET(mkc, mkc, relaxed_ordering_read, 1);
 	}
 
@@ -794,7 +796,8 @@ static int get_unchangeable_access_flags(struct mlx5_ib_dev *dev,
 		ret |= IB_ACCESS_RELAXED_ORDERING;
 
 	if ((access_flags & IB_ACCESS_RELAXED_ORDERING) &&
-	    MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read_pci_enabled) &&
+	    (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read) ||
+	     MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read_pci_enabled)) &&
 	    !MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read_umr))
 		ret |= IB_ACCESS_RELAXED_ORDERING;
 
diff --git a/drivers/infiniband/hw/mlx5/umr.c b/drivers/infiniband/hw/mlx5/umr.c
index c9e176e8ced4..234bf30db731 100644
--- a/drivers/infiniband/hw/mlx5/umr.c
+++ b/drivers/infiniband/hw/mlx5/umr.c
@@ -381,7 +381,8 @@ static void mlx5r_umr_set_access_flags(struct mlx5_ib_dev *dev,
 				       unsigned int access_flags)
 {
 	bool ro_read = (access_flags & IB_ACCESS_RELAXED_ORDERING) &&
-		       pcie_relaxed_ordering_enabled(dev->mdev->pdev);
+		       (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read) ||
+			pcie_relaxed_ordering_enabled(dev->mdev->pdev));
 
 	MLX5_SET(mkc, seg, a, !!(access_flags & IB_ACCESS_REMOTE_ATOMIC));
 	MLX5_SET(mkc, seg, rw, !!(access_flags & IB_ACCESS_REMOTE_WRITE));
diff --git a/drivers/infiniband/hw/mlx5/umr.h b/drivers/infiniband/hw/mlx5/umr.h
index e12ecd7e079c..3799bb758e49 100644
--- a/drivers/infiniband/hw/mlx5/umr.h
+++ b/drivers/infiniband/hw/mlx5/umr.h
@@ -62,7 +62,8 @@ static inline bool mlx5r_umr_can_reconfig(struct mlx5_ib_dev *dev,
 		return false;
 
 	if ((diffs & IB_ACCESS_RELAXED_ORDERING) &&
-	    MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read_pci_enabled) &&
+	    (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read) ||
+	     MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read_pci_enabled)) &&
 	    !MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read_umr))
 		return false;
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
index 3c765a1f91a5..1f90594499c6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
@@ -39,11 +39,12 @@
 void mlx5e_mkey_set_relaxed_ordering(struct mlx5_core_dev *mdev, void *mkc)
 {
-	bool ro_pci_enable = pcie_relaxed_ordering_enabled(mdev->pdev);
 	bool ro_write = MLX5_CAP_GEN(mdev, relaxed_ordering_write);
-	bool ro_read = MLX5_CAP_GEN(mdev, relaxed_ordering_read_pci_enabled);
+	bool ro_read = MLX5_CAP_GEN(mdev, relaxed_ordering_read) ||
+		       (pcie_relaxed_ordering_enabled(mdev->pdev) &&
+			MLX5_CAP_GEN(mdev, relaxed_ordering_read_pci_enabled));
 
-	MLX5_SET(mkc, mkc, relaxed_ordering_read, ro_pci_enable && ro_read);
+	MLX5_SET(mkc, mkc, relaxed_ordering_read, ro_read);
 	MLX5_SET(mkc, mkc, relaxed_ordering_write, ro_write);
 }
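Taken together, the series leaves RO read governed by a single
predicate, applied consistently across mr.c, umr.c/umr.h, and
en_common.c. A minimal sketch, with the capability bits passed as plain
booleans (hypothetical helper mirroring the en_common.c hunk above):

#include <stdbool.h>

static bool ro_read_enabled(bool cap_ro_read, bool cap_ro_read_pci_enabled,
			    bool pci_ro_now)
{
	/* New FW: the PCI-independent capability alone suffices, which is
	 * what enables RO read inside VFs and VMs. Older FW: fall back to
	 * the renamed PCI-dependent capability plus the live PCI RO state. */
	return cap_ro_read || (cap_ro_read_pci_enabled && pci_ro_now);
}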