[rdma-next,2/4] RDMA/mlx5: Check pcie_relaxed_ordering_enabled() in UMR

Message ID: 8d39eb8317e7bed1a354311a20ae707788fd94ed.1681131553.git.leon@kernel.org
State: Accepted
Series: Allow relaxed ordering read in VFs and VMs

Commit Message

Leon Romanovsky April 10, 2023, 1:07 p.m. UTC
From: Avihai Horon <avihaih@nvidia.com>

The relaxed_ordering_read HCA capability is set only if both the device
supports relaxed ordering (RO) read and RO is enabled in PCI config
space.

RO in PCI config space can change at runtime. This changes the value of
the relaxed_ordering_read HCA capability in FW, but the driver will not
see the change, since it queries the capabilities only once.
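
For reference (illustrative, not part of this patch), the cached copy is
what the driver-side capability macros return, so a check such as the
following keeps reporting the value sampled at init time even after the
PCI config space RO bit has been toggled (the helper name below is made
up for illustration):

	/*
	 * Illustrative sketch: MLX5_CAP_GEN() reads the driver's cached
	 * copy of the FW capabilities, sampled once at device init, so it
	 * does not track later changes to the PCI config space RO bit.
	 */
	static bool mlx5r_ro_read_cached(struct mlx5_ib_dev *dev)
	{
		return !!MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read);
	}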

This can lead to the following scenario:
1. RO in PCI config space is enabled.
2. User creates MKey without RO.
3. RO in PCI config space is disabled.
   As a result, the relaxed_ordering_read HCA capability is turned off
   in FW but remains on in the driver's copy of the capabilities.
4. User requests to reconfigure the MKey with RO via UMR.
5. The driver tries to reconfigure the MKey with RO read although it
   shouldn't (the relaxed_ordering_read HCA capability is actually off).

To fix this, check pcie_relaxed_ordering_enabled() before setting RO
read in UMR.
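
For reference, pcie_relaxed_ordering_enabled() reads the live "Enable
Relaxed Ordering" bit from the PCIe Device Control register on every
call, so it reflects the current PCI config space state rather than the
cached FW capability. A rough paraphrase of the PCI core helper (see
drivers/pci/pci.c):

	bool pcie_relaxed_ordering_enabled(struct pci_dev *dev)
	{
		u16 v;

		/* Read Device Control from the PCIe capability */
		pcie_capability_read_word(dev, PCI_EXP_DEVCTL, &v);

		/* Enable Relaxed Ordering bit, as currently configured */
		return !!(v & PCI_EXP_DEVCTL_RELAX_EN);
	}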

Fixes: 896ec9735336 ("RDMA/mlx5: Set mkey relaxed ordering by UMR with ConnectX-7")
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/hw/mlx5/umr.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

Comments

Jacob Keller April 11, 2023, 11:18 p.m. UTC | #1
On 4/10/2023 6:07 AM, Leon Romanovsky wrote:
> From: Avihai Horon <avihaih@nvidia.com>
> 
> [...]


Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>

Patch

diff --git a/drivers/infiniband/hw/mlx5/umr.c b/drivers/infiniband/hw/mlx5/umr.c
index 55f4e048d947..c9e176e8ced4 100644
--- a/drivers/infiniband/hw/mlx5/umr.c
+++ b/drivers/infiniband/hw/mlx5/umr.c
@@ -380,6 +380,9 @@ static void mlx5r_umr_set_access_flags(struct mlx5_ib_dev *dev,
 				       struct mlx5_mkey_seg *seg,
 				       unsigned int access_flags)
 {
+	bool ro_read = (access_flags & IB_ACCESS_RELAXED_ORDERING) &&
+		       pcie_relaxed_ordering_enabled(dev->mdev->pdev);
+
 	MLX5_SET(mkc, seg, a, !!(access_flags & IB_ACCESS_REMOTE_ATOMIC));
 	MLX5_SET(mkc, seg, rw, !!(access_flags & IB_ACCESS_REMOTE_WRITE));
 	MLX5_SET(mkc, seg, rr, !!(access_flags & IB_ACCESS_REMOTE_READ));
@@ -387,8 +390,7 @@ static void mlx5r_umr_set_access_flags(struct mlx5_ib_dev *dev,
 	MLX5_SET(mkc, seg, lr, 1);
 	MLX5_SET(mkc, seg, relaxed_ordering_write,
 		 !!(access_flags & IB_ACCESS_RELAXED_ORDERING));
-	MLX5_SET(mkc, seg, relaxed_ordering_read,
-		 !!(access_flags & IB_ACCESS_RELAXED_ORDERING));
+	MLX5_SET(mkc, seg, relaxed_ordering_read, ro_read);
 }
 
 int mlx5r_umr_rereg_pd_access(struct mlx5_ib_mr *mr, struct ib_pd *pd,