
[rdma-rc,08/12] IB/mlx5: Wait for all async command completions to complete

Message ID 1477575407-20562-9-git-send-email-leon@kernel.org (mailing list archive)
State Accepted

Commit Message

Leon Romanovsky Oct. 27, 2016, 1:36 p.m. UTC
From: Eli Cohen <eli@mellanox.com>

Before continuing with unload, wait until all pending async mkey creation
requests have completed.

Fixes: e126ba97dba9 ('mlx5: Add driver for Mellanox Connect-IB adapters')
Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
---
 drivers/infiniband/hw/mlx5/mr.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

Comments

Or Gerlitz Oct. 28, 2016, 1:04 p.m. UTC | #1
On Thu, Oct 27, 2016 at 4:36 PM, Leon Romanovsky <leon@kernel.org> wrote:


> +static void wait_for_async_commands(struct mlx5_ib_dev *dev)
> +{
> +       struct mlx5_mr_cache *cache = &dev->cache;
> +       struct mlx5_cache_ent *ent;
> +       int total = 0;
> +       int i;
> +       int j;
> +
> +       for (i = 0; i < MAX_MR_CACHE_ENTRIES; i++) {
> +               ent = &cache->ent[i];
> +               for (j = 0 ; j < 1000; j++) {
> +                       if (!ent->pending)
> +                               break;
> +                       msleep(50);
> +               }

You had another patch in this series that changed a hard-coded constant
into a define, yet this patch adds two new hard-coded constants. All in
all, we're not making progress on the no-hard-coded-constants front...
better decide where you want to go

Patch

diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 4e90124..55e2fee 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -646,6 +646,33 @@  int mlx5_mr_cache_init(struct mlx5_ib_dev *dev)
 	return 0;
 }
 
+static void wait_for_async_commands(struct mlx5_ib_dev *dev)
+{
+	struct mlx5_mr_cache *cache = &dev->cache;
+	struct mlx5_cache_ent *ent;
+	int total = 0;
+	int i;
+	int j;
+
+	for (i = 0; i < MAX_MR_CACHE_ENTRIES; i++) {
+		ent = &cache->ent[i];
+		for (j = 0 ; j < 1000; j++) {
+			if (!ent->pending)
+				break;
+			msleep(50);
+		}
+	}
+	for (i = 0; i < MAX_MR_CACHE_ENTRIES; i++) {
+		ent = &cache->ent[i];
+		total += ent->pending;
+	}
+
+	if (total)
+		mlx5_ib_warn(dev, "aborted while there are %d pending mr requests\n", total);
+	else
+		mlx5_ib_warn(dev, "done with all pending requests\n");
+}
+
 int mlx5_mr_cache_cleanup(struct mlx5_ib_dev *dev)
 {
 	int i;
@@ -659,6 +686,7 @@  int mlx5_mr_cache_cleanup(struct mlx5_ib_dev *dev)
 		clean_keys(dev, i);
 
 	destroy_workqueue(dev->cache.wq);
+	wait_for_async_commands(dev);
 	del_timer_sync(&dev->delay_timer);
 
 	return 0;