From patchwork Wed Oct 25 17:49:59 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13436501
X-Patchwork-Delegate: jgg@ziepe.ca
From: Leon Romanovsky
To: Jason Gunthorpe
Cc: Moshe Shemesh, linux-rdma@vger.kernel.org, Michael Guralnik, Shay Drory
Subject: [PATCH rdma-rc] RDMA/mlx5: Fix mkey cache WQ flush
Date: Wed, 25 Oct 2023 20:49:59 +0300
Message-ID: 
X-Mailer: git-send-email 2.41.0
Precedence: bulk
List-ID: 
X-Mailing-List: 
linux-rdma@vger.kernel.org

From: Moshe Shemesh

The cited patch tries to ensure that there are no pending works on the
mkey cache workqueue by disabling the addition of new works and calling
flush_workqueue(). But this workqueue also has delayed works, which may
still be waiting out their delay time before being queued.

Add cancel_delayed_work() for the delayed works that are still waiting
to be queued, so that the subsequent flush_workqueue() flushes all
works which are already queued and running.

Fixes: 374012b00457 ("RDMA/mlx5: Fix mkey cache possible deadlock on cleanup")
Signed-off-by: Moshe Shemesh
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/mr.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 8a3762d9ff58..e0629898c3c0 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -1026,11 +1026,13 @@ void mlx5_mkey_cache_cleanup(struct mlx5_ib_dev *dev)
 		return;
 
 	mutex_lock(&dev->cache.rb_lock);
+	cancel_delayed_work(&dev->cache.remove_ent_dwork);
 	for (node = rb_first(root); node; node = rb_next(node)) {
 		ent = rb_entry(node, struct mlx5_cache_ent, node);
 		xa_lock_irq(&ent->mkeys);
 		ent->disabled = true;
 		xa_unlock_irq(&ent->mkeys);
+		cancel_delayed_work(&ent->dwork);
 	}
 	mutex_unlock(&dev->cache.rb_lock);