From patchwork Fri Jun  2 08:50:42 2023
X-Patchwork-Submitter: Geert Uytterhoeven
X-Patchwork-Id: 13264908
From: Geert Uytterhoeven
To: Michael Turquette, Stephen Boyd, Yoshihiro Shimoda, Magnus Damm,
    Joerg Roedel, Robin Murphy
Cc: Tomasz Figa, Sylwester Nawrocki, Will Deacon, Arnd Bergmann,
    Wolfram Sang, Dejin Zheng, Kai-Heng Feng, Nicholas Piggin,
    Heiko Carstens, Peter Zijlstra, Russell King, John Stultz,
    Thomas Gleixner, Tony Lindgren, Krzysztof Kozlowski, Tero Kristo,
    Ulf Hansson, "Rafael J. Wysocki", Vincent Guittot,
    linux-clk@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-renesas-soc@vger.kernel.org, linux-pm@vger.kernel.org,
    iommu@lists.linux.dev, linux-kernel@vger.kernel.org,
    Geert Uytterhoeven
Subject: [PATCH v3 7/7] iommu/ipmmu-vmsa: Convert to read_poll_timeout_atomic()
Date: Fri, 2 Jun 2023 10:50:42 +0200
X-Mailer: git-send-email 2.34.1

Use read_poll_timeout_atomic() instead of open-coding the same operation.

Signed-off-by: Geert Uytterhoeven
---
v3:
  - New.
---
 drivers/iommu/ipmmu-vmsa.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index 9f64c5c9f5b90ace..3b58a8ea3bdef190 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -14,6 +14,7 @@
 #include <linux/init.h>
 #include <linux/interrupt.h>
 #include <linux/io.h>
+#include <linux/iopoll.h>
 #include <linux/io-pgtable.h>
 #include <linux/iommu.h>
 #include <linux/of.h>
@@ -253,17 +254,13 @@ static void ipmmu_imuctr_write(struct ipmmu_vmsa_device *mmu,
 /* Wait for any pending TLB invalidations to complete */
 static void ipmmu_tlb_sync(struct ipmmu_vmsa_domain *domain)
 {
-	unsigned int count = 0;
+	u32 val;
 
-	while (ipmmu_ctx_read_root(domain, IMCTR) & IMCTR_FLUSH) {
-		cpu_relax();
-		if (++count == TLB_LOOP_TIMEOUT) {
-			dev_err_ratelimited(domain->mmu->dev,
+	if (read_poll_timeout_atomic(ipmmu_ctx_read_root, val,
+				     !(val & IMCTR_FLUSH), 1, TLB_LOOP_TIMEOUT,
+				     false, domain, IMCTR))
+		dev_err_ratelimited(domain->mmu->dev,
 			"TLB sync timed out -- MMU may be deadlocked\n");
-			return;
-		}
-		udelay(1);
-	}
 }
 
 static void ipmmu_tlb_invalidate(struct ipmmu_vmsa_domain *domain)
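
For readers unfamiliar with the helper: read_poll_timeout_atomic(), declared in
include/linux/iopoll.h, re-reads op(args...) into val every delay_us
microseconds until cond evaluates true or timeout_us microseconds have elapsed,
and returns 0 on success or -ETIMEDOUT on timeout. The self-contained userspace
sketch below only models that behaviour for the IMCTR flush poll above; the
fake_read_imctr() register model, the IMCTR_FLUSH bit value and the
clock_gettime()-based timing are illustrative assumptions, not driver code.

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    #define IMCTR_FLUSH      (1U << 1)  /* illustrative bit value */
    #define TLB_LOOP_TIMEOUT 100        /* reused here as a timeout in microseconds */

    /* Illustrative stand-in for ipmmu_ctx_read_root(domain, IMCTR). */
    static uint32_t fake_read_imctr(void)
    {
            static int reads;

            /* Pretend the flush bit clears after a few polls. */
            return (++reads < 5) ? IMCTR_FLUSH : 0;
    }

    static uint64_t now_us(void)
    {
            struct timespec ts;

            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (uint64_t)ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
    }

    /*
     * Userspace model of read_poll_timeout_atomic(op, val, cond, delay_us,
     * timeout_us, delay_before_read, args...): read, test cond, delay, repeat
     * until cond holds or the timeout expires.
     */
    static int poll_flush_done(uint32_t *val, unsigned int delay_us,
                               unsigned int timeout_us)
    {
            uint64_t deadline = now_us() + timeout_us;

            for (;;) {
                    *val = fake_read_imctr();        /* op(args...) */
                    if (!(*val & IMCTR_FLUSH))       /* cond */
                            return 0;
                    if (now_us() > deadline)
                            return -ETIMEDOUT;
                    usleep(delay_us);                /* udelay(delay_us) in the kernel */
            }
    }

    int main(void)
    {
            uint32_t val;

            if (poll_flush_done(&val, 1, TLB_LOOP_TIMEOUT))
                    fprintf(stderr, "TLB sync timed out -- MMU may be deadlocked\n");
            else
                    printf("flush completed, IMCTR=%#x\n", val);
            return 0;
    }

ipmmu_tlb_sync() can be called in atomic context, which is why the _atomic
variant (udelay()-based busy wait) is used here rather than read_poll_timeout(),
which may sleep between reads.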