From patchwork Tue Nov 8 01:35:16 2022
X-Patchwork-Submitter: Marek Vasut
X-Patchwork-Id: 13035678
From: Marek Vasut
To: linux-pm@vger.kernel.org
Cc: Marek Vasut, Adam Ford, Fabio Estevam, Greg Kroah-Hartman, Jacky Bai,
 Kevin Hilman, Laurent Pinchart, Len Brown, Liam Girdwood, Lucas Stach,
 Mark Brown, Martin Kepplinger, Pavel Machek, Peng Fan,
 Pengutronix Kernel Team, Philipp Zabel, "Rafael J. Wysocki", Sascha Hauer,
 Shawn Guo, Shengjiu Wang, Stephen Boyd, Ulf Hansson,
 linux-clk@vger.kernel.org, linux-imx@nxp.com
Subject: [PATCH 2/3] [RFC] soc: imx: gpcv2: Split clock prepare from clock enable in the domain
Date: Tue, 8 Nov 2022 02:35:16 +0100
Message-Id: <20221108013517.749665-2-marex@denx.de>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20221108013517.749665-1-marex@denx.de>
References: <20221108013517.749665-1-marex@denx.de>
X-Mailing-List: linux-pm@vger.kernel.org

It is possible for clk_disable_unused() to trigger a lockdep warning
regarding lock ordering in this driver. This happens when the following
two conditions are met:

A) The clock core clk_disable_unused() triggers the following sequence
   in a driver which also uses a GPCv2 domain:
   - clk_prepare_lock() -> obtains the clock core prepare_lock
   - pm_runtime_get*() -> obtains &blk_ctrl_genpd_lock_class

B) A driver powers up a power domain and triggers the following
   sequence in GPCv2:
   - pm_runtime_get_sync() -> obtains &blk_ctrl_genpd_lock_class
   - clk_bulk_prepare_enable() -> obtains the clock core prepare_lock

This can lead to a deadlock when A) and B) run on separate CPUs.
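Informally, the problematic interleaving looks like this (a sketch
derived from the two sequences above, not an actual lockdep report):

  CPU0 (A: clk_disable_unused)        CPU1 (B: domain power up)
  ----------------------------        -------------------------
  lock(prepare_lock)
                                      lock(&blk_ctrl_genpd_lock_class)
  lock(&blk_ctrl_genpd_lock_class)
                                      lock(prepare_lock)

                     *** DEADLOCK ***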
To avoid the deadlock, split clk_*prepare() from clk_*enable() and call
the former in the power_pre_on() callback, before pm_runtime_get_sync().
The reverse is implemented in the power_off_post() callback in the same
way. This way, the GPCv2 driver always claims the prepare_lock before
blk_ctrl_genpd_lock_class, and the deadlock is avoided.

Signed-off-by: Marek Vasut
---
Cc: Adam Ford
Cc: Fabio Estevam
Cc: Greg Kroah-Hartman
Cc: Jacky Bai
Cc: Kevin Hilman
Cc: Laurent Pinchart
Cc: Len Brown
Cc: Liam Girdwood
Cc: Lucas Stach
Cc: Marek Vasut
Cc: Mark Brown
Cc: Martin Kepplinger
Cc: Pavel Machek
Cc: Peng Fan
Cc: Pengutronix Kernel Team
Cc: Philipp Zabel
Cc: Rafael J. Wysocki
Cc: Sascha Hauer
Cc: Shawn Guo
Cc: Shengjiu Wang
Cc: Stephen Boyd
Cc: Ulf Hansson
Cc: linux-clk@vger.kernel.org
Cc: linux-imx@nxp.com
Cc: linux-pm@vger.kernel.org
To: linux-arm-kernel@lists.infradead.org

Note: for illustration only, a short simplified sketch of this
prepare/enable split (not part of the patch) is appended after the
diff below.
---
 drivers/soc/imx/gpcv2.c | 74 ++++++++++++++++++++++++++++++++++++-----
 1 file changed, 66 insertions(+), 8 deletions(-)

diff --git a/drivers/soc/imx/gpcv2.c b/drivers/soc/imx/gpcv2.c
index 7a47d14fde445..8d27a227ba02d 100644
--- a/drivers/soc/imx/gpcv2.c
+++ b/drivers/soc/imx/gpcv2.c
@@ -298,6 +298,8 @@ struct imx_pgc_domain {
 
         unsigned int pgc_sw_pup_reg;
         unsigned int pgc_sw_pdn_reg;
+
+        int enabled;
 };
 
 struct imx_pgc_domain_data {
@@ -313,6 +315,52 @@ to_imx_pgc_domain(struct generic_pm_domain *genpd)
         return container_of(genpd, struct imx_pgc_domain, genpd);
 }
 
+static int imx_pgc_power_pre_up(struct generic_pm_domain *genpd)
+{
+        struct imx_pgc_domain *domain = to_imx_pgc_domain(genpd);
+        int ret;
+
+        ret = clk_bulk_prepare(domain->num_clks, domain->clks);
+        if (ret)
+                dev_err(domain->dev, "failed to prepare reset clocks\n");
+
+        return ret;
+}
+
+static int imx_pgc_power_post_up(struct generic_pm_domain *genpd)
+{
+        struct imx_pgc_domain *domain = to_imx_pgc_domain(genpd);
+
+        if (!domain->keep_clocks && domain->enabled)
+                clk_bulk_unprepare(domain->num_clks, domain->clks);
+
+        return 0;
+}
+
+static int imx_pgc_power_down_pre(struct generic_pm_domain *genpd)
+{
+        struct imx_pgc_domain *domain = to_imx_pgc_domain(genpd);
+        int ret = 0;
+
+        if (!domain->keep_clocks || !domain->enabled) {
+                ret = clk_bulk_prepare(domain->num_clks, domain->clks);
+                if (ret)
+                        dev_err(domain->dev, "failed to prepare reset clocks\n");
+        }
+
+        return ret;
+}
+
+static int imx_pgc_power_down_post(struct generic_pm_domain *genpd)
+{
+        struct imx_pgc_domain *domain = to_imx_pgc_domain(genpd);
+
+        if (!domain->keep_clocks || !domain->enabled)
+                clk_bulk_unprepare(domain->num_clks, domain->clks);
+
+        return 0;
+}
+
 static int imx_pgc_power_up(struct generic_pm_domain *genpd)
 {
         struct imx_pgc_domain *domain = to_imx_pgc_domain(genpd);
@@ -338,7 +386,7 @@ static int imx_pgc_power_up(struct generic_pm_domain *genpd)
         reset_control_assert(domain->reset);
 
         /* Enable reset clocks for all devices in the domain */
-        ret = clk_bulk_prepare_enable(domain->num_clks, domain->clks);
+        ret = clk_bulk_enable(domain->num_clks, domain->clks);
         if (ret) {
                 dev_err(domain->dev, "failed to enable reset clocks\n");
                 goto out_regulator_disable;
@@ -397,12 +445,14 @@ static int imx_pgc_power_up(struct generic_pm_domain *genpd)
 
         /* Disable reset clocks for all devices in the domain */
         if (!domain->keep_clocks)
-                clk_bulk_disable_unprepare(domain->num_clks, domain->clks);
+                clk_bulk_disable(domain->num_clks, domain->clks);
+
+        domain->enabled++;
 
         return 0;
 
 out_clk_disable:
-        clk_bulk_disable_unprepare(domain->num_clks, domain->clks);
+        clk_bulk_disable(domain->num_clks, domain->clks);
 out_regulator_disable:
         if (!IS_ERR(domain->regulator))
                 regulator_disable(domain->regulator);
@@ -420,7 +470,7 @@ static int imx_pgc_power_down(struct generic_pm_domain *genpd)
 
         /* Enable reset clocks for all devices in the domain */
         if (!domain->keep_clocks) {
-                ret = clk_bulk_prepare_enable(domain->num_clks, domain->clks);
+                ret = clk_bulk_enable(domain->num_clks, domain->clks);
                 if (ret) {
                         dev_err(domain->dev, "failed to enable reset clocks\n");
                         return ret;
@@ -467,7 +517,7 @@ static int imx_pgc_power_down(struct generic_pm_domain *genpd)
         }
 
         /* Disable reset clocks for all devices in the domain */
-        clk_bulk_disable_unprepare(domain->num_clks, domain->clks);
+        clk_bulk_disable(domain->num_clks, domain->clks);
 
         if (!IS_ERR(domain->regulator)) {
                 ret = regulator_disable(domain->regulator);
@@ -479,13 +529,17 @@ static int imx_pgc_power_down(struct generic_pm_domain *genpd)
                 }
         }
 
+        domain->enabled--;
+
         pm_runtime_put_sync_suspend(domain->dev);
 
         return 0;
 
 out_clk_disable:
         if (!domain->keep_clocks)
-                clk_bulk_disable_unprepare(domain->num_clks, domain->clks);
+                clk_bulk_disable(domain->num_clks, domain->clks);
+
+        domain->enabled--;
 
         return ret;
 }
@@ -1514,8 +1568,12 @@ static int imx_gpcv2_probe(struct platform_device *pdev)
                 domain->regmap = regmap;
                 domain->regs = domain_data->pgc_regs;
 
-                domain->genpd.power_on = imx_pgc_power_up;
-                domain->genpd.power_off = imx_pgc_power_down;
+                domain->genpd.power_pre_on = imx_pgc_power_pre_up;
+                domain->genpd.power_on = imx_pgc_power_up;
+                domain->genpd.power_post_on = imx_pgc_power_post_up;
+                domain->genpd.power_off_pre = imx_pgc_power_down_pre;
+                domain->genpd.power_off = imx_pgc_power_down;
+                domain->genpd.power_off_post = imx_pgc_power_down_post;
 
                 pd_pdev->dev.parent = dev;
                 pd_pdev->dev.of_node = np;
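
Appendix (illustration only, as noted before the diff): a minimal
sketch of the general prepare/enable split pattern this patch applies.
It deliberately does not use the GPCv2 or genpd types; "my_domain" and
the my_domain_* helpers are hypothetical placeholders, not part of the
patch or of any kernel API.

/*
 * Sketch: take the (sleeping) clk prepare step, which acquires the clk
 * core prepare_lock, before any genpd/runtime-PM lock is held, and keep
 * only the enable/disable steps in the locked power on/off paths.
 */
#include <linux/clk.h>
#include <linux/device.h>

struct my_domain {
        struct device *dev;
        struct clk_bulk_data *clks;
        int num_clks;
};

/* Runs before the genpd lock / pm_runtime_get_sync(); may sleep. */
static int my_domain_pre_power_on(struct my_domain *pd)
{
        int ret;

        ret = clk_bulk_prepare(pd->num_clks, pd->clks);
        if (ret)
                dev_err(pd->dev, "failed to prepare clocks\n");

        return ret;
}

/* Runs with the genpd lock held; only the enable step remains. */
static int my_domain_power_on(struct my_domain *pd)
{
        int ret;

        ret = clk_bulk_enable(pd->num_clks, pd->clks);
        if (ret)
                dev_err(pd->dev, "failed to enable clocks\n");

        return ret;
}

/* Teardown mirrors the split: disable under the lock... */
static void my_domain_power_off(struct my_domain *pd)
{
        clk_bulk_disable(pd->num_clks, pd->clks);
}

/* ...and unprepare only after the genpd lock has been dropped. */
static void my_domain_post_power_off(struct my_domain *pd)
{
        clk_bulk_unprepare(pd->num_clks, pd->clks);
}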