From patchwork Tue Nov 8 01:35:15 2022
X-Patchwork-Submitter: Marek Vasut
X-Patchwork-Id: 13035679
From: Marek Vasut
To: linux-pm@vger.kernel.org
Subject: [PATCH 1/3] [RFC] PM: domains: Introduce .power_pre/post_on/off callbacks
Date: Tue, 8 Nov 2022 02:35:15 +0100
Message-Id: <20221108013517.749665-1-marex@denx.de>
X-Mailing-List: linux-pm@vger.kernel.org

Currently it is possible that a power domain power on or off claims the
genpd lock first and the clock core prepare_lock second, while another
thread does the reverse, and this triggers a lockdep warning.

Introduce new callbacks, .power_pre/post_on() and .power_off_pre/post(),
which are invoked before genpd_lock() and after genpd_unlock()
respectively when the domain is powered on or off. They are meant to let
drivers claim the clock core prepare_lock via a clk_*prepare() call and
release that lock via a clk_*unprepare() call, to ensure that the clock
and genpd lock ordering is always correct.

Signed-off-by: Marek Vasut
---
Cc: Adam Ford
Cc: Fabio Estevam
Cc: Greg Kroah-Hartman
Cc: Jacky Bai
Cc: Kevin Hilman
Cc: Laurent Pinchart
Cc: Len Brown
Cc: Liam Girdwood
Cc: Lucas Stach
Cc: Marek Vasut
Cc: Mark Brown
Cc: Martin Kepplinger
Cc: Pavel Machek
Cc: Peng Fan
Cc: Pengutronix Kernel Team
Cc: Philipp Zabel
Cc: Rafael J. Wysocki
Cc: Sascha Hauer
Cc: Shawn Guo
Cc: Shengjiu Wang
Cc: Stephen Boyd
Cc: Ulf Hansson
Cc: linux-clk@vger.kernel.org
Cc: linux-imx@nxp.com
Cc: linux-pm@vger.kernel.org
To: linux-arm-kernel@lists.infradead.org
---
 drivers/base/power/domain.c | 103 ++++++++++++++++++++++++++++++++----
 include/linux/pm_domain.h   |   4 ++
 2 files changed, 97 insertions(+), 10 deletions(-)

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 6471b559230e9..df2a93d0674e4 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -494,6 +494,22 @@ void dev_pm_genpd_set_next_wakeup(struct device *dev, ktime_t next)
 }
 EXPORT_SYMBOL_GPL(dev_pm_genpd_set_next_wakeup);
 
+static int genpd_power_pre_on(struct generic_pm_domain *genpd)
+{
+	if (!genpd->power_pre_on)
+		return 0;
+
+	return genpd->power_pre_on(genpd);
+}
+
+static int genpd_power_post_on(struct generic_pm_domain *genpd)
+{
+	if (!genpd->power_post_on)
+		return 0;
+
+	return genpd->power_post_on(genpd);
+}
+
 static int _genpd_power_on(struct generic_pm_domain *genpd, bool timed)
 {
 	unsigned int state_idx = genpd->state_idx;
@@ -544,6 +560,22 @@ static int _genpd_power_on(struct generic_pm_domain *genpd, bool timed)
 	return ret;
 }
 
+static int genpd_power_off_pre(struct generic_pm_domain *genpd)
+{
+	if (!genpd->power_off_pre)
+		return 0;
+
+	return genpd->power_off_pre(genpd);
+}
+
+static int genpd_power_off_post(struct generic_pm_domain *genpd)
+{
+	if (!genpd->power_off_post)
+		return 0;
+
+	return genpd->power_off_post(genpd);
+}
+
 static int _genpd_power_off(struct generic_pm_domain *genpd, bool timed)
 {
 	unsigned int state_idx = genpd->state_idx;
@@ -816,12 +848,18 @@ static int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
 static void genpd_power_off_work_fn(struct work_struct *work)
 {
 	struct generic_pm_domain *genpd;
+	int ret;
 
 	genpd = container_of(work, struct generic_pm_domain, power_off_work);
 
+	ret = genpd_power_off_pre(genpd);
+	if (ret)
+		return;
 	genpd_lock(genpd);
 	genpd_power_off(genpd, false, 0);
 	genpd_unlock(genpd);
+	ret = genpd_power_off_post(genpd);
+	WARN_ON_ONCE(ret);
 }
 
 /**
@@ -938,12 +976,14 @@ static int genpd_runtime_suspend(struct device *dev)
 	if (irq_safe_dev_in_sleep_domain(dev, genpd))
 		return 0;
 
+	ret = genpd_power_off_pre(genpd);
+	if (ret)
+		return ret;
 	genpd_lock(genpd);
 	gpd_data->rpm_pstate = genpd_drop_performance_state(dev);
 	genpd_power_off(genpd, true, 0);
 	genpd_unlock(genpd);
-
-	return 0;
+	return genpd_power_off_post(genpd);
 }
 
 /**
@@ -977,12 +1017,21 @@ static int genpd_runtime_resume(struct device *dev)
 	if (irq_safe_dev_in_sleep_domain(dev, genpd))
 		goto out;
 
+	ret = genpd_power_pre_on(genpd);
+	if (ret)
+		return ret;
 	genpd_lock(genpd);
 	ret = genpd_power_on(genpd, 0);
 	if (!ret)
 		genpd_restore_performance_state(dev, gpd_data->rpm_pstate);
 	genpd_unlock(genpd);
+	if (ret) {
+		genpd_power_post_on(genpd);
+		return ret;
+	}
+
+	ret = genpd_power_post_on(genpd);
 	if (ret)
 		return ret;
 
@@ -1017,10 +1066,13 @@ static int genpd_runtime_resume(struct device *dev)
 	genpd_stop_dev(genpd, dev);
 err_poweroff:
 	if (!pm_runtime_is_irq_safe(dev) || genpd_is_irq_safe(genpd)) {
-		genpd_lock(genpd);
-		gpd_data->rpm_pstate = genpd_drop_performance_state(dev);
-		genpd_power_off(genpd, true, 0);
-		genpd_unlock(genpd);
+		if (!genpd_power_off_pre(genpd)) {
+			genpd_lock(genpd);
+			gpd_data->rpm_pstate = genpd_drop_performance_state(dev);
+			genpd_power_off(genpd, true, 0);
+			genpd_unlock(genpd);
+			genpd_power_off_post(genpd);
+		}
 	}
 
 	return ret;
@@ -1225,12 +1277,14 @@ static int genpd_finish_suspend(struct device *dev, bool poweroff)
 		}
 	}
 
+	ret = genpd_power_off_pre(genpd);
+	if (ret)
+		return ret;
 	genpd_lock(genpd);
 	genpd->suspended_count++;
 	genpd_sync_power_off(genpd, true, 0);
 	genpd_unlock(genpd);
-
-	return 0;
+	return genpd_power_off_post(genpd);
 }
 
 /**
@@ -1267,10 +1321,16 @@ static int genpd_resume_noirq(struct device *dev)
 	if (device_wakeup_path(dev) && genpd_is_active_wakeup(genpd))
 		return pm_generic_resume_noirq(dev);
 
+	ret = genpd_power_pre_on(genpd);
+	if (ret)
+		return ret;
 	genpd_lock(genpd);
 	genpd_sync_power_on(genpd, true, 0);
 	genpd->suspended_count--;
 	genpd_unlock(genpd);
+	ret = genpd_power_post_on(genpd);
+	if (ret)
+		return ret;
 
 	if (genpd->dev_ops.stop && genpd->dev_ops.start &&
 	    !pm_runtime_status_suspended(dev)) {
@@ -1378,6 +1438,9 @@ static int genpd_restore_noirq(struct device *dev)
 	 * At this point suspended_count == 0 means we are being run for the
 	 * first time for the given domain in the present cycle.
 	 */
+	ret = genpd_power_pre_on(genpd);
+	if (ret)
+		return ret;
 	genpd_lock(genpd);
 	if (genpd->suspended_count++ == 0) {
 		/*
@@ -1390,6 +1453,9 @@ static int genpd_restore_noirq(struct device *dev)
 	genpd_sync_power_on(genpd, true, 0);
 	genpd_unlock(genpd);
+	ret = genpd_power_post_on(genpd);
+	if (ret)
+		return ret;
 
 	if (genpd->dev_ops.stop && genpd->dev_ops.start &&
 	    !pm_runtime_status_suspended(dev)) {
@@ -1413,6 +1479,7 @@ static int genpd_restore_noirq(struct device *dev)
 static void genpd_complete(struct device *dev)
 {
 	struct generic_pm_domain *genpd;
+	int ret;
 
 	dev_dbg(dev, "%s()\n", __func__);
 
@@ -1435,6 +1502,7 @@ static void genpd_switch_state(struct device *dev, bool suspend)
 {
 	struct generic_pm_domain *genpd;
 	bool use_lock;
+	int ret;
 
 	genpd = dev_to_genpd_safe(dev);
 	if (!genpd)
@@ -1442,8 +1510,13 @@ static void genpd_switch_state(struct device *dev, bool suspend)
 
 	use_lock = genpd_is_irq_safe(genpd);
 
-	if (use_lock)
+	if (use_lock) {
+		ret = suspend ? genpd_power_off_pre(genpd) :
+				genpd_power_pre_on(genpd);
+		if (ret)
+			return;
 		genpd_lock(genpd);
+	}
 
 	if (suspend) {
 		genpd->suspended_count++;
@@ -1453,8 +1526,12 @@ static void genpd_switch_state(struct device *dev, bool suspend)
 		genpd->suspended_count--;
 	}
 
-	if (use_lock)
+	if (use_lock) {
 		genpd_unlock(genpd);
+		ret = suspend ? genpd_power_off_post(genpd) :
+				genpd_power_post_on(genpd);
+		WARN_ON_ONCE(ret);
+	}
 }
 
 /**
@@ -2750,9 +2827,15 @@ static int __genpd_dev_pm_attach(struct device *dev, struct device *base_dev,
 		dev->pm_domain->sync = genpd_dev_pm_sync;
 
 		if (power_on) {
+			ret = genpd_power_pre_on(pd);
+			if (ret)
+				return ret;
 			genpd_lock(pd);
 			ret = genpd_power_on(pd, 0);
 			genpd_unlock(pd);
+			ret = genpd_power_post_on(pd);
+			if (ret)
+				return ret;
 		}
 
 		if (ret) {
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index ebc3516980907..3cf231a27cb1b 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -134,8 +134,12 @@ struct generic_pm_domain {
 	unsigned int prepared_count;	/* Suspend counter of prepared devices */
 	unsigned int performance_state;	/* Aggregated max performance state */
 	cpumask_var_t cpus;		/* A cpumask of the attached CPUs */
+	int (*power_off_pre)(struct generic_pm_domain *domain);
 	int (*power_off)(struct generic_pm_domain *domain);
+	int (*power_off_post)(struct generic_pm_domain *domain);
+	int (*power_pre_on)(struct generic_pm_domain *domain);
 	int (*power_on)(struct generic_pm_domain *domain);
+	int (*power_post_on)(struct generic_pm_domain *domain);
 	struct raw_notifier_head power_notifiers; /* Power on/off notifiers */
 	struct opp_table *opp_table;	/* OPP table of the genpd */
 	unsigned int (*opp_to_performance_state)(struct generic_pm_domain *genpd,

From patchwork Tue Nov 8 01:35:16 2022
X-Patchwork-Submitter: Marek Vasut
X-Patchwork-Id: 13035678
From: Marek Vasut
To: linux-pm@vger.kernel.org
Subject: [PATCH 2/3] [RFC] soc: imx: gpcv2: Split clock prepare from clock enable in the domain
Date: Tue, 8 Nov 2022 02:35:16 +0100
Message-Id: <20221108013517.749665-2-marex@denx.de>
In-Reply-To: <20221108013517.749665-1-marex@denx.de>
References: <20221108013517.749665-1-marex@denx.de>
X-Mailing-List: linux-pm@vger.kernel.org

It is possible for clk_disable_unused() to trigger a lockdep warning
regarding lock ordering in this driver. This happens when the following
conditions are met:

A) clock core clk_disable_unused() triggers the following sequence in a
   driver which also uses a GPCv2 domain:
   - clk_prepare_lock()  -> obtains clock core prepare_lock
   - pm_runtime_get*()   -> obtains &blk_ctrl_genpd_lock_class

B) a driver powers up a power domain and triggers the following sequence
   in GPCv2:
   - pm_runtime_get_sync()       -> obtains &blk_ctrl_genpd_lock_class
   - clk_bulk_prepare_enable()   -> obtains clock core prepare_lock

This can lead to a deadlock when A and B run on separate CPUs. To avoid
the deadlock, split clk_*prepare() from clk_*enable() and call the former
in the power_pre_on() callback, before pm_runtime_get_sync(). The reverse
is implemented in the power_off_post() callback in the same way. This
way, the GPCv2 driver always claims the prepare_lock before
blk_ctrl_genpd_lock_class, and the deadlock is avoided.

Signed-off-by: Marek Vasut
---
Cc: Adam Ford
Cc: Fabio Estevam
Cc: Greg Kroah-Hartman
Cc: Jacky Bai
Cc: Kevin Hilman
Cc: Laurent Pinchart
Cc: Len Brown
Cc: Liam Girdwood
Cc: Lucas Stach
Cc: Marek Vasut
Cc: Mark Brown
Cc: Martin Kepplinger
Cc: Pavel Machek
Cc: Peng Fan
Cc: Pengutronix Kernel Team
Cc: Philipp Zabel
Cc: Rafael J. Wysocki
Cc: Sascha Hauer
Cc: Shawn Guo
Cc: Shengjiu Wang
Cc: Stephen Boyd
Cc: Ulf Hansson
Cc: linux-clk@vger.kernel.org
Cc: linux-imx@nxp.com
Cc: linux-pm@vger.kernel.org
To: linux-arm-kernel@lists.infradead.org
---
 drivers/soc/imx/gpcv2.c | 74 ++++++++++++++++++++++++++++++++++++-----
 1 file changed, 66 insertions(+), 8 deletions(-)

diff --git a/drivers/soc/imx/gpcv2.c b/drivers/soc/imx/gpcv2.c
index 7a47d14fde445..8d27a227ba02d 100644
--- a/drivers/soc/imx/gpcv2.c
+++ b/drivers/soc/imx/gpcv2.c
@@ -298,6 +298,8 @@ struct imx_pgc_domain {
 
 	unsigned int pgc_sw_pup_reg;
 	unsigned int pgc_sw_pdn_reg;
+
+	int enabled;
 };
 
 struct imx_pgc_domain_data {
@@ -313,6 +315,52 @@ to_imx_pgc_domain(struct generic_pm_domain *genpd)
 	return container_of(genpd, struct imx_pgc_domain, genpd);
 }
 
+static int imx_pgc_power_pre_up(struct generic_pm_domain *genpd)
+{
+	struct imx_pgc_domain *domain = to_imx_pgc_domain(genpd);
+	int ret;
+
+	ret = clk_bulk_prepare(domain->num_clks, domain->clks);
+	if (ret)
+		dev_err(domain->dev, "failed to prepare reset clocks\n");
+
+	return ret;
+}
+
+static int imx_pgc_power_post_up(struct generic_pm_domain *genpd)
+{
+	struct imx_pgc_domain *domain = to_imx_pgc_domain(genpd);
+
+	if (!domain->keep_clocks && domain->enabled)
+		clk_bulk_unprepare(domain->num_clks, domain->clks);
+
+	return 0;
+}
+
+static int imx_pgc_power_down_pre(struct generic_pm_domain *genpd)
+{
+	struct imx_pgc_domain *domain = to_imx_pgc_domain(genpd);
+	int ret = 0;
+
+	if (!domain->keep_clocks || !domain->enabled) {
+		ret = clk_bulk_prepare(domain->num_clks, domain->clks);
+		if (ret)
+			dev_err(domain->dev, "failed to prepare reset clocks\n");
+	}
+
+	return ret;
+}
+
+static int imx_pgc_power_down_post(struct generic_pm_domain *genpd)
+{
+	struct imx_pgc_domain *domain = to_imx_pgc_domain(genpd);
+
+	if (!domain->keep_clocks || !domain->enabled)
+		clk_bulk_unprepare(domain->num_clks, domain->clks);
+
+	return 0;
+}
+
 static int imx_pgc_power_up(struct generic_pm_domain *genpd)
 {
 	struct imx_pgc_domain *domain = to_imx_pgc_domain(genpd);
@@ -338,7 +386,7 @@ static int imx_pgc_power_up(struct generic_pm_domain *genpd)
 	reset_control_assert(domain->reset);
 
 	/* Enable reset clocks for all devices in the domain */
-	ret = clk_bulk_prepare_enable(domain->num_clks, domain->clks);
+	ret = clk_bulk_enable(domain->num_clks, domain->clks);
 	if (ret) {
 		dev_err(domain->dev, "failed to enable reset clocks\n");
 		goto out_regulator_disable;
@@ -397,12 +445,14 @@ static int imx_pgc_power_up(struct generic_pm_domain *genpd)
 
 	/* Disable reset clocks for all devices in the domain */
 	if (!domain->keep_clocks)
-		clk_bulk_disable_unprepare(domain->num_clks, domain->clks);
+		clk_bulk_disable(domain->num_clks, domain->clks);
+
+	domain->enabled++;
 
 	return 0;
 
 out_clk_disable:
-	clk_bulk_disable_unprepare(domain->num_clks, domain->clks);
+	clk_bulk_disable(domain->num_clks, domain->clks);
 out_regulator_disable:
 	if (!IS_ERR(domain->regulator))
 		regulator_disable(domain->regulator);
@@ -420,7 +470,7 @@ static int imx_pgc_power_down(struct generic_pm_domain *genpd)
 
 	/* Enable reset clocks for all devices in the domain */
 	if (!domain->keep_clocks) {
-		ret = clk_bulk_prepare_enable(domain->num_clks, domain->clks);
+		ret = clk_bulk_enable(domain->num_clks, domain->clks);
 		if (ret) {
 			dev_err(domain->dev, "failed to enable reset clocks\n");
 			return ret;
@@ -467,7 +517,7 @@ static int imx_pgc_power_down(struct generic_pm_domain *genpd)
 	}
 
 	/* Disable reset clocks for all devices in the domain */
-	clk_bulk_disable_unprepare(domain->num_clks, domain->clks);
+	clk_bulk_disable(domain->num_clks, domain->clks);
 
 	if (!IS_ERR(domain->regulator)) {
 		ret = regulator_disable(domain->regulator);
@@ -479,13 +529,17 @@ static int imx_pgc_power_down(struct generic_pm_domain *genpd)
 		}
 	}
 
+	domain->enabled--;
+
 	pm_runtime_put_sync_suspend(domain->dev);
 
 	return 0;
 
 out_clk_disable:
 	if (!domain->keep_clocks)
-		clk_bulk_disable_unprepare(domain->num_clks, domain->clks);
+		clk_bulk_disable(domain->num_clks, domain->clks);
+
+	domain->enabled--;
 
 	return ret;
 }
@@ -1514,8 +1568,12 @@ static int imx_gpcv2_probe(struct platform_device *pdev)
 		domain->regmap = regmap;
 		domain->regs = domain_data->pgc_regs;
 
-		domain->genpd.power_on = imx_pgc_power_up;
-		domain->genpd.power_off = imx_pgc_power_down;
+		domain->genpd.power_pre_on = imx_pgc_power_pre_up;
+		domain->genpd.power_on = imx_pgc_power_up;
+		domain->genpd.power_post_on = imx_pgc_power_post_up;
+		domain->genpd.power_off_pre = imx_pgc_power_down_pre;
+		domain->genpd.power_off = imx_pgc_power_down;
+		domain->genpd.power_off_post = imx_pgc_power_down_post;
 
 		pd_pdev->dev.parent = dev;
 		pd_pdev->dev.of_node = np;

From patchwork Tue Nov 8 01:35:17 2022
X-Patchwork-Submitter: Marek Vasut
X-Patchwork-Id: 13035680
From: Marek Vasut
To: linux-pm@vger.kernel.org
Subject: [PATCH 3/3] [RFC] soc: imx: imx8m-blk-ctrl: Split clock prepare from clock enable in the domain
Date: Tue, 8 Nov 2022 02:35:17 +0100
Message-Id: <20221108013517.749665-3-marex@denx.de>
In-Reply-To: <20221108013517.749665-1-marex@denx.de>
References: <20221108013517.749665-1-marex@denx.de>
X-Mailing-List: linux-pm@vger.kernel.org

It is possible for clk_disable_unused() to trigger a lockdep warning
regarding lock ordering in this driver. This happens when the following
conditions are met:

A) clock core clk_disable_unused() triggers the following sequence in a
   driver which also uses a blkctrl domain:
   - clk_prepare_lock()  -> obtains clock core prepare_lock
   - pm_runtime_get*()   -> obtains &blk_ctrl_genpd_lock_class

B) a driver powers up a power domain and triggers the following sequence
   in blkctrl:
   - pm_runtime_get_sync()       -> obtains &blk_ctrl_genpd_lock_class
   - clk_bulk_prepare_enable()   -> obtains clock core prepare_lock

This can lead to a deadlock when A and B run on separate CPUs. To avoid
the deadlock, split clk_*prepare() from clk_*enable() and call the former
in the power_pre_on() callback, before pm_runtime_get_sync(). The reverse
is implemented in the power_off_post() callback in the same way. This
way, the blkctrl driver always claims the prepare_lock before
blk_ctrl_genpd_lock_class, and the deadlock is avoided.

Signed-off-by: Marek Vasut
---
Cc: Adam Ford
Cc: Fabio Estevam
Cc: Greg Kroah-Hartman
Cc: Jacky Bai
Cc: Kevin Hilman
Cc: Laurent Pinchart
Cc: Len Brown
Cc: Liam Girdwood
Cc: Lucas Stach
Cc: Marek Vasut
Cc: Mark Brown
Cc: Martin Kepplinger
Cc: Pavel Machek
Cc: Peng Fan
Cc: Pengutronix Kernel Team
Cc: Philipp Zabel
Cc: Rafael J. Wysocki
Cc: Sascha Hauer
Cc: Shawn Guo
Cc: Shengjiu Wang
Cc: Stephen Boyd
Cc: Ulf Hansson
Cc: linux-clk@vger.kernel.org
Cc: linux-imx@nxp.com
Cc: linux-pm@vger.kernel.org
To: linux-arm-kernel@lists.infradead.org
---
 drivers/soc/imx/imx8mp-blk-ctrl.c | 38 +++++++++++++++++++++++++++----
 1 file changed, 33 insertions(+), 5 deletions(-)

diff --git a/drivers/soc/imx/imx8mp-blk-ctrl.c b/drivers/soc/imx/imx8mp-blk-ctrl.c
index ca4366e264783..844039d4e6bd2 100644
--- a/drivers/soc/imx/imx8mp-blk-ctrl.c
+++ b/drivers/soc/imx/imx8mp-blk-ctrl.c
@@ -408,6 +408,30 @@ static const struct imx8mp_blk_ctrl_data imx8mp_hdmi_blk_ctl_dev_data = {
 	.num_domains = ARRAY_SIZE(imx8mp_hdmi_domain_data),
 };
 
+static int imx8mp_blk_ctrl_power_pre_on(struct generic_pm_domain *genpd)
+{
+	struct imx8mp_blk_ctrl_domain *domain = to_imx8mp_blk_ctrl_domain(genpd);
+	const struct imx8mp_blk_ctrl_domain_data *data = domain->data;
+	struct imx8mp_blk_ctrl *bc = domain->bc;
+	int ret;
+
+	ret = clk_bulk_prepare(data->num_clks, domain->clks);
+	if (ret)
+		dev_err(bc->dev, "failed to enable clocks\n");
+
+	return ret;
+}
+
+static int imx8mp_blk_ctrl_power_off_post(struct generic_pm_domain *genpd)
+{
+	struct imx8mp_blk_ctrl_domain *domain = to_imx8mp_blk_ctrl_domain(genpd);
+	const struct imx8mp_blk_ctrl_domain_data *data = domain->data;
+
+	clk_bulk_unprepare(data->num_clks, domain->clks);
+
+	return 0;
+}
+
 static int imx8mp_blk_ctrl_power_on(struct generic_pm_domain *genpd)
 {
 	struct imx8mp_blk_ctrl_domain *domain = to_imx8mp_blk_ctrl_domain(genpd);
@@ -423,7 +447,7 @@ static int imx8mp_blk_ctrl_power_on(struct generic_pm_domain *genpd)
 	}
 
 	/* enable upstream clocks */
-	ret = clk_bulk_prepare_enable(data->num_clks, domain->clks);
+	ret = clk_bulk_enable(data->num_clks, domain->clks);
 	if (ret) {
 		dev_err(bc->dev, "failed to enable clocks\n");
 		goto bus_put;
@@ -443,12 +467,12 @@ static int imx8mp_blk_ctrl_power_on(struct generic_pm_domain *genpd)
 	if (ret)
 		dev_err(bc->dev, "failed to set icc bw\n");
 
-	clk_bulk_disable_unprepare(data->num_clks, domain->clks);
+	clk_bulk_disable(data->num_clks, domain->clks);
 
 	return 0;
 
 clk_disable:
-	clk_bulk_disable_unprepare(data->num_clks, domain->clks);
+	clk_bulk_disable(data->num_clks, domain->clks);
 bus_put:
 	pm_runtime_put(bc->bus_power_dev);
 
@@ -462,7 +486,7 @@ static int imx8mp_blk_ctrl_power_off(struct generic_pm_domain *genpd)
 	struct imx8mp_blk_ctrl *bc = domain->bc;
 	int ret;
 
-	ret = clk_bulk_prepare_enable(data->num_clks, domain->clks);
+	ret = clk_bulk_enable(data->num_clks, domain->clks);
 	if (ret) {
 		dev_err(bc->dev, "failed to enable clocks\n");
 		return ret;
@@ -471,7 +495,7 @@ static int imx8mp_blk_ctrl_power_off(struct generic_pm_domain *genpd)
 	/* domain specific blk-ctrl manipulation */
 	bc->power_off(bc, domain);
 
-	clk_bulk_disable_unprepare(data->num_clks, domain->clks);
+	clk_bulk_disable(data->num_clks, domain->clks);
 
 	/* power down upstream GPC domain */
 	pm_runtime_put(domain->power_dev);
@@ -585,8 +609,12 @@ static int imx8mp_blk_ctrl_probe(struct platform_device *pdev)
 		dev_set_name(domain->power_dev, "%s", data->name);
 
 		domain->genpd.name = data->name;
+		domain->genpd.power_pre_on = imx8mp_blk_ctrl_power_pre_on;
 		domain->genpd.power_on = imx8mp_blk_ctrl_power_on;
+		domain->genpd.power_post_on = imx8mp_blk_ctrl_power_off_post;
+		domain->genpd.power_off_pre = imx8mp_blk_ctrl_power_pre_on;
 		domain->genpd.power_off = imx8mp_blk_ctrl_power_off;
+		domain->genpd.power_off_post = imx8mp_blk_ctrl_power_off_post;
 		domain->bc = bc;
 		domain->id = i;