From patchwork Wed Jun 28 14:56:23 2017
X-Patchwork-Submitter: Krzysztof Kozlowski
X-Patchwork-Id: 9814451
From: Krzysztof Kozlowski <krzk@kernel.org>
To: "Rafael J. Wysocki", Kevin Hilman, Ulf Hansson, Len Brown, Pavel Machek,
    Greg Kroah-Hartman, linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Krzysztof Kozlowski
Subject: [RFC v3 8/8] PM / Domains: Add asserts for PM domain locks
Date: Wed, 28 Jun 2017 16:56:23 +0200
Message-Id: <20170628145623.20716-9-krzk@kernel.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170628145623.20716-1-krzk@kernel.org>
References: <20170628145623.20716-1-krzk@kernel.org>
X-Mailing-List: linux-pm@vger.kernel.org

Add lockdep checks for holding the domain lock in a few places where it is
required.  This might expose misuse, even though only file-scope functions
use the lock for now.

Regular lockdep asserts can be discarded entirely by the preprocessor;
however, the domain code uses mixed lock types: a spinlock or a mutex,
chosen per domain.  As a result, these asserts cannot be compiled away
completely.  Instead, each assert always costs at least two pointer
dereferences (p->lock_ops->assert_held) and usually one indirect function
call (assert_held()).
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
---
 drivers/base/power/domain.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 1a47c5ff6a2f..1c3bd7434675 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -40,12 +40,18 @@ static LIST_HEAD(gpd_list);
 static DEFINE_MUTEX(gpd_list_lock);
 
 struct genpd_lock_ops {
+	void (*assert_held)(struct generic_pm_domain *genpd);
 	void (*lock)(struct generic_pm_domain *genpd);
 	void (*lock_nested)(struct generic_pm_domain *genpd, int depth);
 	int (*lock_interruptible)(struct generic_pm_domain *genpd);
 	void (*unlock)(struct generic_pm_domain *genpd);
 };
 
+static void genpd_assert_held_mtx(struct generic_pm_domain *genpd)
+{
+	lockdep_assert_held(&genpd->mlock);
+}
+
 static void genpd_lock_mtx(struct generic_pm_domain *genpd)
 {
 	mutex_lock(&genpd->mlock);
@@ -68,12 +74,18 @@ static void genpd_unlock_mtx(struct generic_pm_domain *genpd)
 }
 
 static const struct genpd_lock_ops genpd_mtx_ops = {
+	.assert_held = genpd_assert_held_mtx,
 	.lock = genpd_lock_mtx,
 	.lock_nested = genpd_lock_nested_mtx,
 	.lock_interruptible = genpd_lock_interruptible_mtx,
 	.unlock = genpd_unlock_mtx,
 };
 
+static void genpd_assert_held_spin(struct generic_pm_domain *genpd)
+{
+	lockdep_assert_held(&genpd->slock);
+}
+
 static void genpd_lock_spin(struct generic_pm_domain *genpd)
 	__acquires(&genpd->slock)
 {
@@ -110,12 +122,14 @@ static void genpd_unlock_spin(struct generic_pm_domain *genpd)
 }
 
 static const struct genpd_lock_ops genpd_spin_ops = {
+	.assert_held = genpd_assert_held_spin,
 	.lock = genpd_lock_spin,
 	.lock_nested = genpd_lock_nested_spin,
 	.lock_interruptible = genpd_lock_interruptible_spin,
 	.unlock = genpd_unlock_spin,
 };
 
+#define genpd_assert_held(p)		p->lock_ops->assert_held(p)
 #define genpd_lock(p)			p->lock_ops->lock(p)
 #define genpd_lock_nested(p, d)		p->lock_ops->lock_nested(p, d)
 #define genpd_lock_interruptible(p)	p->lock_ops->lock_interruptible(p)
@@ -299,6 +313,8 @@ static int genpd_power_off(struct generic_pm_domain *genpd, bool one_dev_on,
 	struct gpd_link *link;
 	unsigned int not_suspended = 0;
 
+	genpd_assert_held(genpd);
+
 	/*
 	 * Do not try to power off the domain in the following situations:
 	 * (1) The domain is already in the "power off" state.
@@ -385,6 +401,8 @@ static int genpd_power_on(struct generic_pm_domain *genpd, unsigned int depth)
 	struct gpd_link *link;
 	int ret = 0;
 
+	genpd_assert_held(genpd);
+
 	if (genpd_status_on(genpd))
 		return 0;
 
@@ -766,6 +784,9 @@ static void genpd_sync_power_off(struct generic_pm_domain *genpd, bool use_lock,
 {
 	struct gpd_link *link;
 
+	if (use_lock)
+		genpd_assert_held(genpd);
+
 	if (!genpd_status_on(genpd) || genpd_is_always_on(genpd))
 		return;
@@ -808,6 +829,9 @@ static void genpd_sync_power_on(struct generic_pm_domain *genpd, bool use_lock,
 {
 	struct gpd_link *link;
 
+	if (use_lock)
+		genpd_assert_held(genpd);
+
 	if (genpd_status_on(genpd))
 		return;