From patchwork Thu Dec 8 13:45:21 2016
X-Patchwork-Submitter: Ulf Hansson
X-Patchwork-Id: 9466425
X-Patchwork-Delegate: rjw@sisk.pl
From: Ulf Hansson
To: "Rafael J. Wysocki", Ulf Hansson, linux-pm@vger.kernel.org
Wysocki" , Ulf Hansson , linux-pm@vger.kernel.org Cc: Len Brown , Pavel Machek , Kevin Hilman , Geert Uytterhoeven , Lina Iyer , Jon Hunter , Marek Szyprowski , Brian Norris Subject: [PATCH 2/2] PM / Domains: Fix asynchronous execution of *noirq() callbacks Date: Thu, 8 Dec 2016 14:45:21 +0100 Message-Id: <1481204721-11431-3-git-send-email-ulf.hansson@linaro.org> X-Mailer: git-send-email 1.9.1 In-Reply-To: <1481204721-11431-1-git-send-email-ulf.hansson@linaro.org> References: <1481204721-11431-1-git-send-email-ulf.hansson@linaro.org> Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP As the PM core may invoke the *noirq() callbacks asynchronously, the current lock-less approach in genpd doesn't work. The consequence is that we may find concurrent operations racing to power on/off the PM domain. As of now, no immediate errors has been reported, but it's probably only a matter time. Therefor let's fix the problem now before this becomes a real issue, by deploying the locking scheme to the relevant functions. Reported-by: Brian Norris Signed-off-by: Ulf Hansson --- drivers/base/power/domain.c | 62 ++++++++++++++++++++++++--------------------- 1 file changed, 33 insertions(+), 29 deletions(-) diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c index abca652..0a278d0 100644 --- a/drivers/base/power/domain.c +++ b/drivers/base/power/domain.c @@ -727,16 +727,17 @@ static bool genpd_dev_active_wakeup(struct generic_pm_domain *genpd, /** * genpd_sync_power_off - Synchronously power off a PM domain and its masters. * @genpd: PM domain to power off, if possible. + * @depth: nesting count for lockdep. * * Check if the given PM domain can be powered off (during system suspend or * hibernation) and do that if so. Also, in that case propagate to its masters. * * This function is only called in "noirq" and "syscore" stages of system power - * transitions, so it need not acquire locks (all of the "noirq" callbacks are - * executed sequentially, so it is guaranteed that it will never run twice in - * parallel). + * transitions, but since the "noirq" callbacks may be executed asynchronously, + * the lock must be held. */ -static void genpd_sync_power_off(struct generic_pm_domain *genpd) +static void genpd_sync_power_off(struct generic_pm_domain *genpd, + unsigned int depth) { struct gpd_link *link; @@ -755,20 +756,24 @@ static void genpd_sync_power_off(struct generic_pm_domain *genpd) list_for_each_entry(link, &genpd->slave_links, slave_node) { genpd_sd_counter_dec(link->master); - genpd_sync_power_off(link->master); + + genpd_lock_nested(link->master, depth + 1); + genpd_sync_power_off(link->master, depth + 1); + genpd_unlock(link->master); } } /** * genpd_sync_power_on - Synchronously power on a PM domain and its masters. * @genpd: PM domain to power on. + * @depth: nesting count for lockdep. * * This function is only called in "noirq" and "syscore" stages of system power - * transitions, so it need not acquire locks (all of the "noirq" callbacks are - * executed sequentially, so it is guaranteed that it will never run twice in - * parallel). + * transitions, but since the "noirq" callbacks may be executed asynchronously, + * the lock must be held. 
  */
-static void genpd_sync_power_on(struct generic_pm_domain *genpd)
+static void genpd_sync_power_on(struct generic_pm_domain *genpd,
+				unsigned int depth)
 {
 	struct gpd_link *link;
 
@@ -776,8 +781,11 @@ static void genpd_sync_power_on(struct generic_pm_domain *genpd)
 		return;
 
 	list_for_each_entry(link, &genpd->slave_links, slave_node) {
-		genpd_sync_power_on(link->master);
 		genpd_sd_counter_inc(link->master);
+
+		genpd_lock_nested(link->master, depth + 1);
+		genpd_sync_power_on(link->master, depth + 1);
+		genpd_unlock(link->master);
 	}
 
 	_genpd_power_on(genpd, false);
@@ -886,13 +894,10 @@ static int pm_genpd_suspend_noirq(struct device *dev)
 			return ret;
 	}
 
-	/*
-	 * Since all of the "noirq" callbacks are executed sequentially, it is
-	 * guaranteed that this function will never run twice in parallel for
-	 * the same PM domain, so it is not necessary to use locking here.
-	 */
+	genpd_lock(genpd);
 	genpd->suspended_count++;
-	genpd_sync_power_off(genpd);
+	genpd_sync_power_off(genpd, 0);
+	genpd_unlock(genpd);
 
 	return 0;
 }
@@ -917,13 +922,10 @@ static int pm_genpd_resume_noirq(struct device *dev)
 	if (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev))
 		return 0;
 
-	/*
-	 * Since all of the "noirq" callbacks are executed sequentially, it is
-	 * guaranteed that this function will never run twice in parallel for
-	 * the same PM domain, so it is not necessary to use locking here.
-	 */
-	genpd_sync_power_on(genpd);
+	genpd_lock(genpd);
+	genpd_sync_power_on(genpd, 0);
 	genpd->suspended_count--;
+	genpd_unlock(genpd);
 
 	if (genpd->dev_ops.stop && genpd->dev_ops.start)
 		ret = pm_runtime_force_resume(dev);
@@ -1000,13 +1002,10 @@ static int pm_genpd_restore_noirq(struct device *dev)
 		return -EINVAL;
 
 	/*
-	 * Since all of the "noirq" callbacks are executed sequentially, it is
-	 * guaranteed that this function will never run twice in parallel for
-	 * the same PM domain, so it is not necessary to use locking here.
-	 *
 	 * At this point suspended_count == 0 means we are being run for the
 	 * first time for the given domain in the present cycle.
 	 */
+	genpd_lock(genpd);
 	if (genpd->suspended_count++ == 0)
 		/*
 		 * The boot kernel might put the domain into arbitrary state,
@@ -1015,7 +1014,8 @@
 		 */
 		genpd->status = GPD_STATE_POWER_OFF;
 
-	genpd_sync_power_on(genpd);
+	genpd_sync_power_on(genpd, 0);
+	genpd_unlock(genpd);
 
 	if (genpd->dev_ops.stop && genpd->dev_ops.start)
 		ret = pm_runtime_force_resume(dev);
@@ -1068,13 +1068,17 @@ static void genpd_syscore_switch(struct device *dev, bool suspend)
 	if (!pm_genpd_present(genpd))
 		return;
 
+	genpd_lock(genpd);
+
 	if (suspend) {
 		genpd->suspended_count++;
-		genpd_sync_power_off(genpd);
+		genpd_sync_power_off(genpd, 0);
 	} else {
-		genpd_sync_power_on(genpd, 0);
+		genpd_sync_power_on(genpd, 0);
 		genpd->suspended_count--;
 	}
+
+	genpd_unlock(genpd);
 }
 
 void pm_genpd_syscore_poweroff(struct device *dev)
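
For reference, below is a minimal, standalone sketch (not part of the patch)
of the nested-locking idiom the change relies on. The struct and function
names are hypothetical; only mutex_lock_nested() and mutex_unlock() are real
kernel APIs. Each level of the master hierarchy is locked with an increasing
lockdep subclass (the "depth" argument), so lockdep does not flag the
recursive acquisition of same-class locks as a self-deadlock:

#include <linux/mutex.h>

/* Toy stand-in for a PM domain with one parent ("master") domain. */
struct pd {
	struct mutex lock;	/* per-domain lock */
	struct pd *master;	/* parent domain, or NULL at the top */
};

/* Caller holds pd->lock, taken at lockdep subclass 'depth'. */
static void pd_sync_power_off(struct pd *pd, unsigned int depth)
{
	/* ... power off 'pd' itself here ... */

	if (pd->master) {
		/* Nested acquire: bump the lockdep subclass per level. */
		mutex_lock_nested(&pd->master->lock, depth + 1);
		pd_sync_power_off(pd->master, depth + 1);
		mutex_unlock(&pd->master->lock);
	}
}

/* A top-level caller (e.g. a *noirq() callback) starts at subclass 0. */
static void pd_suspend_noirq(struct pd *pd)
{
	mutex_lock(&pd->lock);
	pd_sync_power_off(pd, 0);
	mutex_unlock(&pd->lock);
}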