From patchwork Fri Nov 22 12:43:55 2013
X-Patchwork-Submitter: Ulf Hansson
X-Patchwork-Id: 3222371
From: Ulf Hansson
To: "Rafael J. Wysocki", Len Brown, Pavel Machek, linux-pm@vger.kernel.org
Cc: Greg Kroah-Hartman, linux-pci@vger.kernel.org, linux-usb@vger.kernel.org,
    Ulf Hansson, Kevin Hilman, Alan Stern
Subject: [PATCH] PM / Sleep: Add pm_generic functions to re-use runtime PM callbacks
Date: Fri, 22 Nov 2013 13:43:55 +0100
Message-Id: <1385124235-25484-1-git-send-email-ulf.hansson@linaro.org>
X-Mailer: git-send-email 1.7.9.5

To put devices into low power state during sleep, it sometimes makes sense
at the subsystem level to re-use a device's runtime PM callbacks. The PM
core disables runtime PM at device_suspend_late; after that we can safely
operate on these callbacks. At suspend_late the device is put into low
power state by invoking its runtime_suspend callback, unless the runtime
status is already suspended. At resume_early the state is restored by
invoking the device's runtime_resume callback; soon after, the PM core
re-enables runtime PM before returning from device_resume_early.

The new pm_generic functions are supposed to be used in pairs:
- pm_generic_suspend_late_runtime / pm_generic_resume_early_runtime
- pm_generic_freeze_late_runtime / pm_generic_thaw_early_runtime
- pm_generic_poweroff_late_runtime / pm_generic_restore_early_runtime

Do note that these new pm_generic late and early callbacks work smoothly
both with and without CONFIG_PM_RUNTIME, as long as the runtime PM
callbacks are implemented under CONFIG_PM instead of CONFIG_PM_RUNTIME.

A special thanks to Alan Stern who came up with this idea.
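For illustration only (this sketch is not part of the patch; the "foo"
driver, its callbacks and foo_pm_ops are made-up names), a driver could
wire the helpers up roughly as follows, with its runtime PM callbacks
built under CONFIG_PM as described above and CONFIG_PM_SLEEP assumed to
be enabled:

#ifdef CONFIG_PM
static int foo_runtime_suspend(struct device *dev)
{
	/* Put the device into its low power state. */
	return 0;
}

static int foo_runtime_resume(struct device *dev)
{
	/* Bring the device back to full power. */
	return 0;
}

static const struct dev_pm_ops foo_pm_ops = {
	/*
	 * Assigned directly rather than via SET_RUNTIME_PM_OPS(), which
	 * expands to nothing when CONFIG_PM_RUNTIME is not set.
	 */
	.runtime_suspend = foo_runtime_suspend,
	.runtime_resume	 = foo_runtime_resume,
	/* The new helpers, used in the pairs listed above. */
	.suspend_late	 = pm_generic_suspend_late_runtime,
	.resume_early	 = pm_generic_resume_early_runtime,
	.freeze_late	 = pm_generic_freeze_late_runtime,
	.thaw_early	 = pm_generic_thaw_early_runtime,
	.poweroff_late	 = pm_generic_poweroff_late_runtime,
	.restore_early	 = pm_generic_restore_early_runtime,
};
#endif /* CONFIG_PM */

When none of the device's pm_domain, type, class or bus supplies a
runtime_suspend/runtime_resume callback, the helpers fall back to the
driver's dev_pm_ops, so a driver-level setup like the sketch above works
as well as a subsystem-level one.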
Cc: Kevin Hilman
Cc: Alan Stern
Signed-off-by: Ulf Hansson
---
 drivers/base/power/generic_ops.c | 130 ++++++++++++++++++++++++++++++++++++++
 include/linux/pm.h               |   6 ++
 2 files changed, 136 insertions(+)

diff --git a/drivers/base/power/generic_ops.c b/drivers/base/power/generic_ops.c
index 5ee030a..b8d31f6 100644
--- a/drivers/base/power/generic_ops.c
+++ b/drivers/base/power/generic_ops.c
@@ -93,6 +93,49 @@ int pm_generic_suspend_late(struct device *dev)
 EXPORT_SYMBOL_GPL(pm_generic_suspend_late);
 
 /**
+ * pm_generic_suspend_late_runtime - Generic suspend_late callback for
+ * subsystems that want to use runtime_suspend callbacks at suspend_late.
+ * @dev: Device to suspend.
+ */
+int pm_generic_suspend_late_runtime(struct device *dev)
+{
+	int (*callback)(struct device *);
+	int ret = 0;
+
+	/*
+	 * PM core has disabled runtime PM in device_suspend_late, thus we can
+	 * safely check the device's runtime status and decide whether
+	 * additional actions are needed to put the device into low power state.
+	 * If so, we invoke the device's runtime_suspend callback.
+	 * For the !CONFIG_PM_RUNTIME case, pm_runtime_status_suspended() always
+	 * returns false and therefore the runtime_suspend callback will be
+	 * invoked.
+	 */
+	if (pm_runtime_status_suspended(dev))
+		return 0;
+
+	if (dev->pm_domain)
+		callback = dev->pm_domain->ops.runtime_suspend;
+	else if (dev->type && dev->type->pm)
+		callback = dev->type->pm->runtime_suspend;
+	else if (dev->class && dev->class->pm)
+		callback = dev->class->pm->runtime_suspend;
+	else if (dev->bus && dev->bus->pm)
+		callback = dev->bus->pm->runtime_suspend;
+	else
+		callback = NULL;
+
+	if (!callback && dev->driver && dev->driver->pm)
+		callback = dev->driver->pm->runtime_suspend;
+
+	if (callback)
+		ret = callback(dev);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(pm_generic_suspend_late_runtime);
+
+/**
  * pm_generic_suspend - Generic suspend callback for subsystems.
  * @dev: Device to suspend.
  */
@@ -129,6 +172,17 @@ int pm_generic_freeze_late(struct device *dev)
 EXPORT_SYMBOL_GPL(pm_generic_freeze_late);
 
 /**
+ * pm_generic_freeze_late_runtime - Generic freeze_late callback for
+ * subsystems that want to use runtime_suspend callbacks at freeze_late.
+ * @dev: Device to freeze.
+ */
+int pm_generic_freeze_late_runtime(struct device *dev)
+{
+	return pm_generic_suspend_late_runtime(dev);
+}
+EXPORT_SYMBOL_GPL(pm_generic_freeze_late_runtime);
+
+/**
  * pm_generic_freeze - Generic freeze callback for subsystems.
  * @dev: Device to freeze.
  */
@@ -165,6 +219,17 @@ int pm_generic_poweroff_late(struct device *dev)
 EXPORT_SYMBOL_GPL(pm_generic_poweroff_late);
 
 /**
+ * pm_generic_poweroff_late_runtime - Generic poweroff_late callback for
+ * subsystems that want to use runtime_suspend callbacks at poweroff_late.
+ * @dev: Device to handle.
+ */
+int pm_generic_poweroff_late_runtime(struct device *dev)
+{
+	return pm_generic_suspend_late_runtime(dev);
+}
+EXPORT_SYMBOL_GPL(pm_generic_poweroff_late_runtime);
+
+/**
  * pm_generic_poweroff - Generic poweroff callback for subsystems.
  * @dev: Device to handle.
  */
@@ -201,6 +266,17 @@ int pm_generic_thaw_early(struct device *dev)
 EXPORT_SYMBOL_GPL(pm_generic_thaw_early);
 
 /**
+ * pm_generic_thaw_early_runtime - Generic thaw_early callback for subsystems
+ * that want to use runtime_resume callbacks at thaw_early.
+ * @dev: Device to thaw.
+ */
+int pm_generic_thaw_early_runtime(struct device *dev)
+{
+	return pm_generic_resume_early_runtime(dev);
+}
+EXPORT_SYMBOL_GPL(pm_generic_thaw_early_runtime);
+
+/**
  * pm_generic_thaw - Generic thaw callback for subsystems.
  * @dev: Device to thaw.
  */
@@ -237,6 +313,49 @@ int pm_generic_resume_early(struct device *dev)
 EXPORT_SYMBOL_GPL(pm_generic_resume_early);
 
 /**
+ * pm_generic_resume_early_runtime - Generic resume_early callback for
+ * subsystems that want to use runtime_resume callbacks at resume_early.
+ * @dev: Device to resume.
+ */
+int pm_generic_resume_early_runtime(struct device *dev)
+{
+	int (*callback)(struct device *);
+	int ret = 0;
+
+	/*
+	 * PM core has not yet enabled runtime PM in device_resume_early,
+	 * thus we can safely check the device's runtime status and restore
+	 * the state we had before device_suspend_late. If a restore is
+	 * needed, we invoke the device's runtime_resume callback.
+	 * For the !CONFIG_PM_RUNTIME case, pm_runtime_status_suspended()
+	 * always returns false and therefore the runtime_resume callback
+	 * will be invoked.
+	 */
+	if (pm_runtime_status_suspended(dev))
+		return 0;
+
+	if (dev->pm_domain)
+		callback = dev->pm_domain->ops.runtime_resume;
+	else if (dev->type && dev->type->pm)
+		callback = dev->type->pm->runtime_resume;
+	else if (dev->class && dev->class->pm)
+		callback = dev->class->pm->runtime_resume;
+	else if (dev->bus && dev->bus->pm)
+		callback = dev->bus->pm->runtime_resume;
+	else
+		callback = NULL;
+
+	if (!callback && dev->driver && dev->driver->pm)
+		callback = dev->driver->pm->runtime_resume;
+
+	if (callback)
+		ret = callback(dev);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(pm_generic_resume_early_runtime);
+
+/**
  * pm_generic_resume - Generic resume callback for subsystems.
  * @dev: Device to resume.
  */
@@ -273,6 +392,17 @@ int pm_generic_restore_early(struct device *dev)
 EXPORT_SYMBOL_GPL(pm_generic_restore_early);
 
 /**
+ * pm_generic_restore_early_runtime - Generic restore_early callback for
+ * subsystems that want to use runtime_resume callbacks at restore_early.
+ * @dev: Device to restore.
+ */
+int pm_generic_restore_early_runtime(struct device *dev)
+{
+	return pm_generic_resume_early_runtime(dev);
+}
+EXPORT_SYMBOL_GPL(pm_generic_restore_early_runtime);
+
+/**
  * pm_generic_restore - Generic restore callback for subsystems.
  * @dev: Device to restore.
  */
diff --git a/include/linux/pm.h b/include/linux/pm.h
index a224c7f..c7c2db7 100644
--- a/include/linux/pm.h
+++ b/include/linux/pm.h
@@ -656,22 +656,28 @@ extern void dpm_for_each_dev(void *data, void (*fn)(struct device *, void *));
 
 extern int pm_generic_prepare(struct device *dev);
 extern int pm_generic_suspend_late(struct device *dev);
+extern int pm_generic_suspend_late_runtime(struct device *dev);
 extern int pm_generic_suspend_noirq(struct device *dev);
 extern int pm_generic_suspend(struct device *dev);
 extern int pm_generic_resume_early(struct device *dev);
+extern int pm_generic_resume_early_runtime(struct device *dev);
 extern int pm_generic_resume_noirq(struct device *dev);
 extern int pm_generic_resume(struct device *dev);
 extern int pm_generic_freeze_noirq(struct device *dev);
 extern int pm_generic_freeze_late(struct device *dev);
+extern int pm_generic_freeze_late_runtime(struct device *dev);
 extern int pm_generic_freeze(struct device *dev);
 extern int pm_generic_thaw_noirq(struct device *dev);
 extern int pm_generic_thaw_early(struct device *dev);
+extern int pm_generic_thaw_early_runtime(struct device *dev);
 extern int pm_generic_thaw(struct device *dev);
 extern int pm_generic_restore_noirq(struct device *dev);
 extern int pm_generic_restore_early(struct device *dev);
+extern int pm_generic_restore_early_runtime(struct device *dev);
 extern int pm_generic_restore(struct device *dev);
 extern int pm_generic_poweroff_noirq(struct device *dev);
 extern int pm_generic_poweroff_late(struct device *dev);
+extern int pm_generic_poweroff_late_runtime(struct device *dev);
 extern int pm_generic_poweroff(struct device *dev);
 extern void pm_generic_complete(struct device *dev);