From patchwork Wed Jul 25 22:26:03 2018
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10544875
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche, Ming Lei, Alan Stern
Subject: [PATCH v2 1/5] block: Fix a comment in a header file
Date: Wed, 25 Jul 2018 15:26:03 -0700
Message-Id: <20180725222607.8854-2-bart.vanassche@wdc.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180725222607.8854-1-bart.vanassche@wdc.com>
References: <20180725222607.8854-1-bart.vanassche@wdc.com>

Since the REQ_PREEMPT flag has been renamed into RQF_PREEMPT, update the
comment that refers to that flag.
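QUEUE_FLAG_PREEMPT_ONLY and RQF_PREEMPT work as a pair: while the queue flag
is set, only requests carrying RQF_PREEMPT (allocated with BLK_MQ_REQ_PREEMPT)
are admitted into the queue. A minimal sketch of that gate, loosely modelled
on blk_queue_enter() and not the verbatim kernel code:

#include <linux/blkdev.h>
#include <linux/blk-mq.h>

/*
 * Sketch of the "preempt only" gate controlled by QUEUE_FLAG_PREEMPT_ONLY.
 * Loosely modelled on blk_queue_enter(); illustrative, not the kernel
 * implementation itself.
 */
static bool queue_admits_request(struct request_queue *q,
				 blk_mq_req_flags_t flags)
{
	/*
	 * Requests allocated with BLK_MQ_REQ_PREEMPT become RQF_PREEMPT
	 * requests and are admitted even while the queue is preempt-only.
	 */
	if (flags & BLK_MQ_REQ_PREEMPT)
		return true;

	/* Ordinary requests are only admitted when preempt-only is clear. */
	return !blk_queue_preempt_only(q);
}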
Signed-off-by: Bart Van Assche
Reviewed-by: Johannes Thumshirn
Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Alan Stern
---
 include/linux/blkdev.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 331a6cb8805f..763482037c9b 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -700,7 +700,7 @@ struct request_queue {
 #define QUEUE_FLAG_REGISTERED  26	/* queue has been registered to a disk */
 #define QUEUE_FLAG_SCSI_PASSTHROUGH 27	/* queue supports SCSI commands */
 #define QUEUE_FLAG_QUIESCED    28	/* queue has been quiesced */
-#define QUEUE_FLAG_PREEMPT_ONLY	29	/* only process REQ_PREEMPT requests */
+#define QUEUE_FLAG_PREEMPT_ONLY	29	/* only process RQF_PREEMPT requests */
 
 #define QUEUE_FLAG_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
 				 (1 << QUEUE_FLAG_SAME_COMP) |		\

From patchwork Wed Jul 25 22:26:04 2018
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10544883
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche, Ming Lei, Johannes Thumshirn,
Alan Stern
Subject: [PATCH v2 2/5] block: Move power management functions into new source files
Date: Wed, 25 Jul 2018 15:26:04 -0700
Message-Id: <20180725222607.8854-3-bart.vanassche@wdc.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180725222607.8854-1-bart.vanassche@wdc.com>
References: <20180725222607.8854-1-bart.vanassche@wdc.com>

Move the code for runtime power management from blk-core.c into the new
source file blk-pm.c. Move the declarations of the moved functions from
<linux/blkdev.h> into <linux/blk-pm.h>. For CONFIG_PM=n, leave out the
declarations of the functions that are not used in that mode. This patch not
only reduces the number of #ifdefs in the block layer core code but also
reduces the size of the <linux/blkdev.h> header file and hence should help to
reduce the build time of the Linux kernel if CONFIG_PM is not defined.

Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Johannes Thumshirn
Cc: Alan Stern
---
 block/Makefile         |   1 +
 block/blk-core.c       | 183 ----------------------------------------
 block/blk-pm.c         | 186 +++++++++++++++++++++++++++++++++++++++++
 drivers/scsi/scsi_pm.c |   1 +
 drivers/scsi/sd.c      |   1 +
 drivers/scsi/sr.c      |   1 +
 include/linux/blk-pm.h |  21 +++++
 include/linux/blkdev.h |  23 -----
 8 files changed, 211 insertions(+), 206 deletions(-)
 create mode 100644 block/blk-pm.c
 create mode 100644 include/linux/blk-pm.h

diff --git a/block/Makefile b/block/Makefile
index 572b33f32c07..9dac9393bd4e 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -37,3 +37,4 @@ obj-$(CONFIG_BLK_WBT)	+= blk-wbt.o
 obj-$(CONFIG_BLK_DEBUG_FS)	+= blk-mq-debugfs.o
 obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o
 obj-$(CONFIG_BLK_SED_OPAL)	+= sed-opal.o
+obj-$(CONFIG_PM)	+= blk-pm.o
diff --git a/block/blk-core.c b/block/blk-core.c
index 03a4ea93a5f3..14c28197ea42 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -3745,189 +3745,6 @@ void blk_finish_plug(struct blk_plug *plug)
 }
 EXPORT_SYMBOL(blk_finish_plug);
 
-#ifdef CONFIG_PM
-/**
- * blk_pm_runtime_init - Block layer runtime PM initialization routine
- * @q: the queue of the device
- * @dev: the device the queue belongs to
- *
- * Description:
- *    Initialize runtime-PM-related fields for @q and start auto suspend for
- *    @dev. Drivers that want to take advantage of request-based runtime PM
- *    should call this function after @dev has been initialized, and its
- *    request queue @q has been allocated, and runtime PM for it can not happen
- *    yet(either due to disabled/forbidden or its usage_count > 0). In most
- *    cases, driver should call this function before any I/O has taken place.
- *
- *    This function takes care of setting up using auto suspend for the device,
- *    the autosuspend delay is set to -1 to make runtime suspend impossible
- *    until an updated value is either set by user or by driver. Drivers do
- *    not need to touch other autosuspend settings.
- *
- *    The block layer runtime PM is request based, so only works for drivers
- *    that use request as their IO unit instead of those directly use bio's.
- */ -void blk_pm_runtime_init(struct request_queue *q, struct device *dev) -{ - /* not support for RQF_PM and ->rpm_status in blk-mq yet */ - if (q->mq_ops) - return; - - q->dev = dev; - q->rpm_status = RPM_ACTIVE; - pm_runtime_set_autosuspend_delay(q->dev, -1); - pm_runtime_use_autosuspend(q->dev); -} -EXPORT_SYMBOL(blk_pm_runtime_init); - -/** - * blk_pre_runtime_suspend - Pre runtime suspend check - * @q: the queue of the device - * - * Description: - * This function will check if runtime suspend is allowed for the device - * by examining if there are any requests pending in the queue. If there - * are requests pending, the device can not be runtime suspended; otherwise, - * the queue's status will be updated to SUSPENDING and the driver can - * proceed to suspend the device. - * - * For the not allowed case, we mark last busy for the device so that - * runtime PM core will try to autosuspend it some time later. - * - * This function should be called near the start of the device's - * runtime_suspend callback. - * - * Return: - * 0 - OK to runtime suspend the device - * -EBUSY - Device should not be runtime suspended - */ -int blk_pre_runtime_suspend(struct request_queue *q) -{ - int ret = 0; - - if (!q->dev) - return ret; - - spin_lock_irq(q->queue_lock); - if (q->nr_pending) { - ret = -EBUSY; - pm_runtime_mark_last_busy(q->dev); - } else { - q->rpm_status = RPM_SUSPENDING; - } - spin_unlock_irq(q->queue_lock); - return ret; -} -EXPORT_SYMBOL(blk_pre_runtime_suspend); - -/** - * blk_post_runtime_suspend - Post runtime suspend processing - * @q: the queue of the device - * @err: return value of the device's runtime_suspend function - * - * Description: - * Update the queue's runtime status according to the return value of the - * device's runtime suspend function and mark last busy for the device so - * that PM core will try to auto suspend the device at a later time. - * - * This function should be called near the end of the device's - * runtime_suspend callback. - */ -void blk_post_runtime_suspend(struct request_queue *q, int err) -{ - if (!q->dev) - return; - - spin_lock_irq(q->queue_lock); - if (!err) { - q->rpm_status = RPM_SUSPENDED; - } else { - q->rpm_status = RPM_ACTIVE; - pm_runtime_mark_last_busy(q->dev); - } - spin_unlock_irq(q->queue_lock); -} -EXPORT_SYMBOL(blk_post_runtime_suspend); - -/** - * blk_pre_runtime_resume - Pre runtime resume processing - * @q: the queue of the device - * - * Description: - * Update the queue's runtime status to RESUMING in preparation for the - * runtime resume of the device. - * - * This function should be called near the start of the device's - * runtime_resume callback. - */ -void blk_pre_runtime_resume(struct request_queue *q) -{ - if (!q->dev) - return; - - spin_lock_irq(q->queue_lock); - q->rpm_status = RPM_RESUMING; - spin_unlock_irq(q->queue_lock); -} -EXPORT_SYMBOL(blk_pre_runtime_resume); - -/** - * blk_post_runtime_resume - Post runtime resume processing - * @q: the queue of the device - * @err: return value of the device's runtime_resume function - * - * Description: - * Update the queue's runtime status according to the return value of the - * device's runtime_resume function. If it is successfully resumed, process - * the requests that are queued into the device's queue when it is resuming - * and then mark last busy and initiate autosuspend for it. - * - * This function should be called near the end of the device's - * runtime_resume callback. 
- */ -void blk_post_runtime_resume(struct request_queue *q, int err) -{ - if (!q->dev) - return; - - spin_lock_irq(q->queue_lock); - if (!err) { - q->rpm_status = RPM_ACTIVE; - __blk_run_queue(q); - pm_runtime_mark_last_busy(q->dev); - pm_request_autosuspend(q->dev); - } else { - q->rpm_status = RPM_SUSPENDED; - } - spin_unlock_irq(q->queue_lock); -} -EXPORT_SYMBOL(blk_post_runtime_resume); - -/** - * blk_set_runtime_active - Force runtime status of the queue to be active - * @q: the queue of the device - * - * If the device is left runtime suspended during system suspend the resume - * hook typically resumes the device and corrects runtime status - * accordingly. However, that does not affect the queue runtime PM status - * which is still "suspended". This prevents processing requests from the - * queue. - * - * This function can be used in driver's resume hook to correct queue - * runtime PM status and re-enable peeking requests from the queue. It - * should be called before first request is added to the queue. - */ -void blk_set_runtime_active(struct request_queue *q) -{ - spin_lock_irq(q->queue_lock); - q->rpm_status = RPM_ACTIVE; - pm_runtime_mark_last_busy(q->dev); - pm_request_autosuspend(q->dev); - spin_unlock_irq(q->queue_lock); -} -EXPORT_SYMBOL(blk_set_runtime_active); -#endif - int __init blk_dev_init(void) { BUILD_BUG_ON(REQ_OP_LAST >= (1 << REQ_OP_BITS)); diff --git a/block/blk-pm.c b/block/blk-pm.c new file mode 100644 index 000000000000..08d7222d4757 --- /dev/null +++ b/block/blk-pm.c @@ -0,0 +1,186 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include +#include + +/** + * blk_pm_runtime_init - Block layer runtime PM initialization routine + * @q: the queue of the device + * @dev: the device the queue belongs to + * + * Description: + * Initialize runtime-PM-related fields for @q and start auto suspend for + * @dev. Drivers that want to take advantage of request-based runtime PM + * should call this function after @dev has been initialized, and its + * request queue @q has been allocated, and runtime PM for it can not happen + * yet(either due to disabled/forbidden or its usage_count > 0). In most + * cases, driver should call this function before any I/O has taken place. + * + * This function takes care of setting up using auto suspend for the device, + * the autosuspend delay is set to -1 to make runtime suspend impossible + * until an updated value is either set by user or by driver. Drivers do + * not need to touch other autosuspend settings. + * + * The block layer runtime PM is request based, so only works for drivers + * that use request as their IO unit instead of those directly use bio's. + */ +void blk_pm_runtime_init(struct request_queue *q, struct device *dev) +{ + /* not support for RQF_PM and ->rpm_status in blk-mq yet */ + if (q->mq_ops) + return; + + q->dev = dev; + q->rpm_status = RPM_ACTIVE; + pm_runtime_set_autosuspend_delay(q->dev, -1); + pm_runtime_use_autosuspend(q->dev); +} +EXPORT_SYMBOL(blk_pm_runtime_init); + +/** + * blk_pre_runtime_suspend - Pre runtime suspend check + * @q: the queue of the device + * + * Description: + * This function will check if runtime suspend is allowed for the device + * by examining if there are any requests pending in the queue. If there + * are requests pending, the device can not be runtime suspended; otherwise, + * the queue's status will be updated to SUSPENDING and the driver can + * proceed to suspend the device. 
+ * + * For the not allowed case, we mark last busy for the device so that + * runtime PM core will try to autosuspend it some time later. + * + * This function should be called near the start of the device's + * runtime_suspend callback. + * + * Return: + * 0 - OK to runtime suspend the device + * -EBUSY - Device should not be runtime suspended + */ +int blk_pre_runtime_suspend(struct request_queue *q) +{ + int ret = 0; + + if (!q->dev) + return ret; + + spin_lock_irq(q->queue_lock); + if (q->nr_pending) { + ret = -EBUSY; + pm_runtime_mark_last_busy(q->dev); + } else { + q->rpm_status = RPM_SUSPENDING; + } + spin_unlock_irq(q->queue_lock); + return ret; +} +EXPORT_SYMBOL(blk_pre_runtime_suspend); + +/** + * blk_post_runtime_suspend - Post runtime suspend processing + * @q: the queue of the device + * @err: return value of the device's runtime_suspend function + * + * Description: + * Update the queue's runtime status according to the return value of the + * device's runtime suspend function and mark last busy for the device so + * that PM core will try to auto suspend the device at a later time. + * + * This function should be called near the end of the device's + * runtime_suspend callback. + */ +void blk_post_runtime_suspend(struct request_queue *q, int err) +{ + if (!q->dev) + return; + + spin_lock_irq(q->queue_lock); + if (!err) { + q->rpm_status = RPM_SUSPENDED; + } else { + q->rpm_status = RPM_ACTIVE; + pm_runtime_mark_last_busy(q->dev); + } + spin_unlock_irq(q->queue_lock); +} +EXPORT_SYMBOL(blk_post_runtime_suspend); + +/** + * blk_pre_runtime_resume - Pre runtime resume processing + * @q: the queue of the device + * + * Description: + * Update the queue's runtime status to RESUMING in preparation for the + * runtime resume of the device. + * + * This function should be called near the start of the device's + * runtime_resume callback. + */ +void blk_pre_runtime_resume(struct request_queue *q) +{ + if (!q->dev) + return; + + spin_lock_irq(q->queue_lock); + q->rpm_status = RPM_RESUMING; + spin_unlock_irq(q->queue_lock); +} +EXPORT_SYMBOL(blk_pre_runtime_resume); + +/** + * blk_post_runtime_resume - Post runtime resume processing + * @q: the queue of the device + * @err: return value of the device's runtime_resume function + * + * Description: + * Update the queue's runtime status according to the return value of the + * device's runtime_resume function. If it is successfully resumed, process + * the requests that are queued into the device's queue when it is resuming + * and then mark last busy and initiate autosuspend for it. + * + * This function should be called near the end of the device's + * runtime_resume callback. + */ +void blk_post_runtime_resume(struct request_queue *q, int err) +{ + if (!q->dev) + return; + + spin_lock_irq(q->queue_lock); + if (!err) { + q->rpm_status = RPM_ACTIVE; + __blk_run_queue(q); + pm_runtime_mark_last_busy(q->dev); + pm_request_autosuspend(q->dev); + } else { + q->rpm_status = RPM_SUSPENDED; + } + spin_unlock_irq(q->queue_lock); +} +EXPORT_SYMBOL(blk_post_runtime_resume); + +/** + * blk_set_runtime_active - Force runtime status of the queue to be active + * @q: the queue of the device + * + * If the device is left runtime suspended during system suspend the resume + * hook typically resumes the device and corrects runtime status + * accordingly. However, that does not affect the queue runtime PM status + * which is still "suspended". This prevents processing requests from the + * queue. 
+ * + * This function can be used in driver's resume hook to correct queue + * runtime PM status and re-enable peeking requests from the queue. It + * should be called before first request is added to the queue. + */ +void blk_set_runtime_active(struct request_queue *q) +{ + spin_lock_irq(q->queue_lock); + q->rpm_status = RPM_ACTIVE; + pm_runtime_mark_last_busy(q->dev); + pm_request_autosuspend(q->dev); + spin_unlock_irq(q->queue_lock); +} +EXPORT_SYMBOL(blk_set_runtime_active); diff --git a/drivers/scsi/scsi_pm.c b/drivers/scsi/scsi_pm.c index b44c1bb687a2..a2b4179bfdf7 100644 --- a/drivers/scsi/scsi_pm.c +++ b/drivers/scsi/scsi_pm.c @@ -8,6 +8,7 @@ #include #include #include +#include #include #include diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c index 9421d9877730..e60cd8480a03 100644 --- a/drivers/scsi/sd.c +++ b/drivers/scsi/sd.c @@ -45,6 +45,7 @@ #include #include #include +#include #include #include #include diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c index 3f3cb72e0c0c..de4413e66eca 100644 --- a/drivers/scsi/sr.c +++ b/drivers/scsi/sr.c @@ -43,6 +43,7 @@ #include #include #include +#include #include #include #include diff --git a/include/linux/blk-pm.h b/include/linux/blk-pm.h new file mode 100644 index 000000000000..fe3f4e8efbe9 --- /dev/null +++ b/include/linux/blk-pm.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef _BLK_PM_H_ +#define _BLK_PM_H_ + +/* + * block layer runtime pm functions + */ +#ifdef CONFIG_PM +extern void blk_pm_runtime_init(struct request_queue *q, struct device *dev); +extern int blk_pre_runtime_suspend(struct request_queue *q); +extern void blk_post_runtime_suspend(struct request_queue *q, int err); +extern void blk_pre_runtime_resume(struct request_queue *q); +extern void blk_post_runtime_resume(struct request_queue *q, int err); +extern void blk_set_runtime_active(struct request_queue *q); +#else +static inline void blk_pm_runtime_init(struct request_queue *q, + struct device *dev) {} +#endif + +#endif /* _BLK_PM_H_ */ diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 763482037c9b..a0cf2e352a31 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -1282,29 +1282,6 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id, extern void blk_put_queue(struct request_queue *); extern void blk_set_queue_dying(struct request_queue *); -/* - * block layer runtime pm functions - */ -#ifdef CONFIG_PM -extern void blk_pm_runtime_init(struct request_queue *q, struct device *dev); -extern int blk_pre_runtime_suspend(struct request_queue *q); -extern void blk_post_runtime_suspend(struct request_queue *q, int err); -extern void blk_pre_runtime_resume(struct request_queue *q); -extern void blk_post_runtime_resume(struct request_queue *q, int err); -extern void blk_set_runtime_active(struct request_queue *q); -#else -static inline void blk_pm_runtime_init(struct request_queue *q, - struct device *dev) {} -static inline int blk_pre_runtime_suspend(struct request_queue *q) -{ - return -ENOSYS; -} -static inline void blk_post_runtime_suspend(struct request_queue *q, int err) {} -static inline void blk_pre_runtime_resume(struct request_queue *q) {} -static inline void blk_post_runtime_resume(struct request_queue *q, int err) {} -static inline void blk_set_runtime_active(struct request_queue *q) {} -#endif - /* * blk_plug permits building a queue of related requests by holding the I/O * fragments for a short period. 
This allows merging of sequential requests

From patchwork Wed Jul 25 22:26:05 2018
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10544877
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche, Ming Lei, Johannes Thumshirn, Alan Stern
Subject: [PATCH v2 3/5] block: Serialize queue freezing and blk_pre_runtime_suspend()
Date: Wed, 25 Jul 2018 15:26:05 -0700
Message-Id: <20180725222607.8854-4-bart.vanassche@wdc.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180725222607.8854-1-bart.vanassche@wdc.com>
References: <20180725222607.8854-1-bart.vanassche@wdc.com>

Serialize these operations because the next patch will add code into
blk_pre_runtime_suspend() that should not run concurrently with queue
freezing or unfreezing.
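The serialization is built as an owner-recursive lock out of a spinlock, a
wait queue and a nesting counter (the new request_queue members rpm_lock,
rpm_wq, rpm_owner and rpm_nesting_level). The new helpers are shown here
reformatted for readability; the same code appears in the diff below:

/* Initialize the request queue members used by blk_pm_runtime_lock() and
 * blk_pm_runtime_unlock().
 */
void blk_pm_init(struct request_queue *q)
{
	spin_lock_init(&q->rpm_lock);
	init_waitqueue_head(&q->rpm_wq);
	q->rpm_owner = NULL;
	q->rpm_nesting_level = 0;
}

/* Wait until the lock is free or already held by the current task, then
 * take (or re-take) it. The owner may nest lock/unlock calls.
 */
void blk_pm_runtime_lock(struct request_queue *q)
{
	spin_lock(&q->rpm_lock);
	wait_event_interruptible_locked(q->rpm_wq,
		q->rpm_owner == NULL || q->rpm_owner == current);
	if (q->rpm_owner == NULL)
		q->rpm_owner = current;
	q->rpm_nesting_level++;
	spin_unlock(&q->rpm_lock);
}

/* Drop one nesting level; release the lock and wake waiters when the
 * outermost level is left.
 */
void blk_pm_runtime_unlock(struct request_queue *q)
{
	spin_lock(&q->rpm_lock);
	WARN_ON_ONCE(q->rpm_nesting_level <= 0);
	if (--q->rpm_nesting_level == 0) {
		q->rpm_owner = NULL;
		wake_up(&q->rpm_wq);
	}
	spin_unlock(&q->rpm_lock);
}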
Signed-off-by: Bart Van Assche Cc: Christoph Hellwig Cc: Ming Lei Cc: Johannes Thumshirn Cc: Alan Stern --- block/blk-core.c | 5 +++++ block/blk-mq.c | 3 +++ block/blk-pm.c | 40 ++++++++++++++++++++++++++++++++++++++++ include/linux/blk-pm.h | 9 +++++++++ include/linux/blkdev.h | 4 ++++ 5 files changed, 61 insertions(+) diff --git a/block/blk-core.c b/block/blk-core.c index 14c28197ea42..feac2b4d3b90 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include #include @@ -694,6 +695,7 @@ void blk_set_queue_dying(struct request_queue *q) * prevent I/O from crossing blk_queue_enter(). */ blk_freeze_queue_start(q); + blk_pm_runtime_unlock(q); if (q->mq_ops) blk_mq_wake_waiters(q); @@ -754,6 +756,7 @@ void blk_cleanup_queue(struct request_queue *q) * prevent that q->request_fn() gets invoked after draining finished. */ blk_freeze_queue(q); + blk_pm_runtime_unlock(q); spin_lock_irq(lock); queue_flag_set(QUEUE_FLAG_DEAD, q); spin_unlock_irq(lock); @@ -1043,6 +1046,8 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id, #ifdef CONFIG_BLK_DEV_IO_TRACE mutex_init(&q->blk_trace_mutex); #endif + blk_pm_init(q); + mutex_init(&q->sysfs_lock); spin_lock_init(&q->__queue_lock); diff --git a/block/blk-mq.c b/block/blk-mq.c index c92ce06fd565..8d845872ea02 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -9,6 +9,7 @@ #include #include #include +#include #include #include #include @@ -138,6 +139,7 @@ void blk_freeze_queue_start(struct request_queue *q) { int freeze_depth; + blk_pm_runtime_lock(q); freeze_depth = atomic_inc_return(&q->mq_freeze_depth); if (freeze_depth == 1) { percpu_ref_kill(&q->q_usage_counter); @@ -201,6 +203,7 @@ void blk_mq_unfreeze_queue(struct request_queue *q) percpu_ref_reinit(&q->q_usage_counter); wake_up_all(&q->mq_freeze_wq); } + blk_pm_runtime_unlock(q); } EXPORT_SYMBOL_GPL(blk_mq_unfreeze_queue); diff --git a/block/blk-pm.c b/block/blk-pm.c index 08d7222d4757..7dc9375a2f46 100644 --- a/block/blk-pm.c +++ b/block/blk-pm.c @@ -3,6 +3,41 @@ #include #include #include +#include + +/* + * Initialize the request queue members used by blk_pm_runtime_lock() and + * blk_pm_runtime_unlock(). 
+ */ +void blk_pm_init(struct request_queue *q) +{ + spin_lock_init(&q->rpm_lock); + init_waitqueue_head(&q->rpm_wq); + q->rpm_owner = NULL; + q->rpm_nesting_level = 0; +} + +void blk_pm_runtime_lock(struct request_queue *q) +{ + spin_lock(&q->rpm_lock); + wait_event_interruptible_locked(q->rpm_wq, + q->rpm_owner == NULL || q->rpm_owner == current); + if (q->rpm_owner == NULL) + q->rpm_owner = current; + q->rpm_nesting_level++; + spin_unlock(&q->rpm_lock); +} + +void blk_pm_runtime_unlock(struct request_queue *q) +{ + spin_lock(&q->rpm_lock); + WARN_ON_ONCE(q->rpm_nesting_level <= 0); + if (--q->rpm_nesting_level == 0) { + q->rpm_owner = NULL; + wake_up(&q->rpm_wq); + } + spin_unlock(&q->rpm_lock); +} /** * blk_pm_runtime_init - Block layer runtime PM initialization routine @@ -66,6 +101,8 @@ int blk_pre_runtime_suspend(struct request_queue *q) if (!q->dev) return ret; + blk_pm_runtime_lock(q); + spin_lock_irq(q->queue_lock); if (q->nr_pending) { ret = -EBUSY; @@ -74,6 +111,9 @@ int blk_pre_runtime_suspend(struct request_queue *q) q->rpm_status = RPM_SUSPENDING; } spin_unlock_irq(q->queue_lock); + + blk_pm_runtime_unlock(q); + return ret; } EXPORT_SYMBOL(blk_pre_runtime_suspend); diff --git a/include/linux/blk-pm.h b/include/linux/blk-pm.h index fe3f4e8efbe9..aafcc7877e53 100644 --- a/include/linux/blk-pm.h +++ b/include/linux/blk-pm.h @@ -3,10 +3,16 @@ #ifndef _BLK_PM_H_ #define _BLK_PM_H_ +struct device; +struct request_queue; + /* * block layer runtime pm functions */ #ifdef CONFIG_PM +extern void blk_pm_init(struct request_queue *q); +extern void blk_pm_runtime_lock(struct request_queue *q); +extern void blk_pm_runtime_unlock(struct request_queue *q); extern void blk_pm_runtime_init(struct request_queue *q, struct device *dev); extern int blk_pre_runtime_suspend(struct request_queue *q); extern void blk_post_runtime_suspend(struct request_queue *q, int err); @@ -14,6 +20,9 @@ extern void blk_pre_runtime_resume(struct request_queue *q); extern void blk_post_runtime_resume(struct request_queue *q, int err); extern void blk_set_runtime_active(struct request_queue *q); #else +static inline void blk_pm_init(struct request_queue *q) {} +static inline void blk_pm_runtime_lock(struct request_queue *q) {} +static inline void blk_pm_runtime_unlock(struct request_queue *q) {} static inline void blk_pm_runtime_init(struct request_queue *q, struct device *dev) {} #endif diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index a0cf2e352a31..3a8c20eafe58 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -544,6 +544,10 @@ struct request_queue { struct device *dev; int rpm_status; unsigned int nr_pending; + spinlock_t rpm_lock; + wait_queue_head_t rpm_wq; + struct task_struct *rpm_owner; + int rpm_nesting_level; #endif /* From patchwork Wed Jul 25 22:26:06 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 10544881 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CAEB0A639 for ; Wed, 25 Jul 2018 22:26:11 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B5DA1285CC for ; Wed, 25 Jul 2018 22:26:11 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id AA6352AA00; Wed, 25 Jul 2018 22:26:11 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche, "Martin K. Petersen", Ming Lei, Alan Stern, Johannes Thumshirn
Subject: [PATCH v2 4/5] block, scsi: Rework runtime power management
Date: Wed, 25 Jul 2018 15:26:06 -0700
Message-Id: <20180725222607.8854-5-bart.vanassche@wdc.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180725222607.8854-1-bart.vanassche@wdc.com>
References: <20180725222607.8854-1-bart.vanassche@wdc.com>

Instead of allowing requests that are not power management requests to enter
the queue in runtime suspended status (RPM_SUSPENDED), make the
blk_get_request() caller block. This change fixes a starvation issue: it is
now guaranteed that power management requests will be executed no matter how
many blk_get_request() callers are waiting. Instead of maintaining the
q->nr_pending counter, rely on q->q_usage_counter. Call
pm_runtime_mark_last_busy() every time a request finishes instead of only if
the queue depth drops to zero. Use RQF_PREEMPT to mark power management
requests instead of RQF_PM. This is safe because the power management core
serializes system-wide suspend/resume and runtime power management state
changes.

Signed-off-by: Bart Van Assche
Cc: Martin K.
Petersen Cc: Christoph Hellwig Cc: Ming Lei Cc: Alan Stern Cc: Johannes Thumshirn --- block/blk-core.c | 49 +++++++++++++++------------------------ block/blk-mq-debugfs.c | 1 - block/blk-pm.c | 19 +++++++++++++-- block/elevator.c | 11 +-------- drivers/scsi/sd.c | 4 ++-- drivers/scsi/ufs/ufshcd.c | 10 ++++---- include/linux/blkdev.h | 7 ++---- 7 files changed, 46 insertions(+), 55 deletions(-) diff --git a/block/blk-core.c b/block/blk-core.c index feac2b4d3b90..195a99de7c7e 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -689,6 +689,16 @@ void blk_set_queue_dying(struct request_queue *q) { blk_queue_flag_set(QUEUE_FLAG_DYING, q); +#ifdef CONFIG_PM + /* + * Avoid that runtime power management tries to modify the state of + * q->q_usage_counter after that counter has been transitioned to the + * "dead" state. + */ + if (q->dev) + pm_runtime_dont_use_autosuspend(q->dev); +#endif + /* * When queue DYING flag is set, we need to block new req * entering queue, so we call blk_freeze_queue_start() to @@ -1728,7 +1738,7 @@ EXPORT_SYMBOL_GPL(part_round_stats); #ifdef CONFIG_PM static void blk_pm_put_request(struct request *rq) { - if (rq->q->dev && !(rq->rq_flags & RQF_PM) && !--rq->q->nr_pending) + if (rq->q->dev && !(rq->rq_flags & RQF_PREEMPT)) pm_runtime_mark_last_busy(rq->q->dev); } #else @@ -2745,30 +2755,6 @@ void blk_account_io_done(struct request *req, u64 now) } } -#ifdef CONFIG_PM -/* - * Don't process normal requests when queue is suspended - * or in the process of suspending/resuming - */ -static bool blk_pm_allow_request(struct request *rq) -{ - switch (rq->q->rpm_status) { - case RPM_RESUMING: - case RPM_SUSPENDING: - return rq->rq_flags & RQF_PM; - case RPM_SUSPENDED: - return false; - default: - return true; - } -} -#else -static bool blk_pm_allow_request(struct request *rq) -{ - return true; -} -#endif - void blk_account_io_start(struct request *rq, bool new_io) { struct hd_struct *part; @@ -2814,11 +2800,14 @@ static struct request *elv_next_request(struct request_queue *q) while (1) { list_for_each_entry(rq, &q->queue_head, queuelist) { - if (blk_pm_allow_request(rq)) - return rq; - - if (rq->rq_flags & RQF_SOFTBARRIER) - break; +#ifdef CONFIG_PM + /* + * If a request gets queued in state RPM_SUSPENDED + * then that's a kernel bug. + */ + WARN_ON_ONCE(q->rpm_status == RPM_SUSPENDED); +#endif + return rq; } /* diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c index cb1e6cf7ac48..994bdd41feb2 100644 --- a/block/blk-mq-debugfs.c +++ b/block/blk-mq-debugfs.c @@ -324,7 +324,6 @@ static const char *const rqf_name[] = { RQF_NAME(ELVPRIV), RQF_NAME(IO_STAT), RQF_NAME(ALLOCED), - RQF_NAME(PM), RQF_NAME(HASHED), RQF_NAME(STATS), RQF_NAME(SPECIAL_PAYLOAD), diff --git a/block/blk-pm.c b/block/blk-pm.c index 7dc9375a2f46..9f1130381322 100644 --- a/block/blk-pm.c +++ b/block/blk-pm.c @@ -73,6 +73,13 @@ void blk_pm_runtime_init(struct request_queue *q, struct device *dev) } EXPORT_SYMBOL(blk_pm_runtime_init); +static void blk_unprepare_runtime_suspend(struct request_queue *q) +{ + blk_queue_flag_clear(QUEUE_FLAG_PREEMPT_ONLY, q); + /* Because QUEUE_FLAG_PREEMPT_ONLY has been cleared. 
*/ + wake_up_all(&q->mq_freeze_wq); +} + /** * blk_pre_runtime_suspend - Pre runtime suspend check * @q: the queue of the device @@ -102,9 +109,11 @@ int blk_pre_runtime_suspend(struct request_queue *q) return ret; blk_pm_runtime_lock(q); + blk_set_preempt_only(q); + percpu_ref_switch_to_atomic_sync(&q->q_usage_counter); spin_lock_irq(q->queue_lock); - if (q->nr_pending) { + if (!percpu_ref_is_zero(&q->q_usage_counter)) { ret = -EBUSY; pm_runtime_mark_last_busy(q->dev); } else { @@ -112,6 +121,7 @@ int blk_pre_runtime_suspend(struct request_queue *q) } spin_unlock_irq(q->queue_lock); + percpu_ref_switch_to_percpu(&q->q_usage_counter); blk_pm_runtime_unlock(q); return ret; @@ -144,6 +154,9 @@ void blk_post_runtime_suspend(struct request_queue *q, int err) pm_runtime_mark_last_busy(q->dev); } spin_unlock_irq(q->queue_lock); + + if (err) + blk_unprepare_runtime_suspend(q); } EXPORT_SYMBOL(blk_post_runtime_suspend); @@ -191,13 +204,15 @@ void blk_post_runtime_resume(struct request_queue *q, int err) spin_lock_irq(q->queue_lock); if (!err) { q->rpm_status = RPM_ACTIVE; - __blk_run_queue(q); pm_runtime_mark_last_busy(q->dev); pm_request_autosuspend(q->dev); } else { q->rpm_status = RPM_SUSPENDED; } spin_unlock_irq(q->queue_lock); + + if (!err) + blk_unprepare_runtime_suspend(q); } EXPORT_SYMBOL(blk_post_runtime_resume); diff --git a/block/elevator.c b/block/elevator.c index fa828b5bfd4b..68174953e730 100644 --- a/block/elevator.c +++ b/block/elevator.c @@ -558,20 +558,13 @@ void elv_bio_merged(struct request_queue *q, struct request *rq, } #ifdef CONFIG_PM -static void blk_pm_requeue_request(struct request *rq) -{ - if (rq->q->dev && !(rq->rq_flags & RQF_PM)) - rq->q->nr_pending--; -} - static void blk_pm_add_request(struct request_queue *q, struct request *rq) { - if (q->dev && !(rq->rq_flags & RQF_PM) && q->nr_pending++ == 0 && + if (q->dev && !(rq->rq_flags & RQF_PREEMPT) && (q->rpm_status == RPM_SUSPENDED || q->rpm_status == RPM_SUSPENDING)) pm_request_resume(q->dev); } #else -static inline void blk_pm_requeue_request(struct request *rq) {} static inline void blk_pm_add_request(struct request_queue *q, struct request *rq) { @@ -592,8 +585,6 @@ void elv_requeue_request(struct request_queue *q, struct request *rq) rq->rq_flags &= ~RQF_STARTED; - blk_pm_requeue_request(rq); - __elv_add_request(q, rq, ELEVATOR_INSERT_REQUEUE); } diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c index e60cd8480a03..876401f4764e 100644 --- a/drivers/scsi/sd.c +++ b/drivers/scsi/sd.c @@ -1628,7 +1628,7 @@ static int sd_sync_cache(struct scsi_disk *sdkp, struct scsi_sense_hdr *sshdr) * flush everything. 
*/ res = scsi_execute(sdp, cmd, DMA_NONE, NULL, 0, NULL, sshdr, - timeout, SD_MAX_RETRIES, 0, RQF_PM, NULL); + timeout, SD_MAX_RETRIES, 0, RQF_PREEMPT, NULL); if (res == 0) break; } @@ -3488,7 +3488,7 @@ static int sd_start_stop_device(struct scsi_disk *sdkp, int start) return -ENODEV; res = scsi_execute(sdp, cmd, DMA_NONE, NULL, 0, NULL, &sshdr, - SD_TIMEOUT, SD_MAX_RETRIES, 0, RQF_PM, NULL); + SD_TIMEOUT, SD_MAX_RETRIES, 0, RQF_PREEMPT, NULL); if (res) { sd_print_result(sdkp, "Start/Stop Unit failed", res); if (driver_byte(res) & DRIVER_SENSE) diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c index 397081d320b1..4a16d6e90e65 100644 --- a/drivers/scsi/ufs/ufshcd.c +++ b/drivers/scsi/ufs/ufshcd.c @@ -7219,7 +7219,7 @@ ufshcd_send_request_sense(struct ufs_hba *hba, struct scsi_device *sdp) ret = scsi_execute(sdp, cmd, DMA_FROM_DEVICE, buffer, UFSHCD_REQ_SENSE_SIZE, NULL, NULL, - msecs_to_jiffies(1000), 3, 0, RQF_PM, NULL); + msecs_to_jiffies(1000), 3, 0, RQF_PREEMPT, NULL); if (ret) pr_err("%s: failed with err %d\n", __func__, ret); @@ -7280,12 +7280,12 @@ static int ufshcd_set_dev_pwr_mode(struct ufs_hba *hba, cmd[4] = pwr_mode << 4; /* - * Current function would be generally called from the power management - * callbacks hence set the RQF_PM flag so that it doesn't resume the - * already suspended childs. + * Current function would be generally called from the power + * management callbacks hence set the RQF_PREEMPT flag so that it + * doesn't resume the already suspended childs. */ ret = scsi_execute(sdp, cmd, DMA_NONE, NULL, 0, NULL, &sshdr, - START_STOP_TIMEOUT, 0, 0, RQF_PM, NULL); + START_STOP_TIMEOUT, 0, 0, RQF_PREEMPT, NULL); if (ret) { sdev_printk(KERN_WARNING, sdp, "START_STOP failed for power mode: %d, result %x\n", diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 3a8c20eafe58..4cee277fd1af 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -99,8 +99,8 @@ typedef __u32 __bitwise req_flags_t; #define RQF_MQ_INFLIGHT ((__force req_flags_t)(1 << 6)) /* don't call prep for this one */ #define RQF_DONTPREP ((__force req_flags_t)(1 << 7)) -/* set for "ide_preempt" requests and also for requests for which the SCSI - "quiesce" state must be ignored. */ +/* set for requests that must be processed even if QUEUE_FLAG_PREEMPT_ONLY has + been set, e.g. power management requests and "ide_preempt" requests. 
*/ #define RQF_PREEMPT ((__force req_flags_t)(1 << 8)) /* contains copies of user pages */ #define RQF_COPY_USER ((__force req_flags_t)(1 << 9)) @@ -114,8 +114,6 @@ typedef __u32 __bitwise req_flags_t; #define RQF_IO_STAT ((__force req_flags_t)(1 << 13)) /* request came from our alloc pool */ #define RQF_ALLOCED ((__force req_flags_t)(1 << 14)) -/* runtime pm request */ -#define RQF_PM ((__force req_flags_t)(1 << 15)) /* on IO scheduler merge hash */ #define RQF_HASHED ((__force req_flags_t)(1 << 16)) /* IO stats tracking on */ @@ -543,7 +541,6 @@ struct request_queue { #ifdef CONFIG_PM struct device *dev; int rpm_status; - unsigned int nr_pending; spinlock_t rpm_lock; wait_queue_head_t rpm_wq; struct task_struct *rpm_owner; From patchwork Wed Jul 25 22:26:07 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 10544879 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A68301822 for ; Wed, 25 Jul 2018 22:26:11 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 974F12AA00 for ; Wed, 25 Jul 2018 22:26:11 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 8C0A02AA02; Wed, 25 Jul 2018 22:26:11 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.8 required=2.0 tests=BAYES_00,DKIM_SIGNED, MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI,T_DKIM_INVALID autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1C33C2A9E7 for ; Wed, 25 Jul 2018 22:26:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731358AbeGYXjy (ORCPT ); Wed, 25 Jul 2018 19:39:54 -0400 Received: from esa4.hgst.iphmx.com ([216.71.154.42]:33909 "EHLO esa4.hgst.iphmx.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1731259AbeGYXjy (ORCPT ); Wed, 25 Jul 2018 19:39:54 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1532557570; x=1564093570; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=IoG7BMuxWjw4LIPgNPRDNw5SBSDeyspLnNmx0GuiSPs=; b=bhmARtocL09gV954077l0Gi5y/42tmYtHqJUbbQVoyKAtMHmdO7/0S8E KRuYJPOmJZpNY1p0Txh3gGDLD1BPWL4VoYCpLcjtQn5LTb6CX0a93YYEW 4Eyygx/WSdbVxo+q/Dluhn53SkGh2jG20HCkj9kLftQCD0FzJjruVcv7y JgUapj6zmLk7B5El1sBWmmoqdW4QLQaBkMXirhxyuBZ8+iIzwj++tCKD2 HDaOCymwge6m5ZaPQSMRS559iwfcgRbJW7lELlc0h7o2nQSoo5pEu3kyq jXyNSD0Yeb6Z9HuH7CAMfg2SJxpsCj/N3gdJT3OeNLmB5FZNrayXaShay g==; X-IronPort-AV: E=Sophos;i="5.51,402,1526313600"; d="scan'208";a="84987667" Received: from uls-op-cesaip01.wdc.com (HELO uls-op-cesaep01.wdc.com) ([199.255.45.14]) by ob1.hgst.iphmx.com with ESMTP; 26 Jul 2018 06:26:09 +0800 Received: from uls-op-cesaip02.wdc.com ([10.248.3.37]) by uls-op-cesaep01.wdc.com with ESMTP; 25 Jul 2018 15:14:45 -0700 Received: from thinkpad-bart.sdcorp.global.sandisk.com ([10.111.67.248]) by uls-op-cesaip02.wdc.com with ESMTP; 25 Jul 2018 15:26:09 -0700 From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, Christoph Hellwig , Bart Van Assche , Ming Lei , Alan Stern , Johannes Thumshirn Subject: [PATCH v2 5/5] blk-mq: Enable support for runtime power management Date: Wed, 25 Jul 2018 
15:26:07 -0700
Message-Id: <20180725222607.8854-6-bart.vanassche@wdc.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180725222607.8854-1-bart.vanassche@wdc.com>
References: <20180725222607.8854-1-bart.vanassche@wdc.com>

Now that the blk-mq core processes power management requests (marked with
RQF_PREEMPT) in other states than RPM_ACTIVE, enable runtime power management
for blk-mq.

Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Alan Stern
Cc: Johannes Thumshirn
---
 block/blk-pm.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/block/blk-pm.c b/block/blk-pm.c
index 9f1130381322..d9e64bfb3007 100644
--- a/block/blk-pm.c
+++ b/block/blk-pm.c
@@ -62,10 +62,6 @@ void blk_pm_runtime_unlock(struct request_queue *q)
  */
 void blk_pm_runtime_init(struct request_queue *q, struct device *dev)
 {
-	/* not support for RQF_PM and ->rpm_status in blk-mq yet */
-	if (q->mq_ops)
-		return;
-
 	q->dev = dev;
 	q->rpm_status = RPM_ACTIVE;
 	pm_runtime_set_autosuspend_delay(q->dev, -1);
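Taken together, the series leaves the driver-facing runtime-PM entry points
unchanged. A sketch of how a request-based block driver is expected to wire
them into its runtime-PM callbacks; "mydrv", its fields and the mydrv_*_hw()
functions are hypothetical placeholders, while the blk_*_runtime_* and
pm_runtime_* calls are the real interfaces used by this series:

#include <linux/blk-pm.h>
#include <linux/blkdev.h>
#include <linux/pm_runtime.h>

struct mydrv {
	struct request_queue *queue;
};

static int mydrv_runtime_suspend(struct device *dev)
{
	struct mydrv *drv = dev_get_drvdata(dev);
	int err;

	err = blk_pre_runtime_suspend(drv->queue);
	if (err)
		return err;	/* -EBUSY: requests are still in flight */

	err = mydrv_quiesce_hw(drv);		/* device-specific suspend */
	blk_post_runtime_suspend(drv->queue, err);
	return err;
}

static int mydrv_runtime_resume(struct device *dev)
{
	struct mydrv *drv = dev_get_drvdata(dev);
	int err;

	blk_pre_runtime_resume(drv->queue);
	err = mydrv_wake_hw(drv);		/* device-specific resume */
	blk_post_runtime_resume(drv->queue, err);
	return err;
}

static void mydrv_enable_runtime_pm(struct mydrv *drv, struct device *dev)
{
	/* Called during probe, once the request queue has been allocated. */
	blk_pm_runtime_init(drv->queue, dev);
	/*
	 * blk_pm_runtime_init() sets the autosuspend delay to -1; the driver
	 * or user space picks an actual policy later (5000 ms is only an
	 * example value).
	 */
	pm_runtime_set_autosuspend_delay(dev, 5000);
	pm_runtime_allow(dev);
}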