From patchwork Fri Sep 21 20:31:15 2018
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche, Ming Lei,
    Jianchao Wang, Hannes Reinecke, Johannes Thumshirn, Alan Stern
Subject: [PATCH v10 1/8] block: Move power management code into a new source file
Date: Fri, 21 Sep 2018 13:31:15 -0700
Message-Id: <20180921203122.49743-2-bvanassche@acm.org>
In-Reply-To: <20180921203122.49743-1-bvanassche@acm.org>
References: <20180921203122.49743-1-bvanassche@acm.org>
Move the code for runtime power management from blk-core.c into the new
source file blk-pm.c. Move the corresponding declarations from
<linux/blkdev.h> into <linux/blk-pm.h>. For CONFIG_PM=n, leave out the
declarations of the functions that are not used in that mode. This patch not
only reduces the number of #ifdefs in the block layer core code but also
reduces the size of header file <linux/blkdev.h> and hence should help to
reduce the build time of the Linux kernel if CONFIG_PM is not defined.
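As a usage illustration (not part of this patch), a request-based driver that
opts into block layer runtime PM now picks up these helpers from the new
header; the probe-time helper and the autosuspend delay value below are
hypothetical:

```c
/*
 * Illustrative sketch only: a hypothetical probe-time helper showing how a
 * request-based driver consumes the relocated declarations. With CONFIG_PM=n,
 * blk_pm_runtime_init() remains available as an empty inline stub in
 * <linux/blk-pm.h>, so this compiles either way.
 */
#include <linux/blk-pm.h>
#include <linux/blkdev.h>
#include <linux/pm_runtime.h>

static void example_enable_runtime_pm(struct request_queue *q,
				      struct device *dev)
{
	/* Tie the queue to @dev for request-based runtime PM accounting. */
	blk_pm_runtime_init(q, dev);

	/* Example value: permit autosuspend after 5 seconds of idleness. */
	pm_runtime_set_autosuspend_delay(dev, 5000);
}
```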
Signed-off-by: Bart Van Assche Cc: Christoph Hellwig Cc: Ming Lei Cc: Jianchao Wang Cc: Hannes Reinecke Cc: Johannes Thumshirn Cc: Alan Stern Reviewed-by: Christoph Hellwig --- block/Kconfig | 3 + block/Makefile | 1 + block/blk-core.c | 196 +---------------------------------------- block/blk-pm.c | 188 +++++++++++++++++++++++++++++++++++++++ block/blk-pm.h | 43 +++++++++ block/elevator.c | 22 +---- drivers/scsi/scsi_pm.c | 1 + drivers/scsi/sd.c | 1 + drivers/scsi/sr.c | 1 + include/linux/blk-pm.h | 24 +++++ include/linux/blkdev.h | 23 ----- 11 files changed, 264 insertions(+), 239 deletions(-) create mode 100644 block/blk-pm.c create mode 100644 block/blk-pm.h create mode 100644 include/linux/blk-pm.h diff --git a/block/Kconfig b/block/Kconfig index 1f2469a0123c..85263e7bded6 100644 --- a/block/Kconfig +++ b/block/Kconfig @@ -228,4 +228,7 @@ config BLK_MQ_RDMA depends on BLOCK && INFINIBAND default y +config BLK_PM + def_bool BLOCK && PM + source block/Kconfig.iosched diff --git a/block/Makefile b/block/Makefile index 572b33f32c07..27eac600474f 100644 --- a/block/Makefile +++ b/block/Makefile @@ -37,3 +37,4 @@ obj-$(CONFIG_BLK_WBT) += blk-wbt.o obj-$(CONFIG_BLK_DEBUG_FS) += blk-mq-debugfs.o obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o obj-$(CONFIG_BLK_SED_OPAL) += sed-opal.o +obj-$(CONFIG_BLK_PM) += blk-pm.o diff --git a/block/blk-core.c b/block/blk-core.c index 4dbc93f43b38..6d4dd176bd9d 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -42,6 +42,7 @@ #include "blk.h" #include "blk-mq.h" #include "blk-mq-sched.h" +#include "blk-pm.h" #include "blk-rq-qos.h" #ifdef CONFIG_DEBUG_FS @@ -1726,16 +1727,6 @@ void part_round_stats(struct request_queue *q, int cpu, struct hd_struct *part) } EXPORT_SYMBOL_GPL(part_round_stats); -#ifdef CONFIG_PM -static void blk_pm_put_request(struct request *rq) -{ - if (rq->q->dev && !(rq->rq_flags & RQF_PM) && !--rq->q->nr_pending) - pm_runtime_mark_last_busy(rq->q->dev); -} -#else -static inline void blk_pm_put_request(struct request *rq) {} -#endif - void __blk_put_request(struct request_queue *q, struct request *req) { req_flags_t rq_flags = req->rq_flags; @@ -3757,191 +3748,6 @@ void blk_finish_plug(struct blk_plug *plug) } EXPORT_SYMBOL(blk_finish_plug); -#ifdef CONFIG_PM -/** - * blk_pm_runtime_init - Block layer runtime PM initialization routine - * @q: the queue of the device - * @dev: the device the queue belongs to - * - * Description: - * Initialize runtime-PM-related fields for @q and start auto suspend for - * @dev. Drivers that want to take advantage of request-based runtime PM - * should call this function after @dev has been initialized, and its - * request queue @q has been allocated, and runtime PM for it can not happen - * yet(either due to disabled/forbidden or its usage_count > 0). In most - * cases, driver should call this function before any I/O has taken place. - * - * This function takes care of setting up using auto suspend for the device, - * the autosuspend delay is set to -1 to make runtime suspend impossible - * until an updated value is either set by user or by driver. Drivers do - * not need to touch other autosuspend settings. - * - * The block layer runtime PM is request based, so only works for drivers - * that use request as their IO unit instead of those directly use bio's. 
- */ -void blk_pm_runtime_init(struct request_queue *q, struct device *dev) -{ - /* Don't enable runtime PM for blk-mq until it is ready */ - if (q->mq_ops) { - pm_runtime_disable(dev); - return; - } - - q->dev = dev; - q->rpm_status = RPM_ACTIVE; - pm_runtime_set_autosuspend_delay(q->dev, -1); - pm_runtime_use_autosuspend(q->dev); -} -EXPORT_SYMBOL(blk_pm_runtime_init); - -/** - * blk_pre_runtime_suspend - Pre runtime suspend check - * @q: the queue of the device - * - * Description: - * This function will check if runtime suspend is allowed for the device - * by examining if there are any requests pending in the queue. If there - * are requests pending, the device can not be runtime suspended; otherwise, - * the queue's status will be updated to SUSPENDING and the driver can - * proceed to suspend the device. - * - * For the not allowed case, we mark last busy for the device so that - * runtime PM core will try to autosuspend it some time later. - * - * This function should be called near the start of the device's - * runtime_suspend callback. - * - * Return: - * 0 - OK to runtime suspend the device - * -EBUSY - Device should not be runtime suspended - */ -int blk_pre_runtime_suspend(struct request_queue *q) -{ - int ret = 0; - - if (!q->dev) - return ret; - - spin_lock_irq(q->queue_lock); - if (q->nr_pending) { - ret = -EBUSY; - pm_runtime_mark_last_busy(q->dev); - } else { - q->rpm_status = RPM_SUSPENDING; - } - spin_unlock_irq(q->queue_lock); - return ret; -} -EXPORT_SYMBOL(blk_pre_runtime_suspend); - -/** - * blk_post_runtime_suspend - Post runtime suspend processing - * @q: the queue of the device - * @err: return value of the device's runtime_suspend function - * - * Description: - * Update the queue's runtime status according to the return value of the - * device's runtime suspend function and mark last busy for the device so - * that PM core will try to auto suspend the device at a later time. - * - * This function should be called near the end of the device's - * runtime_suspend callback. - */ -void blk_post_runtime_suspend(struct request_queue *q, int err) -{ - if (!q->dev) - return; - - spin_lock_irq(q->queue_lock); - if (!err) { - q->rpm_status = RPM_SUSPENDED; - } else { - q->rpm_status = RPM_ACTIVE; - pm_runtime_mark_last_busy(q->dev); - } - spin_unlock_irq(q->queue_lock); -} -EXPORT_SYMBOL(blk_post_runtime_suspend); - -/** - * blk_pre_runtime_resume - Pre runtime resume processing - * @q: the queue of the device - * - * Description: - * Update the queue's runtime status to RESUMING in preparation for the - * runtime resume of the device. - * - * This function should be called near the start of the device's - * runtime_resume callback. - */ -void blk_pre_runtime_resume(struct request_queue *q) -{ - if (!q->dev) - return; - - spin_lock_irq(q->queue_lock); - q->rpm_status = RPM_RESUMING; - spin_unlock_irq(q->queue_lock); -} -EXPORT_SYMBOL(blk_pre_runtime_resume); - -/** - * blk_post_runtime_resume - Post runtime resume processing - * @q: the queue of the device - * @err: return value of the device's runtime_resume function - * - * Description: - * Update the queue's runtime status according to the return value of the - * device's runtime_resume function. If it is successfully resumed, process - * the requests that are queued into the device's queue when it is resuming - * and then mark last busy and initiate autosuspend for it. - * - * This function should be called near the end of the device's - * runtime_resume callback. 
- */ -void blk_post_runtime_resume(struct request_queue *q, int err) -{ - if (!q->dev) - return; - - spin_lock_irq(q->queue_lock); - if (!err) { - q->rpm_status = RPM_ACTIVE; - __blk_run_queue(q); - pm_runtime_mark_last_busy(q->dev); - pm_request_autosuspend(q->dev); - } else { - q->rpm_status = RPM_SUSPENDED; - } - spin_unlock_irq(q->queue_lock); -} -EXPORT_SYMBOL(blk_post_runtime_resume); - -/** - * blk_set_runtime_active - Force runtime status of the queue to be active - * @q: the queue of the device - * - * If the device is left runtime suspended during system suspend the resume - * hook typically resumes the device and corrects runtime status - * accordingly. However, that does not affect the queue runtime PM status - * which is still "suspended". This prevents processing requests from the - * queue. - * - * This function can be used in driver's resume hook to correct queue - * runtime PM status and re-enable peeking requests from the queue. It - * should be called before first request is added to the queue. - */ -void blk_set_runtime_active(struct request_queue *q) -{ - spin_lock_irq(q->queue_lock); - q->rpm_status = RPM_ACTIVE; - pm_runtime_mark_last_busy(q->dev); - pm_request_autosuspend(q->dev); - spin_unlock_irq(q->queue_lock); -} -EXPORT_SYMBOL(blk_set_runtime_active); -#endif - int __init blk_dev_init(void) { BUILD_BUG_ON(REQ_OP_LAST >= (1 << REQ_OP_BITS)); diff --git a/block/blk-pm.c b/block/blk-pm.c new file mode 100644 index 000000000000..9b636960d285 --- /dev/null +++ b/block/blk-pm.c @@ -0,0 +1,188 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include +#include + +/** + * blk_pm_runtime_init - Block layer runtime PM initialization routine + * @q: the queue of the device + * @dev: the device the queue belongs to + * + * Description: + * Initialize runtime-PM-related fields for @q and start auto suspend for + * @dev. Drivers that want to take advantage of request-based runtime PM + * should call this function after @dev has been initialized, and its + * request queue @q has been allocated, and runtime PM for it can not happen + * yet(either due to disabled/forbidden or its usage_count > 0). In most + * cases, driver should call this function before any I/O has taken place. + * + * This function takes care of setting up using auto suspend for the device, + * the autosuspend delay is set to -1 to make runtime suspend impossible + * until an updated value is either set by user or by driver. Drivers do + * not need to touch other autosuspend settings. + * + * The block layer runtime PM is request based, so only works for drivers + * that use request as their IO unit instead of those directly use bio's. + */ +void blk_pm_runtime_init(struct request_queue *q, struct device *dev) +{ + /* Don't enable runtime PM for blk-mq until it is ready */ + if (q->mq_ops) { + pm_runtime_disable(dev); + return; + } + + q->dev = dev; + q->rpm_status = RPM_ACTIVE; + pm_runtime_set_autosuspend_delay(q->dev, -1); + pm_runtime_use_autosuspend(q->dev); +} +EXPORT_SYMBOL(blk_pm_runtime_init); + +/** + * blk_pre_runtime_suspend - Pre runtime suspend check + * @q: the queue of the device + * + * Description: + * This function will check if runtime suspend is allowed for the device + * by examining if there are any requests pending in the queue. If there + * are requests pending, the device can not be runtime suspended; otherwise, + * the queue's status will be updated to SUSPENDING and the driver can + * proceed to suspend the device. 
+ * + * For the not allowed case, we mark last busy for the device so that + * runtime PM core will try to autosuspend it some time later. + * + * This function should be called near the start of the device's + * runtime_suspend callback. + * + * Return: + * 0 - OK to runtime suspend the device + * -EBUSY - Device should not be runtime suspended + */ +int blk_pre_runtime_suspend(struct request_queue *q) +{ + int ret = 0; + + if (!q->dev) + return ret; + + spin_lock_irq(q->queue_lock); + if (q->nr_pending) { + ret = -EBUSY; + pm_runtime_mark_last_busy(q->dev); + } else { + q->rpm_status = RPM_SUSPENDING; + } + spin_unlock_irq(q->queue_lock); + return ret; +} +EXPORT_SYMBOL(blk_pre_runtime_suspend); + +/** + * blk_post_runtime_suspend - Post runtime suspend processing + * @q: the queue of the device + * @err: return value of the device's runtime_suspend function + * + * Description: + * Update the queue's runtime status according to the return value of the + * device's runtime suspend function and mark last busy for the device so + * that PM core will try to auto suspend the device at a later time. + * + * This function should be called near the end of the device's + * runtime_suspend callback. + */ +void blk_post_runtime_suspend(struct request_queue *q, int err) +{ + if (!q->dev) + return; + + spin_lock_irq(q->queue_lock); + if (!err) { + q->rpm_status = RPM_SUSPENDED; + } else { + q->rpm_status = RPM_ACTIVE; + pm_runtime_mark_last_busy(q->dev); + } + spin_unlock_irq(q->queue_lock); +} +EXPORT_SYMBOL(blk_post_runtime_suspend); + +/** + * blk_pre_runtime_resume - Pre runtime resume processing + * @q: the queue of the device + * + * Description: + * Update the queue's runtime status to RESUMING in preparation for the + * runtime resume of the device. + * + * This function should be called near the start of the device's + * runtime_resume callback. + */ +void blk_pre_runtime_resume(struct request_queue *q) +{ + if (!q->dev) + return; + + spin_lock_irq(q->queue_lock); + q->rpm_status = RPM_RESUMING; + spin_unlock_irq(q->queue_lock); +} +EXPORT_SYMBOL(blk_pre_runtime_resume); + +/** + * blk_post_runtime_resume - Post runtime resume processing + * @q: the queue of the device + * @err: return value of the device's runtime_resume function + * + * Description: + * Update the queue's runtime status according to the return value of the + * device's runtime_resume function. If it is successfully resumed, process + * the requests that are queued into the device's queue when it is resuming + * and then mark last busy and initiate autosuspend for it. + * + * This function should be called near the end of the device's + * runtime_resume callback. + */ +void blk_post_runtime_resume(struct request_queue *q, int err) +{ + if (!q->dev) + return; + + spin_lock_irq(q->queue_lock); + if (!err) { + q->rpm_status = RPM_ACTIVE; + __blk_run_queue(q); + pm_runtime_mark_last_busy(q->dev); + pm_request_autosuspend(q->dev); + } else { + q->rpm_status = RPM_SUSPENDED; + } + spin_unlock_irq(q->queue_lock); +} +EXPORT_SYMBOL(blk_post_runtime_resume); + +/** + * blk_set_runtime_active - Force runtime status of the queue to be active + * @q: the queue of the device + * + * If the device is left runtime suspended during system suspend the resume + * hook typically resumes the device and corrects runtime status + * accordingly. However, that does not affect the queue runtime PM status + * which is still "suspended". This prevents processing requests from the + * queue. 
+ * + * This function can be used in driver's resume hook to correct queue + * runtime PM status and re-enable peeking requests from the queue. It + * should be called before first request is added to the queue. + */ +void blk_set_runtime_active(struct request_queue *q) +{ + spin_lock_irq(q->queue_lock); + q->rpm_status = RPM_ACTIVE; + pm_runtime_mark_last_busy(q->dev); + pm_request_autosuspend(q->dev); + spin_unlock_irq(q->queue_lock); +} +EXPORT_SYMBOL(blk_set_runtime_active); diff --git a/block/blk-pm.h b/block/blk-pm.h new file mode 100644 index 000000000000..1ffc8ef203ec --- /dev/null +++ b/block/blk-pm.h @@ -0,0 +1,43 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef _BLOCK_BLK_PM_H_ +#define _BLOCK_BLK_PM_H_ + +#include + +#ifdef CONFIG_PM +static inline void blk_pm_requeue_request(struct request *rq) +{ + if (rq->q->dev && !(rq->rq_flags & RQF_PM)) + rq->q->nr_pending--; +} + +static inline void blk_pm_add_request(struct request_queue *q, + struct request *rq) +{ + if (q->dev && !(rq->rq_flags & RQF_PM) && q->nr_pending++ == 0 && + (q->rpm_status == RPM_SUSPENDED || q->rpm_status == RPM_SUSPENDING)) + pm_request_resume(q->dev); +} + +static inline void blk_pm_put_request(struct request *rq) +{ + if (rq->q->dev && !(rq->rq_flags & RQF_PM) && !--rq->q->nr_pending) + pm_runtime_mark_last_busy(rq->q->dev); +} +#else +static inline void blk_pm_requeue_request(struct request *rq) +{ +} + +static inline void blk_pm_add_request(struct request_queue *q, + struct request *rq) +{ +} + +static inline void blk_pm_put_request(struct request *rq) +{ +} +#endif + +#endif /* _BLOCK_BLK_PM_H_ */ diff --git a/block/elevator.c b/block/elevator.c index 6a06b5d040e5..e18ac68626e3 100644 --- a/block/elevator.c +++ b/block/elevator.c @@ -41,6 +41,7 @@ #include "blk.h" #include "blk-mq-sched.h" +#include "blk-pm.h" #include "blk-wbt.h" static DEFINE_SPINLOCK(elv_list_lock); @@ -557,27 +558,6 @@ void elv_bio_merged(struct request_queue *q, struct request *rq, e->type->ops.sq.elevator_bio_merged_fn(q, rq, bio); } -#ifdef CONFIG_PM -static void blk_pm_requeue_request(struct request *rq) -{ - if (rq->q->dev && !(rq->rq_flags & RQF_PM)) - rq->q->nr_pending--; -} - -static void blk_pm_add_request(struct request_queue *q, struct request *rq) -{ - if (q->dev && !(rq->rq_flags & RQF_PM) && q->nr_pending++ == 0 && - (q->rpm_status == RPM_SUSPENDED || q->rpm_status == RPM_SUSPENDING)) - pm_request_resume(q->dev); -} -#else -static inline void blk_pm_requeue_request(struct request *rq) {} -static inline void blk_pm_add_request(struct request_queue *q, - struct request *rq) -{ -} -#endif - void elv_requeue_request(struct request_queue *q, struct request *rq) { /* diff --git a/drivers/scsi/scsi_pm.c b/drivers/scsi/scsi_pm.c index b44c1bb687a2..a2b4179bfdf7 100644 --- a/drivers/scsi/scsi_pm.c +++ b/drivers/scsi/scsi_pm.c @@ -8,6 +8,7 @@ #include #include #include +#include #include #include diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c index b79b366a94f7..64514e8359e4 100644 --- a/drivers/scsi/sd.c +++ b/drivers/scsi/sd.c @@ -45,6 +45,7 @@ #include #include #include +#include #include #include #include diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c index d0389b20574d..4f07b3410595 100644 --- a/drivers/scsi/sr.c +++ b/drivers/scsi/sr.c @@ -43,6 +43,7 @@ #include #include #include +#include #include #include #include diff --git a/include/linux/blk-pm.h b/include/linux/blk-pm.h new file mode 100644 index 000000000000..b80c65aba249 --- /dev/null +++ b/include/linux/blk-pm.h @@ -0,0 +1,24 @@ +/* 
SPDX-License-Identifier: GPL-2.0 */ + +#ifndef _BLK_PM_H_ +#define _BLK_PM_H_ + +struct device; +struct request_queue; + +/* + * block layer runtime pm functions + */ +#ifdef CONFIG_PM +extern void blk_pm_runtime_init(struct request_queue *q, struct device *dev); +extern int blk_pre_runtime_suspend(struct request_queue *q); +extern void blk_post_runtime_suspend(struct request_queue *q, int err); +extern void blk_pre_runtime_resume(struct request_queue *q); +extern void blk_post_runtime_resume(struct request_queue *q, int err); +extern void blk_set_runtime_active(struct request_queue *q); +#else +static inline void blk_pm_runtime_init(struct request_queue *q, + struct device *dev) {} +#endif + +#endif /* _BLK_PM_H_ */ diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 6980014357d4..e25ce42c982d 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -1280,29 +1280,6 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id, extern void blk_put_queue(struct request_queue *); extern void blk_set_queue_dying(struct request_queue *); -/* - * block layer runtime pm functions - */ -#ifdef CONFIG_PM -extern void blk_pm_runtime_init(struct request_queue *q, struct device *dev); -extern int blk_pre_runtime_suspend(struct request_queue *q); -extern void blk_post_runtime_suspend(struct request_queue *q, int err); -extern void blk_pre_runtime_resume(struct request_queue *q); -extern void blk_post_runtime_resume(struct request_queue *q, int err); -extern void blk_set_runtime_active(struct request_queue *q); -#else -static inline void blk_pm_runtime_init(struct request_queue *q, - struct device *dev) {} -static inline int blk_pre_runtime_suspend(struct request_queue *q) -{ - return -ENOSYS; -} -static inline void blk_post_runtime_suspend(struct request_queue *q, int err) {} -static inline void blk_pre_runtime_resume(struct request_queue *q) {} -static inline void blk_post_runtime_resume(struct request_queue *q, int err) {} -static inline void blk_set_runtime_active(struct request_queue *q) {} -#endif - /* * blk_plug permits building a queue of related requests by holding the I/O * fragments for a short period. 
This allows merging of sequential requests

From patchwork Fri Sep 21 20:31:16 2018
From: Bart Van Assche
To: Jens Axboe
Petersen" , Ming Lei , Jianchao Wang , Johannes Thumshirn , Alan Stern Subject: [PATCH v10 2/8] block, scsi: Change the preempt-only flag into a counter Date: Fri, 21 Sep 2018 13:31:16 -0700 Message-Id: <20180921203122.49743-3-bvanassche@acm.org> X-Mailer: git-send-email 2.19.0.444.g18242da7ef-goog In-Reply-To: <20180921203122.49743-1-bvanassche@acm.org> References: <20180921203122.49743-1-bvanassche@acm.org> MIME-Version: 1.0 X-Originating-IP: 178.208.39.159 X-SpamExperts-Domain: mailprotect.be X-SpamExperts-Username: 178.208.39.128/27 Authentication-Results: mailprotect.be; auth=pass smtp.auth=178.208.39.128/27@mailprotect.be X-SpamExperts-Outgoing-Class: ham X-SpamExperts-Outgoing-Evidence: SB/global_tokens (0.00203478771885) X-Recommended-Action: accept X-Filter-ID: EX5BVjFpneJeBchSMxfU5lI7X96ZmHZJvH5MC3DRdBx602E9L7XzfQH6nu9C/Fh9KJzpNe6xgvOx q3u0UDjvO1tLifGj39bI0bcPyaJsYTaOqZpfO3PVJjdazu3l6Zm3CrxbKqqxVb1b/D8J7mjn9ilV cTJ2PzGYt6C/dLANFNC9eBrzwetNxEiSKDRqnfKMQkpuFrILVINAX/FRRdHr3yMkm/wiy60JYYB9 uF3NAa1Nk+QSW7RQzhAm/KStEkEjjSySmAlj1sIqwltPBEyYUVLTnxxOo8WXP45qj8D05F1Et5qO iHPCfy+Mw6EsPu1Xs/fsPQmTmoa3gruSbY5MSgnn9/SnaMe5zST07YHutrmaULGiPFCY0LvWXvm2 zNEpQaXPwIlI4yu/PH1NWDdGRUeAhT7lPh8TiXTKyNEw8IF2ZhU1Rc4F7kffzmPL0kaiP069MQie 2a9t68Wak3ejIkXIVZ6y99YRDdu6F1F1/eyYLhCbPnXGS26g+fe1UPOYVcspn4OgerQuLHuZ1Ye8 XV+l30H0rpQBrPleYAQRZqwoA0VL4BOWelo892tSfwlDlhQMXA0KRqnIyXdaZxWXlkQMF99VlWOZ Vo1sbhkkbsX0KAdz5bjcKsFBSAEXFEssUCyY9VeO3wVjwgRXLzrv2lYuJCfF/LAgML2VkneCAwhf SJSoWtdxJerU8wHqrCz+oV9skJ5C+N2t2Br3nJnI4CtibyEAdp2B2MxSAAkfFa9wRJ/eXhAiot4M Y812nNgiJTV5TSpxPzARZvIqtGiBjgd2E08+eSbqai6WfhwIJ+jVrI6HLw3hVLpgrFeYMZAp++Du IQUs/5JJj4C/n4CILnhs+K1Z3lg9LHrUjeEl84ZUnyxLOaSnoRcFDT9HP5PJEIoeJLdX+qqtyHrV TeywbAcqaVfK6LpcijgHtWZeoRYbcCTHsCYe2wmYCB7xseRFOPEfDQ5kA2FAM0elrWW4h/djzQ6Y C7Heg3Xf7O1TOd58Tufmk7sU/IKQLJ13ZMp7aObTr3DucFCDsEXmp1/ZdEnr95pAuznDsAj1bCPG kqr//HRPQ9C0UzUALw446n6DPAlbDjazCbhs7qBpykynMmEC6PZjo5y2jrPCzseqiEVikzNI6G8B XCpFvNsRZaLiYA75YM/7FhgZIdhlaoyFN1BrMyfpYuyanAZpMXssfrVDPnTY9Ikk7i1CSceYv69F cUUybbZJXNzfCW/EMSSeJTV39MS5H4HvZszNarbzMxqEDB5JUzJGAb4EG8iBxenx4paFSGU9Mnbi 1yfztb5M+P3hVjYcJt6oD2gmbmk+kCk= X-Report-Abuse-To: spam@com-mpt-mgt001.mailprotect.be Sender: linux-block-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP The RQF_PREEMPT flag is used for three purposes: - In the SCSI core, for making sure that power management requests are executed even if a device is in the "quiesced" state. - For domain validation by SCSI drivers that use the parallel port. - In the IDE driver, for IDE preempt requests. Rename "preempt-only" into "pm-only" because the primary purpose of this mode is power management. Since the power management core may but does not have to resume a runtime suspended device before performing system-wide suspend and since a later patch will set "pm-only" mode as long as a block device is runtime suspended, make it possible to set "pm-only" mode from more than one context. Since with this change scsi_device_quiesce() is no longer idempotent, make that function return early if it is called for a quiesced queue. Signed-off-by: Bart Van Assche Cc: Martin K. Petersen Reviewed-by: Hannes Reinecke Reviewed-by: Christoph Hellwig Cc: Ming Lei Cc: Jianchao Wang Cc: Johannes Thumshirn Cc: Alan Stern Acked-by: Martin K. 
Petersen --- block/blk-core.c | 35 ++++++++++++++++++----------------- block/blk-mq-debugfs.c | 10 +++++++++- drivers/scsi/scsi_lib.c | 11 +++++++---- include/linux/blkdev.h | 14 +++++++++----- 4 files changed, 43 insertions(+), 27 deletions(-) diff --git a/block/blk-core.c b/block/blk-core.c index 6d4dd176bd9d..1a691f5269bb 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -422,24 +422,25 @@ void blk_sync_queue(struct request_queue *q) EXPORT_SYMBOL(blk_sync_queue); /** - * blk_set_preempt_only - set QUEUE_FLAG_PREEMPT_ONLY + * blk_set_pm_only - increment pm_only counter * @q: request queue pointer - * - * Returns the previous value of the PREEMPT_ONLY flag - 0 if the flag was not - * set and 1 if the flag was already set. */ -int blk_set_preempt_only(struct request_queue *q) +void blk_set_pm_only(struct request_queue *q) { - return blk_queue_flag_test_and_set(QUEUE_FLAG_PREEMPT_ONLY, q); + atomic_inc(&q->pm_only); } -EXPORT_SYMBOL_GPL(blk_set_preempt_only); +EXPORT_SYMBOL_GPL(blk_set_pm_only); -void blk_clear_preempt_only(struct request_queue *q) +void blk_clear_pm_only(struct request_queue *q) { - blk_queue_flag_clear(QUEUE_FLAG_PREEMPT_ONLY, q); - wake_up_all(&q->mq_freeze_wq); + int pm_only; + + pm_only = atomic_dec_return(&q->pm_only); + WARN_ON_ONCE(pm_only < 0); + if (pm_only == 0) + wake_up_all(&q->mq_freeze_wq); } -EXPORT_SYMBOL_GPL(blk_clear_preempt_only); +EXPORT_SYMBOL_GPL(blk_clear_pm_only); /** * __blk_run_queue_uncond - run a queue whether or not it has been stopped @@ -918,7 +919,7 @@ EXPORT_SYMBOL(blk_alloc_queue); */ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags) { - const bool preempt = flags & BLK_MQ_REQ_PREEMPT; + const bool pm = flags & BLK_MQ_REQ_PREEMPT; while (true) { bool success = false; @@ -926,11 +927,11 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags) rcu_read_lock(); if (percpu_ref_tryget_live(&q->q_usage_counter)) { /* - * The code that sets the PREEMPT_ONLY flag is - * responsible for ensuring that that flag is globally - * visible before the queue is unfrozen. + * The code that increments the pm_only counter is + * responsible for ensuring that that counter is + * globally visible before the queue is unfrozen. 
*/ - if (preempt || !blk_queue_preempt_only(q)) { + if (pm || !blk_queue_pm_only(q)) { success = true; } else { percpu_ref_put(&q->q_usage_counter); @@ -955,7 +956,7 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags) wait_event(q->mq_freeze_wq, (atomic_read(&q->mq_freeze_depth) == 0 && - (preempt || !blk_queue_preempt_only(q))) || + (pm || !blk_queue_pm_only(q))) || blk_queue_dying(q)); if (blk_queue_dying(q)) return -ENODEV; diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c index cb1e6cf7ac48..a5ea86835fcb 100644 --- a/block/blk-mq-debugfs.c +++ b/block/blk-mq-debugfs.c @@ -102,6 +102,14 @@ static int blk_flags_show(struct seq_file *m, const unsigned long flags, return 0; } +static int queue_pm_only_show(void *data, struct seq_file *m) +{ + struct request_queue *q = data; + + seq_printf(m, "%d\n", atomic_read(&q->pm_only)); + return 0; +} + #define QUEUE_FLAG_NAME(name) [QUEUE_FLAG_##name] = #name static const char *const blk_queue_flag_name[] = { QUEUE_FLAG_NAME(QUEUED), @@ -132,7 +140,6 @@ static const char *const blk_queue_flag_name[] = { QUEUE_FLAG_NAME(REGISTERED), QUEUE_FLAG_NAME(SCSI_PASSTHROUGH), QUEUE_FLAG_NAME(QUIESCED), - QUEUE_FLAG_NAME(PREEMPT_ONLY), }; #undef QUEUE_FLAG_NAME @@ -209,6 +216,7 @@ static ssize_t queue_write_hint_store(void *data, const char __user *buf, static const struct blk_mq_debugfs_attr blk_mq_debugfs_queue_attrs[] = { { "poll_stat", 0400, queue_poll_stat_show }, { "requeue_list", 0400, .seq_ops = &queue_requeue_list_seq_ops }, + { "pm_only", 0600, queue_pm_only_show, NULL }, { "state", 0600, queue_state_show, queue_state_write }, { "write_hints", 0600, queue_write_hint_show, queue_write_hint_store }, { "zone_wlock", 0400, queue_zone_wlock_show, NULL }, diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c index eb97d2dd3651..62348412ed1b 100644 --- a/drivers/scsi/scsi_lib.c +++ b/drivers/scsi/scsi_lib.c @@ -3046,11 +3046,14 @@ scsi_device_quiesce(struct scsi_device *sdev) */ WARN_ON_ONCE(sdev->quiesced_by && sdev->quiesced_by != current); - blk_set_preempt_only(q); + if (sdev->quiesced_by == current) + return 0; + + blk_set_pm_only(q); blk_mq_freeze_queue(q); /* - * Ensure that the effect of blk_set_preempt_only() will be visible + * Ensure that the effect of blk_set_pm_only() will be visible * for percpu_ref_tryget() callers that occur after the queue * unfreeze even if the queue was already frozen before this function * was called. See also https://lwn.net/Articles/573497/. @@ -3063,7 +3066,7 @@ scsi_device_quiesce(struct scsi_device *sdev) if (err == 0) sdev->quiesced_by = current; else - blk_clear_preempt_only(q); + blk_clear_pm_only(q); mutex_unlock(&sdev->state_mutex); return err; @@ -3088,7 +3091,7 @@ void scsi_device_resume(struct scsi_device *sdev) mutex_lock(&sdev->state_mutex); WARN_ON_ONCE(!sdev->quiesced_by); sdev->quiesced_by = NULL; - blk_clear_preempt_only(sdev->request_queue); + blk_clear_pm_only(sdev->request_queue); if (sdev->sdev_state == SDEV_QUIESCE) scsi_device_set_state(sdev, SDEV_RUNNING); mutex_unlock(&sdev->state_mutex); diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index e25ce42c982d..5f6c36b5058c 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -504,6 +504,12 @@ struct request_queue { * various queue flags, see QUEUE_* below */ unsigned long queue_flags; + /* + * Number of contexts that have called blk_set_pm_only(). If this + * counter is above zero then only RQF_PM and RQF_PREEMPT requests are + * processed. 
+	 */
+	atomic_t		pm_only;
 
 	/*
 	 * ida allocated id for this queue.  Used to index queues from
@@ -698,7 +704,6 @@ struct request_queue {
 #define QUEUE_FLAG_REGISTERED  26	/* queue has been registered to a disk */
 #define QUEUE_FLAG_SCSI_PASSTHROUGH 27	/* queue supports SCSI commands */
 #define QUEUE_FLAG_QUIESCED    28	/* queue has been quiesced */
-#define QUEUE_FLAG_PREEMPT_ONLY	29	/* only process REQ_PREEMPT requests */
 
 #define QUEUE_FLAG_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
 				 (1 << QUEUE_FLAG_SAME_COMP) |		\
@@ -736,12 +741,11 @@ bool blk_queue_flag_test_and_clear(unsigned int flag, struct request_queue *q);
 	((rq)->cmd_flags & (REQ_FAILFAST_DEV|REQ_FAILFAST_TRANSPORT| \
 			     REQ_FAILFAST_DRIVER))
 #define blk_queue_quiesced(q)	test_bit(QUEUE_FLAG_QUIESCED, &(q)->queue_flags)
-#define blk_queue_preempt_only(q) \
-	test_bit(QUEUE_FLAG_PREEMPT_ONLY, &(q)->queue_flags)
+#define blk_queue_pm_only(q)	atomic_read(&(q)->pm_only)
 #define blk_queue_fua(q)	test_bit(QUEUE_FLAG_FUA, &(q)->queue_flags)
 
-extern int blk_set_preempt_only(struct request_queue *q);
-extern void blk_clear_preempt_only(struct request_queue *q);
+extern void blk_set_pm_only(struct request_queue *q);
+extern void blk_clear_pm_only(struct request_queue *q);
 
 static inline int queue_in_flight(struct request_queue *q)
 {
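For illustration only (not part of the patch): because pm_only is now a
counter rather than a flag, two contexts can independently enter and leave
"pm-only" mode, and normal request processing only resumes when the last one
clears it. A minimal sketch of that nesting, with a hypothetical caller:

```c
#include <linux/blkdev.h>

/* Hypothetical caller demonstrating the counter semantics of pm_only. */
static void example_pm_only_nesting(struct request_queue *q)
{
	blk_set_pm_only(q);	/* counter 0 -> 1: pm-only mode entered */
	blk_set_pm_only(q);	/* counter 1 -> 2: e.g. a second context */

	/* Non-zero counter: only RQF_PM and RQF_PREEMPT requests are processed. */
	WARN_ON_ONCE(!blk_queue_pm_only(q));

	blk_clear_pm_only(q);	/* counter 2 -> 1: still pm-only */
	blk_clear_pm_only(q);	/* counter 1 -> 0: wakes mq_freeze_wq waiters */
}
```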

From patchwork Fri Sep 21 20:31:17 2018
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche,
    "Martin K. Petersen", Ming Lei, Jianchao Wang, Hannes Reinecke,
    Johannes Thumshirn, Alan Stern
Subject: [PATCH v10 3/8] block: Split blk_pm_add_request() and blk_pm_put_request()
Date: Fri, 21 Sep 2018 13:31:17 -0700
Message-Id: <20180921203122.49743-4-bvanassche@acm.org>
In-Reply-To: <20180921203122.49743-1-bvanassche@acm.org>
References: <20180921203122.49743-1-bvanassche@acm.org>

Move the pm_request_resume() and pm_runtime_mark_last_busy() calls into two
new functions and thereby separate legacy block layer code from code that
works for both the legacy block layer and blk-mq. A later patch will add
calls to the new functions in the blk-mq code.

Signed-off-by: Bart Van Assche
Cc: Martin K.
Petersen Cc: Christoph Hellwig Cc: Ming Lei Cc: Jianchao Wang Cc: Hannes Reinecke Cc: Johannes Thumshirn Cc: Alan Stern Reviewed-by: Christoph Hellwig --- block/blk-core.c | 1 + block/blk-pm.h | 36 +++++++++++++++++++++++++++++++----- block/elevator.c | 1 + 3 files changed, 33 insertions(+), 5 deletions(-) diff --git a/block/blk-core.c b/block/blk-core.c index 1a691f5269bb..fd91e9bf2893 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -1744,6 +1744,7 @@ void __blk_put_request(struct request_queue *q, struct request *req) blk_req_zone_write_unlock(req); blk_pm_put_request(req); + blk_pm_mark_last_busy(req); elv_completed_request(q, req); diff --git a/block/blk-pm.h b/block/blk-pm.h index 1ffc8ef203ec..a8564ea72a41 100644 --- a/block/blk-pm.h +++ b/block/blk-pm.h @@ -6,8 +6,23 @@ #include #ifdef CONFIG_PM +static inline void blk_pm_request_resume(struct request_queue *q) +{ + if (q->dev && (q->rpm_status == RPM_SUSPENDED || + q->rpm_status == RPM_SUSPENDING)) + pm_request_resume(q->dev); +} + +static inline void blk_pm_mark_last_busy(struct request *rq) +{ + if (rq->q->dev && !(rq->rq_flags & RQF_PM)) + pm_runtime_mark_last_busy(rq->q->dev); +} + static inline void blk_pm_requeue_request(struct request *rq) { + lockdep_assert_held(rq->q->queue_lock); + if (rq->q->dev && !(rq->rq_flags & RQF_PM)) rq->q->nr_pending--; } @@ -15,17 +30,28 @@ static inline void blk_pm_requeue_request(struct request *rq) static inline void blk_pm_add_request(struct request_queue *q, struct request *rq) { - if (q->dev && !(rq->rq_flags & RQF_PM) && q->nr_pending++ == 0 && - (q->rpm_status == RPM_SUSPENDED || q->rpm_status == RPM_SUSPENDING)) - pm_request_resume(q->dev); + lockdep_assert_held(q->queue_lock); + + if (q->dev && !(rq->rq_flags & RQF_PM)) + q->nr_pending++; } static inline void blk_pm_put_request(struct request *rq) { - if (rq->q->dev && !(rq->rq_flags & RQF_PM) && !--rq->q->nr_pending) - pm_runtime_mark_last_busy(rq->q->dev); + lockdep_assert_held(rq->q->queue_lock); + + if (rq->q->dev && !(rq->rq_flags & RQF_PM)) + --rq->q->nr_pending; } #else +static inline void blk_pm_request_resume(struct request_queue *q) +{ +} + +static inline void blk_pm_mark_last_busy(struct request *rq) +{ +} + static inline void blk_pm_requeue_request(struct request *rq) { } diff --git a/block/elevator.c b/block/elevator.c index e18ac68626e3..1c992bf6cfb1 100644 --- a/block/elevator.c +++ b/block/elevator.c @@ -601,6 +601,7 @@ void __elv_add_request(struct request_queue *q, struct request *rq, int where) trace_block_rq_insert(q, rq); blk_pm_add_request(q, rq); + blk_pm_request_resume(q); rq->q = q; From patchwork Fri Sep 21 20:31:18 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 10610937 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9130C112B for ; Fri, 21 Sep 2018 20:32:03 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 80B292E7A5 for ; Fri, 21 Sep 2018 20:32:03 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 7473F2E7AF; Fri, 21 Sep 2018 20:32:03 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.8 required=2.0 tests=BAYES_00,DKIM_SIGNED, 
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche, Ming Lei,
    Jianchao Wang, Hannes Reinecke, Johannes Thumshirn, Alan Stern
Subject: [PATCH v10 4/8] block: Schedule runtime resume earlier
Date: Fri, 21 Sep 2018 13:31:18 -0700
Message-Id: <20180921203122.49743-5-bvanassche@acm.org>
In-Reply-To: <20180921203122.49743-1-bvanassche@acm.org>
References: <20180921203122.49743-1-bvanassche@acm.org>

Instead of scheduling runtime resume of a request queue after a request has
been queued, schedule asynchronous resume during request allocation. The new
pm_request_resume() calls occur after blk_queue_enter() has increased the
q_usage_counter request queue member. This change is needed for a later patch
that will make request allocation block while the queue status is not
RPM_ACTIVE.

Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Jianchao Wang
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Alan Stern
Reviewed-by: Christoph Hellwig
---
 block/blk-core.c | 3 ++-
 block/elevator.c | 1 -
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index fd91e9bf2893..fec135ae52cf 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -956,7 +956,8 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
 
 		wait_event(q->mq_freeze_wq,
 			   (atomic_read(&q->mq_freeze_depth) == 0 &&
-			    (pm || !blk_queue_pm_only(q))) ||
+			    (pm || (blk_pm_request_resume(q),
+				    !blk_queue_pm_only(q)))) ||
 			   blk_queue_dying(q));
 		if (blk_queue_dying(q))
 			return -ENODEV;
diff --git a/block/elevator.c b/block/elevator.c
index 1c992bf6cfb1..e18ac68626e3 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -601,7 +601,6 @@ void __elv_add_request(struct request_queue *q, struct request *rq, int where)
 	trace_block_rq_insert(q, rq);
 
 	blk_pm_add_request(q, rq);
-	blk_pm_request_resume(q);
 
 	rq->q = q;
 
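An aside for illustration (not part of the patch): the comma operator in the
new wait_event() condition means that every re-evaluation of the condition
first (re)requests an asynchronous resume and then tests the pm-only state. A
simplified sketch of that evaluation order, with the blk_queue_dying() escape
omitted and a hypothetical wrapper function name:

```c
/*
 * Simplified sketch of the condition used in blk_queue_enter() above;
 * blk_pm_request_resume() comes from block/blk-pm.h. The comma operator
 * performs the resume request for its side effect and then yields the
 * pm-only test as the value of the whole parenthesized expression.
 */
static bool example_enter_condition(struct request_queue *q, bool pm)
{
	return atomic_read(&q->mq_freeze_depth) == 0 &&
	       (pm || (blk_pm_request_resume(q),	/* side effect only */
		       !blk_queue_pm_only(q)));		/* value tested */
}
```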

From patchwork Fri Sep 21 20:31:19 2018
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche, Tejun Heo,
    Ming Lei, Jianchao Wang, Hannes Reinecke, Johannes Thumshirn
Subject: [PATCH v10 5/8] percpu-refcount: Introduce percpu_ref_resurrect()
Date: Fri, 21 Sep 2018 13:31:19 -0700
Message-Id: <20180921203122.49743-6-bvanassche@acm.org>
In-Reply-To: <20180921203122.49743-1-bvanassche@acm.org>
References: <20180921203122.49743-1-bvanassche@acm.org>

This function will be used in a later patch to switch the struct
request_queue q_usage_counter from killed back to live.
In contrast to percpu_ref_reinit(), this new function does not require
that the refcount is zero.

Signed-off-by: Bart Van Assche
Cc: Tejun Heo
Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Jianchao Wang
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Acked-by: Tejun Heo
---
 include/linux/percpu-refcount.h |  1 +
 lib/percpu-refcount.c           | 28 ++++++++++++++++++++++++++--
 2 files changed, 27 insertions(+), 2 deletions(-)

diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
index 009cdf3d65b6..b297cd1cd4f1 100644
--- a/include/linux/percpu-refcount.h
+++ b/include/linux/percpu-refcount.h
@@ -108,6 +108,7 @@ void percpu_ref_switch_to_atomic_sync(struct percpu_ref *ref);
 void percpu_ref_switch_to_percpu(struct percpu_ref *ref);
 void percpu_ref_kill_and_confirm(struct percpu_ref *ref, percpu_ref_func_t *confirm_kill);
+void percpu_ref_resurrect(struct percpu_ref *ref);
 void percpu_ref_reinit(struct percpu_ref *ref);
 
 /**
diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
index 9f96fa7bc000..17fe3e996ddc 100644
--- a/lib/percpu-refcount.c
+++ b/lib/percpu-refcount.c
@@ -356,11 +356,35 @@ EXPORT_SYMBOL_GPL(percpu_ref_kill_and_confirm);
  */
 void percpu_ref_reinit(struct percpu_ref *ref)
 {
+	WARN_ON_ONCE(!percpu_ref_is_zero(ref));
+
+	percpu_ref_resurrect(ref);
+}
+EXPORT_SYMBOL_GPL(percpu_ref_reinit);
+
+/**
+ * percpu_ref_resurrect - modify a percpu refcount from dead to live
+ * @ref: percpu_ref to resurrect
+ *
+ * Modify @ref so that it's in the same state as before percpu_ref_kill() was
+ * called. @ref must be dead but not exited.
+ *
+ * If @ref->release() frees @ref then the caller is responsible for
+ * guaranteeing that @ref->release() does not get called while this
+ * function is in progress.
+ *
+ * Note that percpu_ref_tryget[_live]() are safe to perform on @ref while
+ * this function is in progress.
+ */
+void percpu_ref_resurrect(struct percpu_ref *ref)
+{
+	unsigned long __percpu *percpu_count;
 	unsigned long flags;
 
 	spin_lock_irqsave(&percpu_ref_switch_lock, flags);
 
-	WARN_ON_ONCE(!percpu_ref_is_zero(ref));
+	WARN_ON_ONCE(!(ref->percpu_count_ptr & __PERCPU_REF_DEAD));
+	WARN_ON_ONCE(__ref_is_percpu(ref, &percpu_count));
 
 	ref->percpu_count_ptr &= ~__PERCPU_REF_DEAD;
 	percpu_ref_get(ref);
@@ -368,4 +392,4 @@ void percpu_ref_reinit(struct percpu_ref *ref)
 
 	spin_unlock_irqrestore(&percpu_ref_switch_lock, flags);
 }
-EXPORT_SYMBOL_GPL(percpu_ref_reinit);
+EXPORT_SYMBOL_GPL(percpu_ref_resurrect);
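The intended call pattern looks roughly like the sketch below. The release
callback and the example() wrapper are illustrative only, not part of the
patch; the point is that the resurrect step is legal even while other code
still holds references, which percpu_ref_reinit() would warn about:

#include <linux/percpu-refcount.h>

static void example_release(struct percpu_ref *ref)
{
	/* Called once the refcount finally drops to zero while dead. */
}

static int example(struct percpu_ref *ref)
{
	int ret = percpu_ref_init(ref, example_release, 0, GFP_KERNEL);

	if (ret)
		return ret;

	percpu_ref_kill(ref);		/* mark dead; tryget_live() now fails */
	/* ... existing users may still hold references here ... */
	percpu_ref_resurrect(ref);	/* back to live; count need not be zero */
	return 0;
}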
From patchwork Fri Sep 21 20:31:20 2018
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10610935
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche, Ming Lei, Jianchao Wang, Hannes Reinecke, Johannes Thumshirn
Subject: [PATCH v10 6/8] block: Allow unfreezing of a queue while requests are in progress
Date: Fri, 21 Sep 2018 13:31:20 -0700
Message-Id: <20180921203122.49743-7-bvanassche@acm.org>
In-Reply-To: <20180921203122.49743-1-bvanassche@acm.org>
References: <20180921203122.49743-1-bvanassche@acm.org>

A later patch will call blk_freeze_queue_start() followed by
blk_mq_unfreeze_queue() without waiting for q_usage_counter to drop to
zero. Make sure that this doesn't cause a kernel warning to appear by
switching from percpu_ref_reinit() to percpu_ref_resurrect(). The former
requires that the refcount it operates on is zero.
Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Jianchao Wang
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Reviewed-by: Christoph Hellwig
---
 block/blk-mq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 85a1c1a59c72..96d501e8663c 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -198,7 +198,7 @@ void blk_mq_unfreeze_queue(struct request_queue *q)
 	freeze_depth = atomic_dec_return(&q->mq_freeze_depth);
 	WARN_ON_ONCE(freeze_depth < 0);
 	if (!freeze_depth) {
-		percpu_ref_reinit(&q->q_usage_counter);
+		percpu_ref_resurrect(&q->q_usage_counter);
 		wake_up_all(&q->mq_freeze_wq);
 	}
 }
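The distinction the commit message relies on can be summarised in a short
sketch (illustrative only; freeze_patterns() is not a real kernel function,
and the second pattern is the one introduced by the later runtime-PM
patch):

#include <linux/blk-mq.h>
#include <linux/blkdev.h>

static void freeze_patterns(struct request_queue *q)
{
	/* Usual pattern: drain before unfreezing, so q_usage_counter is zero. */
	blk_mq_freeze_queue(q);		/* kill the counter and wait for it to drain */
	blk_mq_unfreeze_queue(q);	/* reinit/resurrect of a zero refcount */

	/* Runtime-PM pattern: no draining in between. */
	blk_freeze_queue_start(q);	/* kill the counter, do not wait */
	blk_mq_unfreeze_queue(q);	/* counter may be non-zero -> needs resurrect */
}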
From patchwork Fri Sep 21 20:31:21 2018
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10610943
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche, Ming Lei, Jianchao Wang, Hannes Reinecke, Johannes Thumshirn, Alan Stern
Subject: [PATCH v10 7/8] block: Make blk_get_request() block for non-PM requests while suspended
Date: Fri, 21 Sep 2018 13:31:21 -0700
Message-Id: <20180921203122.49743-8-bvanassche@acm.org>
In-Reply-To: <20180921203122.49743-1-bvanassche@acm.org>
References: <20180921203122.49743-1-bvanassche@acm.org>

Instead of allowing requests that are not power management requests to
enter the queue in runtime suspended status (RPM_SUSPENDED), make the
blk_get_request() caller block. This change fixes a starvation issue: it
is now guaranteed that power management requests will be executed no
matter how many blk_get_request() callers are waiting. For blk-mq,
instead of maintaining the q->nr_pending counter, rely on
q->q_usage_counter. Call pm_runtime_mark_last_busy() every time a request
finishes instead of only if the queue depth drops to zero.
Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Jianchao Wang
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Alan Stern
Reviewed-by: Christoph Hellwig
---
 block/blk-core.c | 37 ++++++++-----------------------------
 block/blk-pm.c   | 44 +++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 47 insertions(+), 34 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index fec135ae52cf..16dd3a989753 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -2746,30 +2746,6 @@ void blk_account_io_done(struct request *req, u64 now)
 	}
 }
 
-#ifdef CONFIG_PM
-/*
- * Don't process normal requests when queue is suspended
- * or in the process of suspending/resuming
- */
-static bool blk_pm_allow_request(struct request *rq)
-{
-	switch (rq->q->rpm_status) {
-	case RPM_RESUMING:
-	case RPM_SUSPENDING:
-		return rq->rq_flags & RQF_PM;
-	case RPM_SUSPENDED:
-		return false;
-	default:
-		return true;
-	}
-}
-#else
-static bool blk_pm_allow_request(struct request *rq)
-{
-	return true;
-}
-#endif
-
 void blk_account_io_start(struct request *rq, bool new_io)
 {
 	struct hd_struct *part;
@@ -2815,11 +2791,14 @@ static struct request *elv_next_request(struct request_queue *q)
 	while (1) {
 		list_for_each_entry(rq, &q->queue_head, queuelist) {
-			if (blk_pm_allow_request(rq))
-				return rq;
-
-			if (rq->rq_flags & RQF_SOFTBARRIER)
-				break;
+#ifdef CONFIG_PM
+			/*
+			 * If a request gets queued in state RPM_SUSPENDED
+			 * then that's a kernel bug.
+			 */
+			WARN_ON_ONCE(q->rpm_status == RPM_SUSPENDED);
+#endif
+			return rq;
 		}
 
 		/*
diff --git a/block/blk-pm.c b/block/blk-pm.c
index 9b636960d285..972fbc656846 100644
--- a/block/blk-pm.c
+++ b/block/blk-pm.c
@@ -1,8 +1,11 @@
 // SPDX-License-Identifier: GPL-2.0
 
+#include
 #include
 #include
 #include
+#include "blk-mq.h"
+#include "blk-mq-tag.h"
 
 /**
  * blk_pm_runtime_init - Block layer runtime PM initialization routine
@@ -68,14 +71,40 @@ int blk_pre_runtime_suspend(struct request_queue *q)
 	if (!q->dev)
 		return ret;
 
+	WARN_ON_ONCE(q->rpm_status != RPM_ACTIVE);
+
+	/*
+	 * Increase the pm_only counter before checking whether any
+	 * non-PM blk_queue_enter() calls are in progress to avoid that any
+	 * new non-PM blk_queue_enter() calls succeed before the pm_only
+	 * counter is decreased again.
+	 */
+	blk_set_pm_only(q);
+	ret = -EBUSY;
+	/* Switch q_usage_counter from per-cpu to atomic mode. */
+	blk_freeze_queue_start(q);
+	/*
+	 * Wait until atomic mode has been reached. Since that
+	 * involves calling call_rcu(), it is guaranteed that later
+	 * blk_queue_enter() calls see the pm-only state. See also
+	 * http://lwn.net/Articles/573497/.
+	 */
+	percpu_ref_switch_to_atomic_sync(&q->q_usage_counter);
+	if (percpu_ref_is_zero(&q->q_usage_counter))
+		ret = 0;
+	/* Switch q_usage_counter back to per-cpu mode. */
+	blk_mq_unfreeze_queue(q);
+
 	spin_lock_irq(q->queue_lock);
-	if (q->nr_pending) {
-		ret = -EBUSY;
+	if (ret < 0)
 		pm_runtime_mark_last_busy(q->dev);
-	} else {
+	else
 		q->rpm_status = RPM_SUSPENDING;
-	}
 	spin_unlock_irq(q->queue_lock);
+
+	if (ret)
+		blk_clear_pm_only(q);
+
 	return ret;
 }
 EXPORT_SYMBOL(blk_pre_runtime_suspend);
@@ -106,6 +135,9 @@ void blk_post_runtime_suspend(struct request_queue *q, int err)
 		pm_runtime_mark_last_busy(q->dev);
 	}
 	spin_unlock_irq(q->queue_lock);
+
+	if (err)
+		blk_clear_pm_only(q);
 }
 EXPORT_SYMBOL(blk_post_runtime_suspend);
@@ -153,13 +185,15 @@ void blk_post_runtime_resume(struct request_queue *q, int err)
 	spin_lock_irq(q->queue_lock);
 	if (!err) {
 		q->rpm_status = RPM_ACTIVE;
-		__blk_run_queue(q);
 		pm_runtime_mark_last_busy(q->dev);
 		pm_request_autosuspend(q->dev);
 	} else {
 		q->rpm_status = RPM_SUSPENDED;
 	}
 	spin_unlock_irq(q->queue_lock);
+
+	if (!err)
+		blk_clear_pm_only(q);
 }
 EXPORT_SYMBOL(blk_post_runtime_resume);
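For orientation, the driver-side pairing that these hooks expect is
unchanged and looks roughly as follows (a sketch based on the usage
described in the blk_pm_runtime_init() documentation; struct foo and the
foo_* helpers are illustrative stand-ins, not real kernel symbols):

#include <linux/blkdev.h>
#include <linux/blk-pm.h>
#include <linux/device.h>

struct foo {
	struct request_queue *queue;
	/* ... device state ... */
};

static int foo_quiesce_hardware(struct foo *foo) { return 0; }	/* stand-in */
static int foo_wake_hardware(struct foo *foo) { return 0; }	/* stand-in */

static int foo_runtime_suspend(struct device *dev)
{
	struct foo *foo = dev_get_drvdata(dev);
	int ret;

	ret = blk_pre_runtime_suspend(foo->queue);
	if (ret)
		return ret;	/* e.g. -EBUSY: requests still in flight */
	ret = foo_quiesce_hardware(foo);
	blk_post_runtime_suspend(foo->queue, ret);
	return ret;
}

static int foo_runtime_resume(struct device *dev)
{
	struct foo *foo = dev_get_drvdata(dev);
	int ret;

	blk_pre_runtime_resume(foo->queue);
	ret = foo_wake_hardware(foo);
	blk_post_runtime_resume(foo->queue, ret);
	return ret;
}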
From patchwork Fri Sep 21 20:31:22 2018
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10610941
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche, Ming Lei, Jianchao Wang, Hannes Reinecke, Johannes Thumshirn, Alan Stern
Subject: [PATCH v10 8/8] blk-mq: Enable support for runtime power management
Date: Fri, 21 Sep 2018 13:31:22 -0700
Message-Id: <20180921203122.49743-9-bvanassche@acm.org>
In-Reply-To: <20180921203122.49743-1-bvanassche@acm.org>
References: <20180921203122.49743-1-bvanassche@acm.org>

Now that the blk-mq core processes power management requests (marked with
RQF_PREEMPT) in states other than RPM_ACTIVE, enable runtime power
management for blk-mq.
Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Jianchao Wang
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Alan Stern
Reviewed-by: Christoph Hellwig
---
 block/blk-mq.c | 2 ++
 block/blk-pm.c | 6 ------
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 96d501e8663c..d384ab700afd 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -33,6 +33,7 @@
 #include "blk-mq.h"
 #include "blk-mq-debugfs.h"
 #include "blk-mq-tag.h"
+#include "blk-pm.h"
 #include "blk-stat.h"
 #include "blk-mq-sched.h"
 #include "blk-rq-qos.h"
@@ -475,6 +476,7 @@ static void __blk_mq_free_request(struct request *rq)
 	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu);
 	const int sched_tag = rq->internal_tag;
 
+	blk_pm_mark_last_busy(rq);
 	if (rq->tag != -1)
 		blk_mq_put_tag(hctx, hctx->tags, ctx, rq->tag);
 	if (sched_tag != -1)
diff --git a/block/blk-pm.c b/block/blk-pm.c
index 972fbc656846..f8fdae01bea2 100644
--- a/block/blk-pm.c
+++ b/block/blk-pm.c
@@ -30,12 +30,6 @@
  */
 void blk_pm_runtime_init(struct request_queue *q, struct device *dev)
 {
-	/* Don't enable runtime PM for blk-mq until it is ready */
-	if (q->mq_ops) {
-		pm_runtime_disable(dev);
-		return;
-	}
-
 	q->dev = dev;
 	q->rpm_status = RPM_ACTIVE;
 	pm_runtime_set_autosuspend_delay(q->dev, -1);
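With runtime PM enabled for blk-mq, a power-management request is the only
kind that may enter a pm-only queue. Allocation looks roughly like the
sketch below (illustrative; send_pm_cmd() is not a real kernel function,
and the flag usage follows how the SCSI layer marks its power-management
requests):

#include <linux/blkdev.h>
#include <linux/blk-mq.h>
#include <linux/err.h>

static int send_pm_cmd(struct request_queue *q)
{
	struct request *rq;

	/* BLK_MQ_REQ_PREEMPT lets the allocation pass blk_queue_enter()
	 * while pm_only is set; RQF_PM marks the request as a
	 * power-management request so the PM code does not wait for it. */
	rq = blk_get_request(q, REQ_OP_DRV_IN, BLK_MQ_REQ_PREEMPT);
	if (IS_ERR(rq))
		return PTR_ERR(rq);
	rq->rq_flags |= RQF_PM;

	/* ... fill in and issue the request, e.g. via blk_execute_rq() ... */

	blk_put_request(rq);
	return 0;
}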