From patchwork Fri Sep 21 20:31:21 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10610943
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche,
 Ming Lei, Jianchao Wang, Hannes Reinecke, Johannes Thumshirn,
 Alan Stern
Subject: [PATCH v10 7/8] block: Make blk_get_request() block for non-PM
 requests while suspended
Date: Fri, 21 Sep 2018 13:31:21 -0700
Message-Id: <20180921203122.49743-8-bvanassche@acm.org>
X-Mailer: git-send-email 2.19.0.444.g18242da7ef-goog
In-Reply-To: <20180921203122.49743-1-bvanassche@acm.org>
References: <20180921203122.49743-1-bvanassche@acm.org>
X-Mailing-List: linux-block@vger.kernel.org

Instead of allowing requests that are not power management requests to
enter the queue in runtime suspended status (RPM_SUSPENDED), make the
blk_get_request() caller block. This change fixes a starvation issue: it
is now guaranteed that power management requests will be executed no
matter how many blk_get_request() callers are waiting. For blk-mq,
instead of maintaining the q->nr_pending counter, rely on
q->q_usage_counter. Call pm_runtime_mark_last_busy() every time a
request finishes instead of only if the queue depth drops to zero.

Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Jianchao Wang
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Alan Stern
Reviewed-by: Christoph Hellwig
---
 block/blk-core.c | 37 ++++++++-----------------------------
 block/blk-pm.c   | 44 +++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 47 insertions(+), 34 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index fec135ae52cf..16dd3a989753 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -2746,30 +2746,6 @@ void blk_account_io_done(struct request *req, u64 now)
 	}
 }
 
-#ifdef CONFIG_PM
-/*
- * Don't process normal requests when queue is suspended
- * or in the process of suspending/resuming
- */
-static bool blk_pm_allow_request(struct request *rq)
-{
-	switch (rq->q->rpm_status) {
-	case RPM_RESUMING:
-	case RPM_SUSPENDING:
-		return rq->rq_flags & RQF_PM;
-	case RPM_SUSPENDED:
-		return false;
-	default:
-		return true;
-	}
-}
-#else
-static bool blk_pm_allow_request(struct request *rq)
-{
-	return true;
-}
-#endif
-
 void blk_account_io_start(struct request *rq, bool new_io)
 {
 	struct hd_struct *part;
@@ -2815,11 +2791,14 @@ static struct request *elv_next_request(struct request_queue *q)
 
 	while (1) {
 		list_for_each_entry(rq, &q->queue_head, queuelist) {
-			if (blk_pm_allow_request(rq))
-				return rq;
-
-			if (rq->rq_flags & RQF_SOFTBARRIER)
-				break;
+#ifdef CONFIG_PM
+			/*
+			 * If a request gets queued in state RPM_SUSPENDED
+			 * then that's a kernel bug.
+			 */
+			WARN_ON_ONCE(q->rpm_status == RPM_SUSPENDED);
+#endif
+			return rq;
 		}
 
 		/*
diff --git a/block/blk-pm.c b/block/blk-pm.c
index 9b636960d285..972fbc656846 100644
--- a/block/blk-pm.c
+++ b/block/blk-pm.c
@@ -1,8 +1,11 @@
 // SPDX-License-Identifier: GPL-2.0
 
+#include <linux/blk-mq.h>
 #include <linux/blk-pm.h>
 #include <linux/blkdev.h>
 #include <linux/pm_runtime.h>
+#include "blk-mq.h"
+#include "blk-mq-tag.h"
 
 /**
  * blk_pm_runtime_init - Block layer runtime PM initialization routine
@@ -68,14 +71,40 @@ int blk_pre_runtime_suspend(struct request_queue *q)
 	if (!q->dev)
 		return ret;
 
+	WARN_ON_ONCE(q->rpm_status != RPM_ACTIVE);
+
+	/*
+	 * Increase the pm_only counter before checking whether any
+	 * non-PM blk_queue_enter() calls are in progress to avoid that any
+	 * new non-PM blk_queue_enter() calls succeed before the pm_only
+	 * counter is decreased again.
+	 */
+	blk_set_pm_only(q);
+	ret = -EBUSY;
+	/* Switch q_usage_counter from per-cpu to atomic mode. */
+	blk_freeze_queue_start(q);
+	/*
+	 * Wait until atomic mode has been reached. Since that
+	 * involves calling call_rcu(), it is guaranteed that later
+	 * blk_queue_enter() calls see the pm-only state. See also
+	 * http://lwn.net/Articles/573497/.
+	 */
+	percpu_ref_switch_to_atomic_sync(&q->q_usage_counter);
+	if (percpu_ref_is_zero(&q->q_usage_counter))
+		ret = 0;
+	/* Switch q_usage_counter back to per-cpu mode. */
+	blk_mq_unfreeze_queue(q);
+
 	spin_lock_irq(q->queue_lock);
-	if (q->nr_pending) {
-		ret = -EBUSY;
+	if (ret < 0)
 		pm_runtime_mark_last_busy(q->dev);
-	} else {
+	else
 		q->rpm_status = RPM_SUSPENDING;
-	}
 	spin_unlock_irq(q->queue_lock);
+
+	if (ret)
+		blk_clear_pm_only(q);
+
 	return ret;
 }
 EXPORT_SYMBOL(blk_pre_runtime_suspend);
@@ -106,6 +135,9 @@ void blk_post_runtime_suspend(struct request_queue *q, int err)
 		pm_runtime_mark_last_busy(q->dev);
 	}
 	spin_unlock_irq(q->queue_lock);
+
+	if (err)
+		blk_clear_pm_only(q);
 }
 EXPORT_SYMBOL(blk_post_runtime_suspend);
 
@@ -153,13 +185,15 @@ void blk_post_runtime_resume(struct request_queue *q, int err)
 	spin_lock_irq(q->queue_lock);
 	if (!err) {
 		q->rpm_status = RPM_ACTIVE;
-		__blk_run_queue(q);
 		pm_runtime_mark_last_busy(q->dev);
 		pm_request_autosuspend(q->dev);
 	} else {
 		q->rpm_status = RPM_SUSPENDED;
 	}
 	spin_unlock_irq(q->queue_lock);
+
+	if (!err)
+		blk_clear_pm_only(q);
 }
 EXPORT_SYMBOL(blk_post_runtime_resume);
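
As a usage sketch (not part of the patch): the helpers touched above are
meant to be paired inside a driver's runtime PM callbacks, as described
by the kernel-doc in block/blk-pm.c. In the sketch below, struct
mydrv_device, the mydrv_* functions and the quiesce/wake hooks are
hypothetical placeholders; only the blk_*_runtime_* calls come from the
code above.

/* Hypothetical driver glue; a sketch only, not kernel code. */
#include <linux/blkdev.h>
#include <linux/blk-pm.h>
#include <linux/device.h>
#include <linux/pm_runtime.h>

struct mydrv_device {				/* hypothetical driver state */
	struct request_queue *queue;
};

/* Hypothetical hardware hooks, assumed to return 0 on success. */
static int mydrv_quiesce_hw(struct mydrv_device *mydev);
static int mydrv_wake_hw(struct mydrv_device *mydev);

static int mydrv_runtime_suspend(struct device *dev)
{
	struct mydrv_device *mydev = dev_get_drvdata(dev);
	int err;

	/* Fails with -EBUSY while non-PM requests are still in flight. */
	err = blk_pre_runtime_suspend(mydev->queue);
	if (err)
		return err;
	err = mydrv_quiesce_hw(mydev);		/* hypothetical */
	blk_post_runtime_suspend(mydev->queue, err);
	return err;
}

static int mydrv_runtime_resume(struct device *dev)
{
	struct mydrv_device *mydev = dev_get_drvdata(dev);
	int err;

	blk_pre_runtime_resume(mydev->queue);
	err = mydrv_wake_hw(mydev);		/* hypothetical */
	blk_post_runtime_resume(mydev->queue, err);
	return err;
}

The sketch assumes blk_pm_runtime_init() was called at probe time so that
q->dev is set; as the "if (!q->dev)" checks above show, the helpers return
early otherwise.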