From patchwork Tue Sep 18 20:58:56 2018
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10604871
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche, Ming Lei, Jianchao Wang, Hannes Reinecke, Johannes Thumshirn
Subject: [PATCH v8 1/8] blk-mq: Document the functions that iterate over requests
Date: Tue, 18 Sep 2018 13:58:56 -0700
Message-Id: <20180918205903.15516-2-bvanassche@acm.org>
In-Reply-To: <20180918205903.15516-1-bvanassche@acm.org>
References: <20180918205903.15516-1-bvanassche@acm.org>
Make it easier to understand the purpose of the functions that iterate over
requests by documenting their purpose. Fix two minor spelling mistakes in
comments in these functions.

Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Jianchao Wang
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
---
 block/blk-mq-tag.c | 28 ++++++++++++++++++++++++++--
 1 file changed, 26 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 94e1ed667b6e..ef3acb4a80e0 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -232,13 +232,19 @@ static bool bt_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
 
 	/*
 	 * We can hit rq == NULL here, because the tagging functions
-	 * test and set the bit before assining ->rqs[].
+	 * test and set the bit before assigning ->rqs[].
 	 */
 	if (rq && rq->q == hctx->queue)
 		iter_data->fn(hctx, rq, iter_data->data, reserved);
 	return true;
 }
 
+/*
+ * Call function @fn(@hctx, rq, @data, @reserved) for each request queued on
+ * @hctx that has been assigned a driver tag. @reserved indicates whether @bt
+ * is the breserved_tags member or the bitmap_tags member of struct
+ * blk_mq_tags.
+ */
 static void bt_for_each(struct blk_mq_hw_ctx *hctx, struct sbitmap_queue *bt,
 			busy_iter_fn *fn, void *data, bool reserved)
 {
@@ -280,6 +286,11 @@ static bool bt_tags_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
 	return true;
 }
 
+/*
+ * Call function @fn(rq, @data, @reserved) for each request in @tags that has
+ * been started. @reserved indicates whether @bt is the breserved_tags member
+ * or the bitmap_tags member of struct blk_mq_tags.
+ */
 static void bt_tags_for_each(struct blk_mq_tags *tags, struct sbitmap_queue *bt,
 			     busy_tag_iter_fn *fn, void *data, bool reserved)
 {
@@ -294,6 +305,10 @@ static void bt_tags_for_each(struct blk_mq_tags *tags, struct sbitmap_queue *bt,
 	sbitmap_for_each_set(&bt->sb, bt_tags_iter, &iter_data);
 }
 
+/*
+ * Call @fn(rq, @priv, reserved) for each started request in @tags. 'reserved'
+ * indicates whether or not 'rq' is a reserved request.
+ */
 static void blk_mq_all_tag_busy_iter(struct blk_mq_tags *tags,
		busy_tag_iter_fn *fn, void *priv)
 {
@@ -302,6 +317,10 @@ static void blk_mq_all_tag_busy_iter(struct blk_mq_tags *tags,
 	bt_tags_for_each(tags, &tags->bitmap_tags, fn, priv, false);
 }
 
+/*
+ * Call @fn(rq, @priv, reserved) for each request in @tagset. 'reserved'
+ * indicates whether or not 'rq' is a reserved request.
+ */
 void blk_mq_tagset_busy_iter(struct blk_mq_tag_set *tagset,
		busy_tag_iter_fn *fn, void *priv)
 {
@@ -314,6 +333,11 @@ void blk_mq_tagset_busy_iter(struct blk_mq_tag_set *tagset,
 }
 EXPORT_SYMBOL(blk_mq_tagset_busy_iter);
 
+/*
+ * Call @fn(rq, @priv, reserved) for each request associated with request
+ * queue @q or any queue it shares tags with and that has been assigned a
+ * driver tag. 'reserved' indicates whether or not 'rq' is a reserved request.
+ */
 void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
		void *priv)
 {
@@ -337,7 +361,7 @@ void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
 		struct blk_mq_tags *tags = hctx->tags;
 
 		/*
-		 * If not software queues are currently mapped to this
+		 * If no software queues are currently mapped to this
		 * hardware queue, there's nothing to check
 		 */
 		if (!blk_mq_hw_queue_mapped(hctx))
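For illustration only, this sketch is not part of the patch: a minimal example of how a blk-mq driver might use the exported blk_mq_tagset_busy_iter() that the comments above document. The callback matches the busy_tag_iter_fn signature; the my_count_* names are hypothetical.

#include <linux/blk-mq.h>

/* Hypothetical callback; matches the busy_tag_iter_fn signature. */
static void my_count_busy(struct request *rq, void *priv, bool reserved)
{
	unsigned int *count = priv;

	/* 'reserved' tells us whether @rq was allocated from the reserved tags. */
	(*count)++;
}

/* Hypothetical helper: count the started requests that belong to a tag set. */
static unsigned int my_count_inflight(struct blk_mq_tag_set *set)
{
	unsigned int count = 0;

	blk_mq_tagset_busy_iter(set, my_count_busy, &count);
	return count;
}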
From patchwork Tue Sep 18 20:58:57 2018
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10604875
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche, Ming Lei, Jianchao Wang, Hannes Reinecke, Johannes Thumshirn, Alan Stern
Subject: [PATCH v8 2/8] blk-mq: Introduce blk_mq_queue_rq_iter()
Date: Tue, 18 Sep 2018 13:58:57 -0700
Message-Id: <20180918205903.15516-3-bvanassche@acm.org>
In-Reply-To: <20180918205903.15516-1-bvanassche@acm.org>
References: <20180918205903.15516-1-bvanassche@acm.org>

This function will be used in the patch "Make blk_get_request() block for
non-PM requests while suspended".

Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Jianchao Wang
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Alan Stern
---
 block/blk-mq-tag.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
 block/blk-mq-tag.h |  2 ++
 2 files changed, 46 insertions(+)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index ef3acb4a80e0..cf8537017f78 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -374,6 +374,50 @@ void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
 	rcu_read_unlock();
 }
 
+/*
+ * Call @fn(rq, @priv, reserved) for each request associated with request
+ * queue @q or any queue that it shares tags with and that has been assigned a
+ * tag. 'reserved' indicates whether or not 'rq' is a reserved request. In
+ * contrast to blk_mq_queue_tag_busy_iter(), if an I/O scheduler has been
+ * associated with @q, this function also iterates over requests that have
+ * been assigned a scheduler tag but that have not yet been assigned a driver
+ * tag.
+ */
+void blk_mq_queue_rq_iter(struct request_queue *q, busy_iter_fn *fn, void *priv)
+{
+	struct blk_mq_hw_ctx *hctx;
+	int i;
+
+	/*
+	 * __blk_mq_update_nr_hw_queues will update the nr_hw_queues and
+	 * queue_hw_ctx after freeze the queue. So we could use q_usage_counter
+	 * to avoid race with it. __blk_mq_update_nr_hw_queues will users
+	 * synchronize_rcu to ensure all of the users go out of the critical
+	 * section below and see zeroed q_usage_counter.
+	 */
+	rcu_read_lock();
+	if (percpu_ref_is_zero(&q->q_usage_counter)) {
+		rcu_read_unlock();
+		return;
+	}
+
+	queue_for_each_hw_ctx(q, hctx, i) {
+		struct blk_mq_tags *tags = hctx->sched_tags ? : hctx->tags;
+
+		/*
+		 * If no software queues are currently mapped to this
+		 * hardware queue, there's nothing to check
+		 */
+		if (!blk_mq_hw_queue_mapped(hctx))
+			continue;
+
+		if (tags->nr_reserved_tags)
+			bt_for_each(hctx, &tags->breserved_tags, fn, priv, true);
+		bt_for_each(hctx, &tags->bitmap_tags, fn, priv, false);
+	}
+	rcu_read_unlock();
+}
+
 static int bt_alloc(struct sbitmap_queue *bt, unsigned int depth,
		    bool round_robin, int node)
 {
diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h
index 61deab0b5a5a..25e62997ed6c 100644
--- a/block/blk-mq-tag.h
+++ b/block/blk-mq-tag.h
@@ -35,6 +35,8 @@ extern int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
 extern void blk_mq_tag_wakeup_all(struct blk_mq_tags *tags, bool);
 void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
		void *priv);
+void blk_mq_queue_rq_iter(struct request_queue *q, busy_iter_fn *fn,
+			  void *priv);
 
 static inline struct sbq_wait_state *bt_wait_ptr(struct sbitmap_queue *bt,
						 struct blk_mq_hw_ctx *hctx)
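For illustration only, this is not the code of the later patch that the commit message refers to: a sketch of the kind of block-layer-internal caller blk_mq_queue_rq_iter() enables, namely checking whether a queue still holds any tagged request that is not a power management request. The callback matches the busy_iter_fn signature; the my_* names are hypothetical.

#include <linux/blkdev.h>
#include "blk-mq-tag.h"	/* blk_mq_queue_rq_iter() is block-layer internal */

/* Hypothetical callback; matches the busy_iter_fn signature. */
static void my_count_non_pm(struct blk_mq_hw_ctx *hctx, struct request *rq,
			    void *priv, bool reserved)
{
	int *pending = priv;

	/* Skip power management requests; count everything else. */
	if (!(rq->rq_flags & RQF_PM))
		(*pending)++;
}

/* Hypothetical helper: does @q hold any tagged request that is not a PM request? */
static bool my_queue_has_non_pm_requests(struct request_queue *q)
{
	int pending = 0;

	blk_mq_queue_rq_iter(q, my_count_non_pm, &pending);
	return pending > 0;
}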
From patchwork Tue Sep 18 20:58:58 2018
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10604879
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche, Ming Lei, Jianchao Wang, Hannes Reinecke, Johannes Thumshirn, Alan Stern
Subject: [PATCH v8 3/8] block: Move power management code into a new source file
Date: Tue, 18 Sep 2018 13:58:58 -0700
Message-Id: <20180918205903.15516-4-bvanassche@acm.org>
In-Reply-To: <20180918205903.15516-1-bvanassche@acm.org>
References: <20180918205903.15516-1-bvanassche@acm.org>

Move the code for runtime power management from blk-core.c into the new
source file blk-pm.c. Move the corresponding declarations from
<linux/blkdev.h> into <linux/blk-pm.h>.
For CONFIG_PM=n, leave out the declarations of the functions that are not used in that mode. This patch not only reduces the number of #ifdefs in the block layer core code but also reduces the size of header file and hence should help to reduce the build time of the Linux kernel if CONFIG_PM is not defined. Signed-off-by: Bart Van Assche Cc: Christoph Hellwig Cc: Ming Lei Cc: Jianchao Wang Cc: Hannes Reinecke Cc: Johannes Thumshirn Cc: Alan Stern --- block/Kconfig | 3 + block/Makefile | 1 + block/blk-core.c | 196 +---------------------------------------- block/blk-pm.c | 188 +++++++++++++++++++++++++++++++++++++++ block/blk-pm.h | 43 +++++++++ block/elevator.c | 22 +---- drivers/scsi/scsi_pm.c | 1 + drivers/scsi/sd.c | 1 + drivers/scsi/sr.c | 1 + include/linux/blk-pm.h | 24 +++++ include/linux/blkdev.h | 23 ----- 11 files changed, 264 insertions(+), 239 deletions(-) create mode 100644 block/blk-pm.c create mode 100644 block/blk-pm.h create mode 100644 include/linux/blk-pm.h diff --git a/block/Kconfig b/block/Kconfig index 1f2469a0123c..85263e7bded6 100644 --- a/block/Kconfig +++ b/block/Kconfig @@ -228,4 +228,7 @@ config BLK_MQ_RDMA depends on BLOCK && INFINIBAND default y +config BLK_PM + def_bool BLOCK && PM + source block/Kconfig.iosched diff --git a/block/Makefile b/block/Makefile index 572b33f32c07..27eac600474f 100644 --- a/block/Makefile +++ b/block/Makefile @@ -37,3 +37,4 @@ obj-$(CONFIG_BLK_WBT) += blk-wbt.o obj-$(CONFIG_BLK_DEBUG_FS) += blk-mq-debugfs.o obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o obj-$(CONFIG_BLK_SED_OPAL) += sed-opal.o +obj-$(CONFIG_BLK_PM) += blk-pm.o diff --git a/block/blk-core.c b/block/blk-core.c index 4dbc93f43b38..6d4dd176bd9d 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -42,6 +42,7 @@ #include "blk.h" #include "blk-mq.h" #include "blk-mq-sched.h" +#include "blk-pm.h" #include "blk-rq-qos.h" #ifdef CONFIG_DEBUG_FS @@ -1726,16 +1727,6 @@ void part_round_stats(struct request_queue *q, int cpu, struct hd_struct *part) } EXPORT_SYMBOL_GPL(part_round_stats); -#ifdef CONFIG_PM -static void blk_pm_put_request(struct request *rq) -{ - if (rq->q->dev && !(rq->rq_flags & RQF_PM) && !--rq->q->nr_pending) - pm_runtime_mark_last_busy(rq->q->dev); -} -#else -static inline void blk_pm_put_request(struct request *rq) {} -#endif - void __blk_put_request(struct request_queue *q, struct request *req) { req_flags_t rq_flags = req->rq_flags; @@ -3757,191 +3748,6 @@ void blk_finish_plug(struct blk_plug *plug) } EXPORT_SYMBOL(blk_finish_plug); -#ifdef CONFIG_PM -/** - * blk_pm_runtime_init - Block layer runtime PM initialization routine - * @q: the queue of the device - * @dev: the device the queue belongs to - * - * Description: - * Initialize runtime-PM-related fields for @q and start auto suspend for - * @dev. Drivers that want to take advantage of request-based runtime PM - * should call this function after @dev has been initialized, and its - * request queue @q has been allocated, and runtime PM for it can not happen - * yet(either due to disabled/forbidden or its usage_count > 0). In most - * cases, driver should call this function before any I/O has taken place. - * - * This function takes care of setting up using auto suspend for the device, - * the autosuspend delay is set to -1 to make runtime suspend impossible - * until an updated value is either set by user or by driver. Drivers do - * not need to touch other autosuspend settings. 
- * - * The block layer runtime PM is request based, so only works for drivers - * that use request as their IO unit instead of those directly use bio's. - */ -void blk_pm_runtime_init(struct request_queue *q, struct device *dev) -{ - /* Don't enable runtime PM for blk-mq until it is ready */ - if (q->mq_ops) { - pm_runtime_disable(dev); - return; - } - - q->dev = dev; - q->rpm_status = RPM_ACTIVE; - pm_runtime_set_autosuspend_delay(q->dev, -1); - pm_runtime_use_autosuspend(q->dev); -} -EXPORT_SYMBOL(blk_pm_runtime_init); - -/** - * blk_pre_runtime_suspend - Pre runtime suspend check - * @q: the queue of the device - * - * Description: - * This function will check if runtime suspend is allowed for the device - * by examining if there are any requests pending in the queue. If there - * are requests pending, the device can not be runtime suspended; otherwise, - * the queue's status will be updated to SUSPENDING and the driver can - * proceed to suspend the device. - * - * For the not allowed case, we mark last busy for the device so that - * runtime PM core will try to autosuspend it some time later. - * - * This function should be called near the start of the device's - * runtime_suspend callback. - * - * Return: - * 0 - OK to runtime suspend the device - * -EBUSY - Device should not be runtime suspended - */ -int blk_pre_runtime_suspend(struct request_queue *q) -{ - int ret = 0; - - if (!q->dev) - return ret; - - spin_lock_irq(q->queue_lock); - if (q->nr_pending) { - ret = -EBUSY; - pm_runtime_mark_last_busy(q->dev); - } else { - q->rpm_status = RPM_SUSPENDING; - } - spin_unlock_irq(q->queue_lock); - return ret; -} -EXPORT_SYMBOL(blk_pre_runtime_suspend); - -/** - * blk_post_runtime_suspend - Post runtime suspend processing - * @q: the queue of the device - * @err: return value of the device's runtime_suspend function - * - * Description: - * Update the queue's runtime status according to the return value of the - * device's runtime suspend function and mark last busy for the device so - * that PM core will try to auto suspend the device at a later time. - * - * This function should be called near the end of the device's - * runtime_suspend callback. - */ -void blk_post_runtime_suspend(struct request_queue *q, int err) -{ - if (!q->dev) - return; - - spin_lock_irq(q->queue_lock); - if (!err) { - q->rpm_status = RPM_SUSPENDED; - } else { - q->rpm_status = RPM_ACTIVE; - pm_runtime_mark_last_busy(q->dev); - } - spin_unlock_irq(q->queue_lock); -} -EXPORT_SYMBOL(blk_post_runtime_suspend); - -/** - * blk_pre_runtime_resume - Pre runtime resume processing - * @q: the queue of the device - * - * Description: - * Update the queue's runtime status to RESUMING in preparation for the - * runtime resume of the device. - * - * This function should be called near the start of the device's - * runtime_resume callback. - */ -void blk_pre_runtime_resume(struct request_queue *q) -{ - if (!q->dev) - return; - - spin_lock_irq(q->queue_lock); - q->rpm_status = RPM_RESUMING; - spin_unlock_irq(q->queue_lock); -} -EXPORT_SYMBOL(blk_pre_runtime_resume); - -/** - * blk_post_runtime_resume - Post runtime resume processing - * @q: the queue of the device - * @err: return value of the device's runtime_resume function - * - * Description: - * Update the queue's runtime status according to the return value of the - * device's runtime_resume function. 
If it is successfully resumed, process - * the requests that are queued into the device's queue when it is resuming - * and then mark last busy and initiate autosuspend for it. - * - * This function should be called near the end of the device's - * runtime_resume callback. - */ -void blk_post_runtime_resume(struct request_queue *q, int err) -{ - if (!q->dev) - return; - - spin_lock_irq(q->queue_lock); - if (!err) { - q->rpm_status = RPM_ACTIVE; - __blk_run_queue(q); - pm_runtime_mark_last_busy(q->dev); - pm_request_autosuspend(q->dev); - } else { - q->rpm_status = RPM_SUSPENDED; - } - spin_unlock_irq(q->queue_lock); -} -EXPORT_SYMBOL(blk_post_runtime_resume); - -/** - * blk_set_runtime_active - Force runtime status of the queue to be active - * @q: the queue of the device - * - * If the device is left runtime suspended during system suspend the resume - * hook typically resumes the device and corrects runtime status - * accordingly. However, that does not affect the queue runtime PM status - * which is still "suspended". This prevents processing requests from the - * queue. - * - * This function can be used in driver's resume hook to correct queue - * runtime PM status and re-enable peeking requests from the queue. It - * should be called before first request is added to the queue. - */ -void blk_set_runtime_active(struct request_queue *q) -{ - spin_lock_irq(q->queue_lock); - q->rpm_status = RPM_ACTIVE; - pm_runtime_mark_last_busy(q->dev); - pm_request_autosuspend(q->dev); - spin_unlock_irq(q->queue_lock); -} -EXPORT_SYMBOL(blk_set_runtime_active); -#endif - int __init blk_dev_init(void) { BUILD_BUG_ON(REQ_OP_LAST >= (1 << REQ_OP_BITS)); diff --git a/block/blk-pm.c b/block/blk-pm.c new file mode 100644 index 000000000000..9b636960d285 --- /dev/null +++ b/block/blk-pm.c @@ -0,0 +1,188 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include +#include + +/** + * blk_pm_runtime_init - Block layer runtime PM initialization routine + * @q: the queue of the device + * @dev: the device the queue belongs to + * + * Description: + * Initialize runtime-PM-related fields for @q and start auto suspend for + * @dev. Drivers that want to take advantage of request-based runtime PM + * should call this function after @dev has been initialized, and its + * request queue @q has been allocated, and runtime PM for it can not happen + * yet(either due to disabled/forbidden or its usage_count > 0). In most + * cases, driver should call this function before any I/O has taken place. + * + * This function takes care of setting up using auto suspend for the device, + * the autosuspend delay is set to -1 to make runtime suspend impossible + * until an updated value is either set by user or by driver. Drivers do + * not need to touch other autosuspend settings. + * + * The block layer runtime PM is request based, so only works for drivers + * that use request as their IO unit instead of those directly use bio's. 
+ */ +void blk_pm_runtime_init(struct request_queue *q, struct device *dev) +{ + /* Don't enable runtime PM for blk-mq until it is ready */ + if (q->mq_ops) { + pm_runtime_disable(dev); + return; + } + + q->dev = dev; + q->rpm_status = RPM_ACTIVE; + pm_runtime_set_autosuspend_delay(q->dev, -1); + pm_runtime_use_autosuspend(q->dev); +} +EXPORT_SYMBOL(blk_pm_runtime_init); + +/** + * blk_pre_runtime_suspend - Pre runtime suspend check + * @q: the queue of the device + * + * Description: + * This function will check if runtime suspend is allowed for the device + * by examining if there are any requests pending in the queue. If there + * are requests pending, the device can not be runtime suspended; otherwise, + * the queue's status will be updated to SUSPENDING and the driver can + * proceed to suspend the device. + * + * For the not allowed case, we mark last busy for the device so that + * runtime PM core will try to autosuspend it some time later. + * + * This function should be called near the start of the device's + * runtime_suspend callback. + * + * Return: + * 0 - OK to runtime suspend the device + * -EBUSY - Device should not be runtime suspended + */ +int blk_pre_runtime_suspend(struct request_queue *q) +{ + int ret = 0; + + if (!q->dev) + return ret; + + spin_lock_irq(q->queue_lock); + if (q->nr_pending) { + ret = -EBUSY; + pm_runtime_mark_last_busy(q->dev); + } else { + q->rpm_status = RPM_SUSPENDING; + } + spin_unlock_irq(q->queue_lock); + return ret; +} +EXPORT_SYMBOL(blk_pre_runtime_suspend); + +/** + * blk_post_runtime_suspend - Post runtime suspend processing + * @q: the queue of the device + * @err: return value of the device's runtime_suspend function + * + * Description: + * Update the queue's runtime status according to the return value of the + * device's runtime suspend function and mark last busy for the device so + * that PM core will try to auto suspend the device at a later time. + * + * This function should be called near the end of the device's + * runtime_suspend callback. + */ +void blk_post_runtime_suspend(struct request_queue *q, int err) +{ + if (!q->dev) + return; + + spin_lock_irq(q->queue_lock); + if (!err) { + q->rpm_status = RPM_SUSPENDED; + } else { + q->rpm_status = RPM_ACTIVE; + pm_runtime_mark_last_busy(q->dev); + } + spin_unlock_irq(q->queue_lock); +} +EXPORT_SYMBOL(blk_post_runtime_suspend); + +/** + * blk_pre_runtime_resume - Pre runtime resume processing + * @q: the queue of the device + * + * Description: + * Update the queue's runtime status to RESUMING in preparation for the + * runtime resume of the device. + * + * This function should be called near the start of the device's + * runtime_resume callback. + */ +void blk_pre_runtime_resume(struct request_queue *q) +{ + if (!q->dev) + return; + + spin_lock_irq(q->queue_lock); + q->rpm_status = RPM_RESUMING; + spin_unlock_irq(q->queue_lock); +} +EXPORT_SYMBOL(blk_pre_runtime_resume); + +/** + * blk_post_runtime_resume - Post runtime resume processing + * @q: the queue of the device + * @err: return value of the device's runtime_resume function + * + * Description: + * Update the queue's runtime status according to the return value of the + * device's runtime_resume function. If it is successfully resumed, process + * the requests that are queued into the device's queue when it is resuming + * and then mark last busy and initiate autosuspend for it. + * + * This function should be called near the end of the device's + * runtime_resume callback. 
+ */ +void blk_post_runtime_resume(struct request_queue *q, int err) +{ + if (!q->dev) + return; + + spin_lock_irq(q->queue_lock); + if (!err) { + q->rpm_status = RPM_ACTIVE; + __blk_run_queue(q); + pm_runtime_mark_last_busy(q->dev); + pm_request_autosuspend(q->dev); + } else { + q->rpm_status = RPM_SUSPENDED; + } + spin_unlock_irq(q->queue_lock); +} +EXPORT_SYMBOL(blk_post_runtime_resume); + +/** + * blk_set_runtime_active - Force runtime status of the queue to be active + * @q: the queue of the device + * + * If the device is left runtime suspended during system suspend the resume + * hook typically resumes the device and corrects runtime status + * accordingly. However, that does not affect the queue runtime PM status + * which is still "suspended". This prevents processing requests from the + * queue. + * + * This function can be used in driver's resume hook to correct queue + * runtime PM status and re-enable peeking requests from the queue. It + * should be called before first request is added to the queue. + */ +void blk_set_runtime_active(struct request_queue *q) +{ + spin_lock_irq(q->queue_lock); + q->rpm_status = RPM_ACTIVE; + pm_runtime_mark_last_busy(q->dev); + pm_request_autosuspend(q->dev); + spin_unlock_irq(q->queue_lock); +} +EXPORT_SYMBOL(blk_set_runtime_active); diff --git a/block/blk-pm.h b/block/blk-pm.h new file mode 100644 index 000000000000..1ffc8ef203ec --- /dev/null +++ b/block/blk-pm.h @@ -0,0 +1,43 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef _BLOCK_BLK_PM_H_ +#define _BLOCK_BLK_PM_H_ + +#include + +#ifdef CONFIG_PM +static inline void blk_pm_requeue_request(struct request *rq) +{ + if (rq->q->dev && !(rq->rq_flags & RQF_PM)) + rq->q->nr_pending--; +} + +static inline void blk_pm_add_request(struct request_queue *q, + struct request *rq) +{ + if (q->dev && !(rq->rq_flags & RQF_PM) && q->nr_pending++ == 0 && + (q->rpm_status == RPM_SUSPENDED || q->rpm_status == RPM_SUSPENDING)) + pm_request_resume(q->dev); +} + +static inline void blk_pm_put_request(struct request *rq) +{ + if (rq->q->dev && !(rq->rq_flags & RQF_PM) && !--rq->q->nr_pending) + pm_runtime_mark_last_busy(rq->q->dev); +} +#else +static inline void blk_pm_requeue_request(struct request *rq) +{ +} + +static inline void blk_pm_add_request(struct request_queue *q, + struct request *rq) +{ +} + +static inline void blk_pm_put_request(struct request *rq) +{ +} +#endif + +#endif /* _BLOCK_BLK_PM_H_ */ diff --git a/block/elevator.c b/block/elevator.c index 6a06b5d040e5..e18ac68626e3 100644 --- a/block/elevator.c +++ b/block/elevator.c @@ -41,6 +41,7 @@ #include "blk.h" #include "blk-mq-sched.h" +#include "blk-pm.h" #include "blk-wbt.h" static DEFINE_SPINLOCK(elv_list_lock); @@ -557,27 +558,6 @@ void elv_bio_merged(struct request_queue *q, struct request *rq, e->type->ops.sq.elevator_bio_merged_fn(q, rq, bio); } -#ifdef CONFIG_PM -static void blk_pm_requeue_request(struct request *rq) -{ - if (rq->q->dev && !(rq->rq_flags & RQF_PM)) - rq->q->nr_pending--; -} - -static void blk_pm_add_request(struct request_queue *q, struct request *rq) -{ - if (q->dev && !(rq->rq_flags & RQF_PM) && q->nr_pending++ == 0 && - (q->rpm_status == RPM_SUSPENDED || q->rpm_status == RPM_SUSPENDING)) - pm_request_resume(q->dev); -} -#else -static inline void blk_pm_requeue_request(struct request *rq) {} -static inline void blk_pm_add_request(struct request_queue *q, - struct request *rq) -{ -} -#endif - void elv_requeue_request(struct request_queue *q, struct request *rq) { /* diff --git a/drivers/scsi/scsi_pm.c 
b/drivers/scsi/scsi_pm.c index b44c1bb687a2..a2b4179bfdf7 100644 --- a/drivers/scsi/scsi_pm.c +++ b/drivers/scsi/scsi_pm.c @@ -8,6 +8,7 @@ #include #include #include +#include #include #include diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c index b79b366a94f7..64514e8359e4 100644 --- a/drivers/scsi/sd.c +++ b/drivers/scsi/sd.c @@ -45,6 +45,7 @@ #include #include #include +#include #include #include #include diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c index d0389b20574d..4f07b3410595 100644 --- a/drivers/scsi/sr.c +++ b/drivers/scsi/sr.c @@ -43,6 +43,7 @@ #include #include #include +#include #include #include #include diff --git a/include/linux/blk-pm.h b/include/linux/blk-pm.h new file mode 100644 index 000000000000..b80c65aba249 --- /dev/null +++ b/include/linux/blk-pm.h @@ -0,0 +1,24 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef _BLK_PM_H_ +#define _BLK_PM_H_ + +struct device; +struct request_queue; + +/* + * block layer runtime pm functions + */ +#ifdef CONFIG_PM +extern void blk_pm_runtime_init(struct request_queue *q, struct device *dev); +extern int blk_pre_runtime_suspend(struct request_queue *q); +extern void blk_post_runtime_suspend(struct request_queue *q, int err); +extern void blk_pre_runtime_resume(struct request_queue *q); +extern void blk_post_runtime_resume(struct request_queue *q, int err); +extern void blk_set_runtime_active(struct request_queue *q); +#else +static inline void blk_pm_runtime_init(struct request_queue *q, + struct device *dev) {} +#endif + +#endif /* _BLK_PM_H_ */ diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 6980014357d4..e25ce42c982d 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -1280,29 +1280,6 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id, extern void blk_put_queue(struct request_queue *); extern void blk_set_queue_dying(struct request_queue *); -/* - * block layer runtime pm functions - */ -#ifdef CONFIG_PM -extern void blk_pm_runtime_init(struct request_queue *q, struct device *dev); -extern int blk_pre_runtime_suspend(struct request_queue *q); -extern void blk_post_runtime_suspend(struct request_queue *q, int err); -extern void blk_pre_runtime_resume(struct request_queue *q); -extern void blk_post_runtime_resume(struct request_queue *q, int err); -extern void blk_set_runtime_active(struct request_queue *q); -#else -static inline void blk_pm_runtime_init(struct request_queue *q, - struct device *dev) {} -static inline int blk_pre_runtime_suspend(struct request_queue *q) -{ - return -ENOSYS; -} -static inline void blk_post_runtime_suspend(struct request_queue *q, int err) {} -static inline void blk_pre_runtime_resume(struct request_queue *q) {} -static inline void blk_post_runtime_resume(struct request_queue *q, int err) {} -static inline void blk_set_runtime_active(struct request_queue *q) {} -#endif - /* * blk_plug permits building a queue of related requests by holding the I/O * fragments for a short period. 
This allows merging of sequential requests
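For illustration only, this sketch is not part of the series: how a legacy (non-blk-mq) block driver is expected to wire the helpers moved into block/blk-pm.c into its runtime PM callbacks, following the calling convention described in the kernel-doc comments above and the pattern used by the SCSI core. struct my_dev and the my_hw_* hooks are hypothetical.

#include <linux/blk-pm.h>
#include <linux/blkdev.h>
#include <linux/pm_runtime.h>

/* Hypothetical driver-private structure. */
struct my_dev {
	struct request_queue *queue;
};

/* Hypothetical hardware hooks provided elsewhere by the driver. */
int my_hw_suspend(struct my_dev *mydev);
int my_hw_resume(struct my_dev *mydev);

static int my_runtime_suspend(struct device *dev)
{
	struct my_dev *mydev = dev_get_drvdata(dev);
	int err;

	/* Refuses the suspend (-EBUSY) while requests are still pending. */
	err = blk_pre_runtime_suspend(mydev->queue);
	if (err)
		return err;
	err = my_hw_suspend(mydev);
	blk_post_runtime_suspend(mydev->queue, err);
	return err;
}

static int my_runtime_resume(struct device *dev)
{
	struct my_dev *mydev = dev_get_drvdata(dev);
	int err;

	blk_pre_runtime_resume(mydev->queue);
	err = my_hw_resume(mydev);
	/* On success this restarts the queue and re-arms autosuspend. */
	blk_post_runtime_resume(mydev->queue, err);
	return err;
}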
From patchwork Tue Sep 18 20:58:59 2018
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10604873
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche, "Martin K. Petersen", Ming Lei, Jianchao Wang, Johannes Thumshirn, Alan Stern
Subject: [PATCH v8 4/8] block, scsi: Change the preempt-only flag into a counter
Date: Tue, 18 Sep 2018 13:58:59 -0700
Message-Id: <20180918205903.15516-5-bvanassche@acm.org>
In-Reply-To: <20180918205903.15516-1-bvanassche@acm.org>
References: <20180918205903.15516-1-bvanassche@acm.org>

The RQF_PREEMPT flag is used for three purposes:
- In the SCSI core, for making sure that power management requests are
  executed even if a device is in the "quiesced" state.
- For domain validation by SCSI drivers that use the parallel port.
- In the IDE driver, for IDE preempt requests.
Rename "preempt-only" into "pm-only" because the primary purpose of this mode
is power management. Since the power management core may but does not have to
resume a runtime suspended device before performing system-wide suspend and
since a later patch will set "pm-only" mode as long as a block device is
runtime suspended, make it possible to set "pm-only" mode from more than one
context. Since with this change scsi_device_quiesce() is no longer idempotent,
make that function return early if it is called for a quiesced queue.

Signed-off-by: Bart Van Assche
Cc: Martin K.
Petersen Reviewed-by: Hannes Reinecke Reviewed-by: Christoph Hellwig Cc: Ming Lei Cc: Jianchao Wang Cc: Johannes Thumshirn Cc: Alan Stern --- block/blk-core.c | 35 ++++++++++++++++++----------------- block/blk-mq-debugfs.c | 10 +++++++++- drivers/scsi/scsi_lib.c | 11 +++++++---- include/linux/blkdev.h | 14 +++++++++----- 4 files changed, 43 insertions(+), 27 deletions(-) diff --git a/block/blk-core.c b/block/blk-core.c index 6d4dd176bd9d..1a691f5269bb 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -422,24 +422,25 @@ void blk_sync_queue(struct request_queue *q) EXPORT_SYMBOL(blk_sync_queue); /** - * blk_set_preempt_only - set QUEUE_FLAG_PREEMPT_ONLY + * blk_set_pm_only - increment pm_only counter * @q: request queue pointer - * - * Returns the previous value of the PREEMPT_ONLY flag - 0 if the flag was not - * set and 1 if the flag was already set. */ -int blk_set_preempt_only(struct request_queue *q) +void blk_set_pm_only(struct request_queue *q) { - return blk_queue_flag_test_and_set(QUEUE_FLAG_PREEMPT_ONLY, q); + atomic_inc(&q->pm_only); } -EXPORT_SYMBOL_GPL(blk_set_preempt_only); +EXPORT_SYMBOL_GPL(blk_set_pm_only); -void blk_clear_preempt_only(struct request_queue *q) +void blk_clear_pm_only(struct request_queue *q) { - blk_queue_flag_clear(QUEUE_FLAG_PREEMPT_ONLY, q); - wake_up_all(&q->mq_freeze_wq); + int pm_only; + + pm_only = atomic_dec_return(&q->pm_only); + WARN_ON_ONCE(pm_only < 0); + if (pm_only == 0) + wake_up_all(&q->mq_freeze_wq); } -EXPORT_SYMBOL_GPL(blk_clear_preempt_only); +EXPORT_SYMBOL_GPL(blk_clear_pm_only); /** * __blk_run_queue_uncond - run a queue whether or not it has been stopped @@ -918,7 +919,7 @@ EXPORT_SYMBOL(blk_alloc_queue); */ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags) { - const bool preempt = flags & BLK_MQ_REQ_PREEMPT; + const bool pm = flags & BLK_MQ_REQ_PREEMPT; while (true) { bool success = false; @@ -926,11 +927,11 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags) rcu_read_lock(); if (percpu_ref_tryget_live(&q->q_usage_counter)) { /* - * The code that sets the PREEMPT_ONLY flag is - * responsible for ensuring that that flag is globally - * visible before the queue is unfrozen. + * The code that increments the pm_only counter is + * responsible for ensuring that that counter is + * globally visible before the queue is unfrozen. 
*/ - if (preempt || !blk_queue_preempt_only(q)) { + if (pm || !blk_queue_pm_only(q)) { success = true; } else { percpu_ref_put(&q->q_usage_counter); @@ -955,7 +956,7 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags) wait_event(q->mq_freeze_wq, (atomic_read(&q->mq_freeze_depth) == 0 && - (preempt || !blk_queue_preempt_only(q))) || + (pm || !blk_queue_pm_only(q))) || blk_queue_dying(q)); if (blk_queue_dying(q)) return -ENODEV; diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c index cb1e6cf7ac48..a5ea86835fcb 100644 --- a/block/blk-mq-debugfs.c +++ b/block/blk-mq-debugfs.c @@ -102,6 +102,14 @@ static int blk_flags_show(struct seq_file *m, const unsigned long flags, return 0; } +static int queue_pm_only_show(void *data, struct seq_file *m) +{ + struct request_queue *q = data; + + seq_printf(m, "%d\n", atomic_read(&q->pm_only)); + return 0; +} + #define QUEUE_FLAG_NAME(name) [QUEUE_FLAG_##name] = #name static const char *const blk_queue_flag_name[] = { QUEUE_FLAG_NAME(QUEUED), @@ -132,7 +140,6 @@ static const char *const blk_queue_flag_name[] = { QUEUE_FLAG_NAME(REGISTERED), QUEUE_FLAG_NAME(SCSI_PASSTHROUGH), QUEUE_FLAG_NAME(QUIESCED), - QUEUE_FLAG_NAME(PREEMPT_ONLY), }; #undef QUEUE_FLAG_NAME @@ -209,6 +216,7 @@ static ssize_t queue_write_hint_store(void *data, const char __user *buf, static const struct blk_mq_debugfs_attr blk_mq_debugfs_queue_attrs[] = { { "poll_stat", 0400, queue_poll_stat_show }, { "requeue_list", 0400, .seq_ops = &queue_requeue_list_seq_ops }, + { "pm_only", 0600, queue_pm_only_show, NULL }, { "state", 0600, queue_state_show, queue_state_write }, { "write_hints", 0600, queue_write_hint_show, queue_write_hint_store }, { "zone_wlock", 0400, queue_zone_wlock_show, NULL }, diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c index eb97d2dd3651..62348412ed1b 100644 --- a/drivers/scsi/scsi_lib.c +++ b/drivers/scsi/scsi_lib.c @@ -3046,11 +3046,14 @@ scsi_device_quiesce(struct scsi_device *sdev) */ WARN_ON_ONCE(sdev->quiesced_by && sdev->quiesced_by != current); - blk_set_preempt_only(q); + if (sdev->quiesced_by == current) + return 0; + + blk_set_pm_only(q); blk_mq_freeze_queue(q); /* - * Ensure that the effect of blk_set_preempt_only() will be visible + * Ensure that the effect of blk_set_pm_only() will be visible * for percpu_ref_tryget() callers that occur after the queue * unfreeze even if the queue was already frozen before this function * was called. See also https://lwn.net/Articles/573497/. @@ -3063,7 +3066,7 @@ scsi_device_quiesce(struct scsi_device *sdev) if (err == 0) sdev->quiesced_by = current; else - blk_clear_preempt_only(q); + blk_clear_pm_only(q); mutex_unlock(&sdev->state_mutex); return err; @@ -3088,7 +3091,7 @@ void scsi_device_resume(struct scsi_device *sdev) mutex_lock(&sdev->state_mutex); WARN_ON_ONCE(!sdev->quiesced_by); sdev->quiesced_by = NULL; - blk_clear_preempt_only(sdev->request_queue); + blk_clear_pm_only(sdev->request_queue); if (sdev->sdev_state == SDEV_QUIESCE) scsi_device_set_state(sdev, SDEV_RUNNING); mutex_unlock(&sdev->state_mutex); diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index e25ce42c982d..5f6c36b5058c 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -504,6 +504,12 @@ struct request_queue { * various queue flags, see QUEUE_* below */ unsigned long queue_flags; + /* + * Number of contexts that have called blk_set_pm_only(). If this + * counter is above zero then only RQF_PM and RQF_PREEMPT requests are + * processed. 
+	 */
+	atomic_t		pm_only;
 
 	/*
 	 * ida allocated id for this queue. Used to index queues from
@@ -698,7 +704,6 @@ struct request_queue {
 #define QUEUE_FLAG_REGISTERED	26	/* queue has been registered to a disk */
 #define QUEUE_FLAG_SCSI_PASSTHROUGH 27	/* queue supports SCSI commands */
 #define QUEUE_FLAG_QUIESCED	28	/* queue has been quiesced */
-#define QUEUE_FLAG_PREEMPT_ONLY	29	/* only process REQ_PREEMPT requests */
 
 #define QUEUE_FLAG_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |	\
				 (1 << QUEUE_FLAG_SAME_COMP) |	\
@@ -736,12 +741,11 @@ bool blk_queue_flag_test_and_clear(unsigned int flag, struct request_queue *q);
	((rq)->cmd_flags & (REQ_FAILFAST_DEV|REQ_FAILFAST_TRANSPORT| \
			     REQ_FAILFAST_DRIVER))
 #define blk_queue_quiesced(q)	test_bit(QUEUE_FLAG_QUIESCED, &(q)->queue_flags)
-#define blk_queue_preempt_only(q)				\
-	test_bit(QUEUE_FLAG_PREEMPT_ONLY, &(q)->queue_flags)
+#define blk_queue_pm_only(q)	atomic_read(&(q)->pm_only)
 #define blk_queue_fua(q)	test_bit(QUEUE_FLAG_FUA, &(q)->queue_flags)
 
-extern int blk_set_preempt_only(struct request_queue *q);
-extern void blk_clear_preempt_only(struct request_queue *q);
+extern void blk_set_pm_only(struct request_queue *q);
+extern void blk_clear_pm_only(struct request_queue *q);
 
 static inline int queue_in_flight(struct request_queue *q)
 {
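For illustration only, this sketch is not part of the patch: what the counter semantics buy over the old flag. Because blk_set_pm_only() increments a counter instead of setting a single bit, two contexts can independently enter and leave "pm-only" mode, and normal request allocation only resumes after the last caller invokes blk_clear_pm_only(). The function name is hypothetical and error handling is trimmed.

#include <linux/blkdev.h>
#include <linux/blk-mq.h>
#include <linux/err.h>

/* Hypothetical illustration of nested pm-only sections on one queue. */
static void my_pm_only_demo(struct request_queue *q)
{
	struct request *rq;

	blk_set_pm_only(q);		/* first caller: counter 0 -> 1 */
	blk_set_pm_only(q);		/* second caller: counter 1 -> 2 */

	/*
	 * While blk_queue_pm_only(q) is non-zero, only allocations that pass
	 * BLK_MQ_REQ_PREEMPT get through blk_queue_enter(); everything else
	 * waits.
	 */
	rq = blk_get_request(q, REQ_OP_DRV_IN, BLK_MQ_REQ_PREEMPT);
	if (!IS_ERR(rq))
		blk_put_request(rq);

	blk_clear_pm_only(q);		/* counter 2 -> 1: still pm-only */
	blk_clear_pm_only(q);		/* counter 1 -> 0: wakes up normal allocators */
}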
From patchwork Tue Sep 18 20:59:00 2018
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10604877
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche, "Martin K. Petersen", Ming Lei, Jianchao Wang, Hannes Reinecke, Johannes Thumshirn, Alan Stern
Subject: [PATCH v8 5/8] block: Split blk_pm_add_request() and blk_pm_put_request()
Date: Tue, 18 Sep 2018 13:59:00 -0700
Message-Id: <20180918205903.15516-6-bvanassche@acm.org>
In-Reply-To: <20180918205903.15516-1-bvanassche@acm.org>
References: <20180918205903.15516-1-bvanassche@acm.org>

Move the pm_request_resume() and pm_runtime_mark_last_busy() calls into two
new functions and thereby separate legacy block layer code from code that
works for both the legacy block layer and blk-mq. A later patch will add
calls to the new functions in the blk-mq code.

Signed-off-by: Bart Van Assche
Cc: Martin K. Petersen
Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Jianchao Wang
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Alan Stern
---
 block/blk-core.c |  1 +
 block/blk-pm.h   | 36 +++++++++++++++++++++++++++++++-----
 block/elevator.c |  1 +
 3 files changed, 33 insertions(+), 5 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 1a691f5269bb..fd91e9bf2893 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1744,6 +1744,7 @@ void __blk_put_request(struct request_queue *q, struct request *req)
 	blk_req_zone_write_unlock(req);
 
 	blk_pm_put_request(req);
+	blk_pm_mark_last_busy(req);
 
 	elv_completed_request(q, req);
 
diff --git a/block/blk-pm.h b/block/blk-pm.h
index 1ffc8ef203ec..a8564ea72a41 100644
--- a/block/blk-pm.h
+++ b/block/blk-pm.h
@@ -6,8 +6,23 @@
 #include <linux/pm_runtime.h>
 
 #ifdef CONFIG_PM
+static inline void blk_pm_request_resume(struct request_queue *q)
+{
+	if (q->dev && (q->rpm_status == RPM_SUSPENDED ||
+		       q->rpm_status == RPM_SUSPENDING))
+		pm_request_resume(q->dev);
+}
+
+static inline void blk_pm_mark_last_busy(struct request *rq)
+{
+	if (rq->q->dev && !(rq->rq_flags & RQF_PM))
+		pm_runtime_mark_last_busy(rq->q->dev);
+}
+
 static inline void blk_pm_requeue_request(struct request *rq)
 {
+	lockdep_assert_held(rq->q->queue_lock);
+
 	if (rq->q->dev && !(rq->rq_flags & RQF_PM))
 		rq->q->nr_pending--;
 }
@@ -15,17 +30,28 @@ static inline void blk_pm_requeue_request(struct request *rq)
 static inline void blk_pm_add_request(struct request_queue *q,
 				      struct request *rq)
 {
-	if (q->dev && !(rq->rq_flags & RQF_PM) && q->nr_pending++ == 0 &&
-	    (q->rpm_status == RPM_SUSPENDED || q->rpm_status == RPM_SUSPENDING))
-		pm_request_resume(q->dev);
+	lockdep_assert_held(q->queue_lock);
+
+	if (q->dev && !(rq->rq_flags & RQF_PM))
+		q->nr_pending++;
 }
 
 static inline void blk_pm_put_request(struct request *rq)
 {
-	if (rq->q->dev && !(rq->rq_flags & RQF_PM) && !--rq->q->nr_pending)
-		pm_runtime_mark_last_busy(rq->q->dev);
+	lockdep_assert_held(rq->q->queue_lock);
+
+	if (rq->q->dev && !(rq->rq_flags & RQF_PM))
+		--rq->q->nr_pending;
 }
 #else
+static inline void blk_pm_request_resume(struct request_queue *q)
+{
+}
+
+static inline void blk_pm_mark_last_busy(struct request *rq)
+{
+}
+
 static inline void blk_pm_requeue_request(struct request *rq)
 {
 }
diff --git a/block/elevator.c b/block/elevator.c
index e18ac68626e3..1c992bf6cfb1 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -601,6 +601,7 @@ void __elv_add_request(struct request_queue *q, struct request *rq, int where)
 	trace_block_rq_insert(q, rq);
 
 	blk_pm_add_request(q, rq);
+	blk_pm_request_resume(q);
 
 	rq->q = q;
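A rough way to picture the split made by this patch, outside the kernel: request accounting and resume scheduling become two independent helpers, so a queue type that has no nr_pending counter can still reuse the resume part on its own. This is an illustrative sketch only; the struct and helper names are stand-ins, not the kernel's.

/*
 * Illustrative model of splitting accounting from resume scheduling.
 */
#include <stdbool.h>
#include <stdio.h>

enum rpm_state { RPM_ACTIVE, RPM_RESUMING, RPM_SUSPENDING, RPM_SUSPENDED };

struct model_queue {
	bool has_dev;		/* queue is backed by a runtime-PM capable device */
	enum rpm_state rpm_status;
	int nr_pending;		/* legacy request counter */
};

/* Accounting only: count a newly added non-PM request. */
static void model_pm_add_request(struct model_queue *q, bool rq_is_pm)
{
	if (q->has_dev && !rq_is_pm)
		q->nr_pending++;
}

/* Resume scheduling only: wake the device if it is asleep or going to sleep. */
static void model_pm_request_resume(struct model_queue *q)
{
	if (q->has_dev && (q->rpm_status == RPM_SUSPENDED ||
			   q->rpm_status == RPM_SUSPENDING))
		printf("pm_request_resume() would be called here\n");
}

int main(void)
{
	struct model_queue q = { .has_dev = true, .rpm_status = RPM_SUSPENDED };

	model_pm_add_request(&q, false);	/* accounting */
	model_pm_request_resume(&q);		/* resume trigger */
	printf("nr_pending = %d\n", q.nr_pending);
	return 0;
}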
From patchwork Tue Sep 18 20:59:01 2018
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10604881
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche, Ming Lei,
    Jianchao Wang, Hannes Reinecke, Johannes Thumshirn, Alan Stern
Subject: [PATCH v8 6/8] block: Schedule runtime resume earlier
Date: Tue, 18 Sep 2018 13:59:01 -0700
Message-Id: <20180918205903.15516-7-bvanassche@acm.org>
In-Reply-To: <20180918205903.15516-1-bvanassche@acm.org>
References: <20180918205903.15516-1-bvanassche@acm.org>
Instead of scheduling runtime resume of a request queue after a request has
been queued, schedule asynchronous resume during request allocation. The new
pm_request_resume() calls occur after blk_queue_enter() has increased the
q_usage_counter request queue member. This change is needed for a later patch
that will make request allocation block while the queue status is not
RPM_ACTIVE.

Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Jianchao Wang
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Alan Stern
---
 block/blk-core.c | 2 ++
 block/elevator.c | 1 -
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index fd91e9bf2893..18b874d5c9c9 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -942,6 +942,8 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
 		if (success)
 			return 0;
 
+		blk_pm_request_resume(q);
+
 		if (flags & BLK_MQ_REQ_NOWAIT)
 			return -EBUSY;
 
diff --git a/block/elevator.c b/block/elevator.c
index 1c992bf6cfb1..e18ac68626e3 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -601,7 +601,6 @@ void __elv_add_request(struct request_queue *q, struct request *rq, int where)
 	trace_block_rq_insert(q, rq);
 
 	blk_pm_add_request(q, rq);
-	blk_pm_request_resume(q);
 
 	rq->q = q;
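The ordering change can be sketched as follows. This is a toy model, not the kernel code: the point is only that the resume request is now issued from the allocation slow path, after the fast-path attempt to take a queue reference has failed and before the caller blocks or returns -EBUSY. All names are illustrative.

/*
 * Illustrative model of where the resume request sits in the allocation path.
 */
#include <stdbool.h>
#include <stdio.h>

struct model_queue {
	bool usable;		/* stand-in for "the queue reference can be taken" */
	bool device_asleep;
};

static bool model_try_enter(struct model_queue *q)
{
	return q->usable;	/* fast path: no PM interaction at all */
}

static void model_pm_request_resume(struct model_queue *q)
{
	if (q->device_asleep)
		printf("async resume requested\n");
}

static int model_queue_enter(struct model_queue *q, bool nowait)
{
	if (model_try_enter(q))
		return 0;

	/* Slow path: schedule a resume before blocking or giving up. */
	model_pm_request_resume(q);

	if (nowait)
		return -16;	/* -EBUSY */
	return 1;		/* the caller would sleep here until the queue is usable */
}

int main(void)
{
	struct model_queue q = { .usable = false, .device_asleep = true };

	printf("enter() -> %d\n", model_queue_enter(&q, true));
	return 0;
}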
From patchwork Tue Sep 18 20:59:02 2018
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10604883
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche, Ming Lei,
    Jianchao Wang, Hannes Reinecke, Johannes Thumshirn, Alan Stern
Subject: [PATCH v8 7/8] block: Make blk_get_request() block for non-PM requests while suspended
Date: Tue, 18 Sep 2018 13:59:02 -0700
Message-Id: <20180918205903.15516-8-bvanassche@acm.org>
In-Reply-To: <20180918205903.15516-1-bvanassche@acm.org>
References: <20180918205903.15516-1-bvanassche@acm.org>

Instead of allowing requests that are not power management requests to enter
the queue in runtime suspended status (RPM_SUSPENDED), make the
blk_get_request() caller block.
This change fixes a starvation issue: it is now guaranteed that power
management requests will be executed no matter how many blk_get_request()
callers are waiting. For blk-mq, instead of maintaining the q->nr_pending
counter, rely on q->q_usage_counter. Call pm_runtime_mark_last_busy() every
time a request finishes instead of only if the queue depth drops to zero.

Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Jianchao Wang
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Alan Stern
---
 block/blk-core.c | 37 ++++++-------------------
 block/blk-pm.c   | 70 ++++++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 73 insertions(+), 34 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 18b874d5c9c9..ae092ca121d5 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -2747,30 +2747,6 @@ void blk_account_io_done(struct request *req, u64 now)
 	}
 }
 
-#ifdef CONFIG_PM
-/*
- * Don't process normal requests when queue is suspended
- * or in the process of suspending/resuming
- */
-static bool blk_pm_allow_request(struct request *rq)
-{
-	switch (rq->q->rpm_status) {
-	case RPM_RESUMING:
-	case RPM_SUSPENDING:
-		return rq->rq_flags & RQF_PM;
-	case RPM_SUSPENDED:
-		return false;
-	default:
-		return true;
-	}
-}
-#else
-static bool blk_pm_allow_request(struct request *rq)
-{
-	return true;
-}
-#endif
-
 void blk_account_io_start(struct request *rq, bool new_io)
 {
 	struct hd_struct *part;
@@ -2816,11 +2792,14 @@ static struct request *elv_next_request(struct request_queue *q)
 
 	while (1) {
 		list_for_each_entry(rq, &q->queue_head, queuelist) {
-			if (blk_pm_allow_request(rq))
-				return rq;
-
-			if (rq->rq_flags & RQF_SOFTBARRIER)
-				break;
+#ifdef CONFIG_PM
+			/*
+			 * If a request gets queued in state RPM_SUSPENDED
+			 * then that's a kernel bug.
+			 */
+			WARN_ON_ONCE(q->rpm_status == RPM_SUSPENDED);
+#endif
+			return rq;
 		}
 
 		/*
diff --git a/block/blk-pm.c b/block/blk-pm.c
index 9b636960d285..dc9bc45b0db5 100644
--- a/block/blk-pm.c
+++ b/block/blk-pm.c
@@ -1,8 +1,11 @@
 // SPDX-License-Identifier: GPL-2.0
 
+#include <linux/blk-mq.h>
 #include <linux/blk-pm.h>
 #include <linux/blkdev.h>
 #include <linux/pm_runtime.h>
+#include "blk-mq.h"
+#include "blk-mq-tag.h"
 
 /**
  * blk_pm_runtime_init - Block layer runtime PM initialization routine
@@ -40,6 +43,34 @@ void blk_pm_runtime_init(struct request_queue *q, struct device *dev)
 }
 EXPORT_SYMBOL(blk_pm_runtime_init);
 
+struct in_flight_data {
+	struct request_queue	*q;
+	int			in_flight;
+};
+
+static void blk_count_in_flight(struct blk_mq_hw_ctx *hctx, struct request *rq,
+				void *priv, bool reserved)
+{
+	struct in_flight_data *in_flight = priv;
+
+	if (rq->q == in_flight->q)
+		in_flight->in_flight++;
+}
+
+/*
+ * Count the number of requests that are in flight for request queue @q. Use
+ * @q->nr_pending for legacy queues. Iterate over the tag set for blk-mq
+ * queues.
+ */
+static int blk_requests_in_flight(struct request_queue *q)
+{
+	struct in_flight_data in_flight = { .q = q };
+
+	if (q->mq_ops)
+		blk_mq_queue_rq_iter(q, blk_count_in_flight, &in_flight);
+	return q->nr_pending + in_flight.in_flight;
+}
+
 /**
  * blk_pre_runtime_suspend - Pre runtime suspend check
  * @q: the queue of the device
@@ -68,14 +99,38 @@ int blk_pre_runtime_suspend(struct request_queue *q)
 	if (!q->dev)
 		return ret;
 
+	WARN_ON_ONCE(q->rpm_status != RPM_ACTIVE);
+
+	blk_set_pm_only(q);
+	/*
+	 * This function only gets called if the most recent request activity
+	 * occurred at least autosuspend_delay_ms ago.
+	 * Since blk_queue_enter() is called by the request allocation code
+	 * before pm_request_resume(), if no requests have a tag assigned it
+	 * is safe to suspend the device.
+	 */
+	ret = -EBUSY;
+	if (blk_requests_in_flight(q) == 0) {
+		/*
+		 * Call synchronize_rcu() such that later blk_queue_enter()
+		 * calls see the pm-only state. See also
+		 * http://lwn.net/Articles/573497/.
+		 */
+		synchronize_rcu();
+		if (blk_requests_in_flight(q) == 0)
+			ret = 0;
+	}
+
 	spin_lock_irq(q->queue_lock);
-	if (q->nr_pending) {
-		ret = -EBUSY;
+	if (ret < 0)
 		pm_runtime_mark_last_busy(q->dev);
-	} else {
+	else
 		q->rpm_status = RPM_SUSPENDING;
-	}
 	spin_unlock_irq(q->queue_lock);
+
+	if (ret)
+		blk_clear_pm_only(q);
+
 	return ret;
 }
 EXPORT_SYMBOL(blk_pre_runtime_suspend);
@@ -106,6 +161,9 @@ void blk_post_runtime_suspend(struct request_queue *q, int err)
 		pm_runtime_mark_last_busy(q->dev);
 	}
 	spin_unlock_irq(q->queue_lock);
+
+	if (err)
+		blk_clear_pm_only(q);
 }
 EXPORT_SYMBOL(blk_post_runtime_suspend);
@@ -153,13 +211,15 @@ void blk_post_runtime_resume(struct request_queue *q, int err)
 	spin_lock_irq(q->queue_lock);
 	if (!err) {
 		q->rpm_status = RPM_ACTIVE;
-		__blk_run_queue(q);
 		pm_runtime_mark_last_busy(q->dev);
 		pm_request_autosuspend(q->dev);
 	} else {
 		q->rpm_status = RPM_SUSPENDED;
 	}
 	spin_unlock_irq(q->queue_lock);
+
+	if (!err)
+		blk_clear_pm_only(q);
 }
 EXPORT_SYMBOL(blk_post_runtime_resume);
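The suspend-side handshake added above can be summarized by the following stand-alone sketch: mark the queue pm-only, wait for concurrent enter paths to observe that state, then re-check the in-flight count before allowing the suspend. The barrier and counter below are simplified stand-ins for synchronize_rcu() and blk_requests_in_flight(), and the names are illustrative. If either check sees outstanding requests, the gate is dropped again and -EBUSY is returned, which mirrors the error path in blk_pre_runtime_suspend() above.

/*
 * Illustrative model of the pre-runtime-suspend handshake.
 */
#include <stdio.h>

struct model_queue {
	int pm_only;
	int in_flight;
};

static void model_wait_for_enter_paths(void)
{
	/* stand-in for synchronize_rcu(): all enter paths now see pm_only */
}

static int model_pre_runtime_suspend(struct model_queue *q)
{
	int ret = -16;			/* -EBUSY */

	q->pm_only++;			/* new normal requests now block */
	if (q->in_flight == 0) {
		model_wait_for_enter_paths();
		if (q->in_flight == 0)	/* re-check after the barrier */
			ret = 0;
	}
	if (ret)
		q->pm_only--;		/* suspend aborted: reopen the queue */
	return ret;
}

int main(void)
{
	struct model_queue q = { 0 };

	printf("suspend allowed: %d\n", model_pre_runtime_suspend(&q) == 0);
	q.in_flight = 1;
	printf("suspend allowed: %d\n", model_pre_runtime_suspend(&q) == 0);
	return 0;
}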
From patchwork Tue Sep 18 20:59:03 2018
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10604885
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche, Ming Lei,
    Jianchao Wang, Hannes Reinecke, Johannes Thumshirn, Alan Stern
Subject: [PATCH v8 8/8] blk-mq: Enable support for runtime power management
Date: Tue, 18 Sep 2018 13:59:03 -0700
Message-Id: <20180918205903.15516-9-bvanassche@acm.org>
In-Reply-To: <20180918205903.15516-1-bvanassche@acm.org>
References: <20180918205903.15516-1-bvanassche@acm.org>

Now that the blk-mq core processes power management requests (marked with
RQF_PREEMPT) in other states than RPM_ACTIVE, enable runtime power management
for blk-mq.
Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Jianchao Wang
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Alan Stern
---
 block/blk-mq.c | 2 ++
 block/blk-pm.c | 6 ------
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 85a1c1a59c72..20fdd78b75c7 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -33,6 +33,7 @@
 #include "blk-mq.h"
 #include "blk-mq-debugfs.h"
 #include "blk-mq-tag.h"
+#include "blk-pm.h"
 #include "blk-stat.h"
 #include "blk-mq-sched.h"
 #include "blk-rq-qos.h"
@@ -475,6 +476,7 @@ static void __blk_mq_free_request(struct request *rq)
 	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu);
 	const int sched_tag = rq->internal_tag;
 
+	blk_pm_mark_last_busy(rq);
 	if (rq->tag != -1)
 		blk_mq_put_tag(hctx, hctx->tags, ctx, rq->tag);
 	if (sched_tag != -1)
diff --git a/block/blk-pm.c b/block/blk-pm.c
index dc9bc45b0db5..2a7a9c0f5b99 100644
--- a/block/blk-pm.c
+++ b/block/blk-pm.c
@@ -30,12 +30,6 @@
  */
 void blk_pm_runtime_init(struct request_queue *q, struct device *dev)
 {
-	/* Don't enable runtime PM for blk-mq until it is ready */
-	if (q->mq_ops) {
-		pm_runtime_disable(dev);
-		return;
-	}
-
 	q->dev = dev;
 	q->rpm_status = RPM_ACTIVE;
 	pm_runtime_set_autosuspend_delay(q->dev, -1);
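One consequence of dropping the nr_pending bookkeeping, visible in the __blk_mq_free_request() hunk above, is that the autosuspend timer is now refreshed on every request free rather than only when a pending count reaches zero. The sketch below models just that idea; the names are illustrative and the timestamp handling is deliberately simplified.

/*
 * Illustrative model of refreshing the "last busy" stamp on every free.
 */
#include <stdio.h>
#include <time.h>

struct model_dev {
	time_t last_busy;	/* stand-in for the runtime-PM last_busy timestamp */
};

static void model_mark_last_busy(struct model_dev *dev)
{
	dev->last_busy = time(NULL);	/* autosuspend delay restarts from here */
}

/* Called from the request free path in this model. */
static void model_free_request(struct model_dev *dev)
{
	model_mark_last_busy(dev);
	/* ... release tags, complete the request, etc. ... */
}

int main(void)
{
	struct model_dev dev = { 0 };

	model_free_request(&dev);
	printf("last_busy = %ld\n", (long)dev.last_busy);
	return 0;
}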