From patchwork Sat Aug 11 07:12:20 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10563365
From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Ming Lei, Alan Stern, Christoph Hellwig,
	Bart Van Assche, Jianchao Wang, Hannes Reinecke, Johannes Thumshirn,
	Adrian Hunter, "James E.J. Bottomley", "Martin K. Petersen",
	linux-scsi@vger.kernel.org
Subject: [RFC PATCH V2 17/17] block: enable runtime PM for blk-mq
Date: Sat, 11 Aug 2018 15:12:20 +0800
Message-Id: <20180811071220.357-18-ming.lei@redhat.com>
In-Reply-To: <20180811071220.357-1-ming.lei@redhat.com>
References: <20180811071220.357-1-ming.lei@redhat.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Now blk-mq can borrow the runtime PM approach from the legacy path, so
simply enable it. The only differences from the legacy path are:

1) blk_mq_queue_sched_tag_busy_iter() is introduced for checking whether
the queue is idle, instead of maintaining a counter.

2) we have to iterate over the scheduler tags to count how many requests
have entered the queue, because the hw tags don't cover requests that
are allocated but not yet dispatched.

Cc: Alan Stern
Cc: Christoph Hellwig
Cc: Bart Van Assche
Cc: Jianchao Wang
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Adrian Hunter
Cc: "James E.J. Bottomley"
Cc: "Martin K. Petersen"
Cc: linux-scsi@vger.kernel.org
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-core.c   | 29 ++++++++++++++++++++++++-----
 block/blk-mq-tag.c | 21 +++++++++++++++++++--
 block/blk-mq-tag.h |  2 ++
 block/blk-mq.c     |  4 ++++
 4 files changed, 49 insertions(+), 7 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 939e1dae4ea8..f42197c9f7af 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -3751,11 +3751,8 @@ EXPORT_SYMBOL(blk_finish_plug);
  */
 void blk_pm_runtime_init(struct request_queue *q, struct device *dev)
 {
-	/* Don't enable runtime PM for blk-mq until it is ready */
-	if (q->mq_ops) {
-		pm_runtime_disable(dev);
+	if (WARN_ON_ONCE(blk_queue_admin(q)))
 		return;
-	}
 
 	q->dev = dev;
 	q->rpm_status = RPM_ACTIVE;
@@ -3764,6 +3761,23 @@ void blk_pm_runtime_init(struct request_queue *q, struct device *dev)
 }
 EXPORT_SYMBOL(blk_pm_runtime_init);
 
+static void blk_mq_pm_count_req(struct blk_mq_hw_ctx *hctx,
+		struct request *rq, void *priv, bool reserved)
+{
+	unsigned long *cnt = priv;
+
+	(*cnt)++;
+}
+
+static bool blk_mq_pm_queue_busy(struct request_queue *q)
+{
+	unsigned long cnt = 0;
+
+	blk_mq_queue_sched_tag_busy_iter(q, blk_mq_pm_count_req, &cnt);
+
+	return cnt > 0;
+}
+
 /**
  * blk_pre_runtime_suspend - Pre runtime suspend check
  * @q: the queue of the device
@@ -3788,12 +3802,17 @@ EXPORT_SYMBOL(blk_pm_runtime_init);
 int blk_pre_runtime_suspend(struct request_queue *q)
 {
 	int ret = 0;
+	bool busy = true;
 
 	if (!q->dev)
 		return ret;
 
+	if (q->mq_ops)
+		busy = blk_mq_pm_queue_busy(q);
+
 	spin_lock_irq(q->queue_lock);
-	if (q->nr_pending) {
+	busy = q->mq_ops ? busy : !!q->nr_pending;
+	if (busy) {
 		ret = -EBUSY;
 		pm_runtime_mark_last_busy(q->dev);
 	} else {
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 7cd09fd16f5a..0580f80fa350 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -316,8 +316,8 @@ void blk_mq_tagset_busy_iter(struct blk_mq_tag_set *tagset,
 }
 EXPORT_SYMBOL(blk_mq_tagset_busy_iter);
 
-void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
-		void *priv)
+static void __blk_mq_queue_tag_busy_iter(struct request_queue *q,
+		busy_iter_fn *fn, void *priv, bool sched_tag)
 {
 	struct blk_mq_hw_ctx *hctx;
 	int i;
@@ -326,6 +326,9 @@ void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
 	queue_for_each_hw_ctx(q, hctx, i) {
 		struct blk_mq_tags *tags = hctx->tags;
 
+		if (sched_tag && hctx->sched_tags)
+			tags = hctx->sched_tags;
+
 		/*
 		 * If not software queues are currently mapped to this
 		 * hardware queue, there's nothing to check
@@ -340,6 +343,20 @@ void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
 }
 
+void blk_mq_queue_tag_busy_iter(struct request_queue *q,
+		busy_iter_fn *fn, void *priv)
+{
+
+	__blk_mq_queue_tag_busy_iter(q, fn, priv, false);
+}
+
+void blk_mq_queue_sched_tag_busy_iter(struct request_queue *q,
+		busy_iter_fn *fn, void *priv)
+{
+
+	__blk_mq_queue_tag_busy_iter(q, fn, priv, true);
+}
+
 static int bt_alloc(struct sbitmap_queue *bt, unsigned int depth,
 		    bool round_robin, int node)
 {
diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h
index 61deab0b5a5a..5513c3eeab00 100644
--- a/block/blk-mq-tag.h
+++ b/block/blk-mq-tag.h
@@ -35,6 +35,8 @@ extern int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
 extern void blk_mq_tag_wakeup_all(struct blk_mq_tags *tags, bool);
 void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
 		void *priv);
+void blk_mq_queue_sched_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
+		void *priv);
 
 static inline struct sbq_wait_state *bt_wait_ptr(struct sbitmap_queue *bt,
 						 struct blk_mq_hw_ctx *hctx)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index aea121c41a30..b42a2c9ba00e 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -25,6 +25,7 @@
 #include <…>
 #include <…>
 #include <…>
+#include <linux/pm_runtime.h>
 
 #include <…>
@@ -503,6 +504,9 @@ static void __blk_mq_free_request(struct request *rq)
 	blk_mq_put_tag(hctx, hctx->sched_tags, ctx, sched_tag);
 	blk_mq_sched_restart(hctx);
 	blk_queue_exit(q);
+
+	if (q->dev)
+		pm_runtime_mark_last_busy(q->dev);
 }
 
 void blk_mq_free_request(struct request *rq)