From patchwork Wed Jul 25 22:26:05 2018
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10544877
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche,
    Ming Lei, Johannes Thumshirn, Alan Stern
Subject: [PATCH v2 3/5] block: Serialize queue freezing and blk_pre_runtime_suspend()
Date: Wed, 25 Jul 2018 15:26:05 -0700
Message-Id: <20180725222607.8854-4-bart.vanassche@wdc.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180725222607.8854-1-bart.vanassche@wdc.com>
References: <20180725222607.8854-1-bart.vanassche@wdc.com>

Serialize these operations because the next patch will add code into
blk_pre_runtime_suspend() that should not run concurrently with queue
freezing or unfreezing.
Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Johannes Thumshirn
Cc: Alan Stern
---
 block/blk-core.c       |  5 +++++
 block/blk-mq.c         |  3 +++
 block/blk-pm.c         | 40 ++++++++++++++++++++++++++++++++++++++++
 include/linux/blk-pm.h |  9 +++++++++
 include/linux/blkdev.h |  4 ++++
 5 files changed, 61 insertions(+)

diff --git a/block/blk-core.c b/block/blk-core.c
index 14c28197ea42..feac2b4d3b90 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include <linux/blk-pm.h>
 #include
 #include
 #include
@@ -694,6 +695,7 @@ void blk_set_queue_dying(struct request_queue *q)
	 * prevent I/O from crossing blk_queue_enter().
	 */
	blk_freeze_queue_start(q);
+	blk_pm_runtime_unlock(q);

	if (q->mq_ops)
		blk_mq_wake_waiters(q);
@@ -754,6 +756,7 @@ void blk_cleanup_queue(struct request_queue *q)
	 * prevent that q->request_fn() gets invoked after draining finished.
	 */
	blk_freeze_queue(q);
+	blk_pm_runtime_unlock(q);
	spin_lock_irq(lock);
	queue_flag_set(QUEUE_FLAG_DEAD, q);
	spin_unlock_irq(lock);
@@ -1043,6 +1046,8 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id,
 #ifdef CONFIG_BLK_DEV_IO_TRACE
	mutex_init(&q->blk_trace_mutex);
 #endif
+	blk_pm_init(q);
+
	mutex_init(&q->sysfs_lock);
	spin_lock_init(&q->__queue_lock);

diff --git a/block/blk-mq.c b/block/blk-mq.c
index c92ce06fd565..8d845872ea02 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include <linux/blk-pm.h>
 #include
 #include
 #include
@@ -138,6 +139,7 @@ void blk_freeze_queue_start(struct request_queue *q)
 {
	int freeze_depth;

+	blk_pm_runtime_lock(q);
	freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
	if (freeze_depth == 1) {
		percpu_ref_kill(&q->q_usage_counter);
@@ -201,6 +203,7 @@ void blk_mq_unfreeze_queue(struct request_queue *q)
		percpu_ref_reinit(&q->q_usage_counter);
		wake_up_all(&q->mq_freeze_wq);
	}
+	blk_pm_runtime_unlock(q);
 }
 EXPORT_SYMBOL_GPL(blk_mq_unfreeze_queue);

diff --git a/block/blk-pm.c b/block/blk-pm.c
index 08d7222d4757..7dc9375a2f46 100644
--- a/block/blk-pm.c
+++ b/block/blk-pm.c
@@ -3,6 +3,41 @@
 #include
 #include
 #include
+#include
+
+/*
+ * Initialize the request queue members used by blk_pm_runtime_lock() and
+ * blk_pm_runtime_unlock().
+ */
+void blk_pm_init(struct request_queue *q)
+{
+	spin_lock_init(&q->rpm_lock);
+	init_waitqueue_head(&q->rpm_wq);
+	q->rpm_owner = NULL;
+	q->rpm_nesting_level = 0;
+}
+
+void blk_pm_runtime_lock(struct request_queue *q)
+{
+	spin_lock(&q->rpm_lock);
+	wait_event_interruptible_locked(q->rpm_wq,
+		q->rpm_owner == NULL || q->rpm_owner == current);
+	if (q->rpm_owner == NULL)
+		q->rpm_owner = current;
+	q->rpm_nesting_level++;
+	spin_unlock(&q->rpm_lock);
+}
+
+void blk_pm_runtime_unlock(struct request_queue *q)
+{
+	spin_lock(&q->rpm_lock);
+	WARN_ON_ONCE(q->rpm_nesting_level <= 0);
+	if (--q->rpm_nesting_level == 0) {
+		q->rpm_owner = NULL;
+		wake_up(&q->rpm_wq);
+	}
+	spin_unlock(&q->rpm_lock);
+}

 /**
  * blk_pm_runtime_init - Block layer runtime PM initialization routine
@@ -66,6 +101,8 @@ int blk_pre_runtime_suspend(struct request_queue *q)
	if (!q->dev)
		return ret;

+	blk_pm_runtime_lock(q);
+
	spin_lock_irq(q->queue_lock);
	if (q->nr_pending) {
		ret = -EBUSY;
@@ -74,6 +111,9 @@
		q->rpm_status = RPM_SUSPENDING;
	}
	spin_unlock_irq(q->queue_lock);
+
+	blk_pm_runtime_unlock(q);
+
	return ret;
 }
 EXPORT_SYMBOL(blk_pre_runtime_suspend);

diff --git a/include/linux/blk-pm.h b/include/linux/blk-pm.h
index fe3f4e8efbe9..aafcc7877e53 100644
--- a/include/linux/blk-pm.h
+++ b/include/linux/blk-pm.h
@@ -3,10 +3,16 @@
 #ifndef _BLK_PM_H_
 #define _BLK_PM_H_

+struct device;
+struct request_queue;
+
 /*
  * block layer runtime pm functions
  */
 #ifdef CONFIG_PM
+extern void blk_pm_init(struct request_queue *q);
+extern void blk_pm_runtime_lock(struct request_queue *q);
+extern void blk_pm_runtime_unlock(struct request_queue *q);
 extern void blk_pm_runtime_init(struct request_queue *q, struct device *dev);
 extern int blk_pre_runtime_suspend(struct request_queue *q);
 extern void blk_post_runtime_suspend(struct request_queue *q, int err);
@@ -14,6 +20,9 @@ extern void blk_pre_runtime_resume(struct request_queue *q);
 extern void blk_post_runtime_resume(struct request_queue *q, int err);
 extern void blk_set_runtime_active(struct request_queue *q);
 #else
+static inline void blk_pm_init(struct request_queue *q) {}
+static inline void blk_pm_runtime_lock(struct request_queue *q) {}
+static inline void blk_pm_runtime_unlock(struct request_queue *q) {}
 static inline void blk_pm_runtime_init(struct request_queue *q,
				       struct device *dev) {}
 #endif

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index a0cf2e352a31..3a8c20eafe58 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -544,6 +544,10 @@ struct request_queue {
	struct device		*dev;
	int			rpm_status;
	unsigned int		nr_pending;
+	spinlock_t		rpm_lock;
+	wait_queue_head_t	rpm_wq;
+	struct task_struct	*rpm_owner;
+	int			rpm_nesting_level;
 #endif

/*
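
[Editorial note appended for readers of this archive, not part of the patch.]
The rpm_owner/rpm_nesting_level pair added to struct request_queue implements
a recursive lock: the task that holds the lock may take it again (each
blk_pm_runtime_lock() call increments the nesting level), while every other
task sleeps on rpm_wq until the outermost blk_pm_runtime_unlock() clears the
owner. That is what allows the task that started a queue freeze to make
further nested lock/unlock calls without deadlocking against itself, while a
concurrent blk_pre_runtime_suspend() from another task must wait. Below is a
minimal user-space sketch of the same pattern, with a pthread mutex and
condition variable standing in for the kernel's spinlock and waitqueue; the
pm_lock type and function names are hypothetical and exist only for this
illustration.

/* Hypothetical user-space model of the rpm_owner/rpm_nesting_level
 * scheme above. Compile with: cc -pthread pm_lock_demo.c
 */
#include <pthread.h>
#include <stdio.h>

struct pm_lock {
	pthread_mutex_t	mutex;		/* stands in for q->rpm_lock */
	pthread_cond_t	wq;		/* stands in for q->rpm_wq */
	pthread_t	owner;		/* stands in for q->rpm_owner */
	int		has_owner;	/* pthread_t has no "NULL" value */
	int		nesting;	/* stands in for q->rpm_nesting_level */
};

static void pm_lock_init(struct pm_lock *l)
{
	pthread_mutex_init(&l->mutex, NULL);
	pthread_cond_init(&l->wq, NULL);
	l->has_owner = 0;
	l->nesting = 0;
}

/* Like blk_pm_runtime_lock(): sleep until the lock is free or already
 * held by the calling thread, then take it (or deepen the nesting).
 */
static void pm_lock_acquire(struct pm_lock *l)
{
	pthread_mutex_lock(&l->mutex);
	while (l->has_owner && !pthread_equal(l->owner, pthread_self()))
		pthread_cond_wait(&l->wq, &l->mutex);
	l->owner = pthread_self();
	l->has_owner = 1;
	l->nesting++;
	pthread_mutex_unlock(&l->mutex);
}

/* Like blk_pm_runtime_unlock(): drop one nesting level and wake the
 * waiters only when the outermost hold is released.
 */
static void pm_lock_release(struct pm_lock *l)
{
	pthread_mutex_lock(&l->mutex);
	if (--l->nesting == 0) {
		l->has_owner = 0;
		pthread_cond_broadcast(&l->wq);
	}
	pthread_mutex_unlock(&l->mutex);
}

int main(void)
{
	struct pm_lock l;

	pm_lock_init(&l);
	pm_lock_acquire(&l);	/* outer hold, like blk_freeze_queue_start() */
	pm_lock_acquire(&l);	/* nested hold by the same thread succeeds */
	printf("nesting level: %d\n", l.nesting);	/* prints 2 */
	pm_lock_release(&l);
	pm_lock_release(&l);	/* outermost release wakes any waiters */
	return 0;
}

One difference from the patch: the kernel side waits with
wait_event_interruptible_locked(), so a pending signal can end the wait
early, whereas this sketch always waits until the lock becomes free.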