From patchwork Wed Jan 10 18:18:15 2018
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10155823
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, "Martin K. Petersen", Christoph Hellwig,
    Hannes Reinecke, Jason Gunthorpe, Doug Ledford, linux-scsi@vger.kernel.org,
    linux-rdma@vger.kernel.org, Bart Van Assche, Hannes Reinecke,
    Johannes Thumshirn, Ming Lei
Subject: [PATCH v2 2/4] block: Introduce blk_start_wait_if_quiesced() and blk_finish_wait_if_quiesced()
Date: Wed, 10 Jan 2018 10:18:15 -0800
Message-Id: <20180110181817.25166-3-bart.vanassche@wdc.com>
In-Reply-To: <20180110181817.25166-1-bart.vanassche@wdc.com>
References: <20180110181817.25166-1-bart.vanassche@wdc.com>

Introduce functions that allow block drivers to wait while a request
queue is in the quiesced state (blk-mq) or in the stopped state (legacy
block layer).
The next patch will add calls to these functions in the SCSI core.

Signed-off-by: Bart Van Assche
Cc: Martin K. Petersen
Cc: Christoph Hellwig
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Ming Lei
---
 block/blk-core.c       |  1 +
 block/blk-mq.c         | 64 ++++++++++++++++++++++++++++++++++++++++++++++++++
 include/linux/blk-mq.h |  2 ++
 3 files changed, 67 insertions(+)

diff --git a/block/blk-core.c b/block/blk-core.c
index c10b4ce95248..06eaea15bae9 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -287,6 +287,7 @@ void blk_start_queue(struct request_queue *q)
 	WARN_ON_ONCE(q->mq_ops);
 
 	queue_flag_clear(QUEUE_FLAG_STOPPED, q);
+	wake_up_all(&q->mq_wq);
 	__blk_run_queue(q);
 }
 EXPORT_SYMBOL(blk_start_queue);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index a05ea7e9b415..87455977ad34 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -247,11 +247,75 @@ void blk_mq_unquiesce_queue(struct request_queue *q)
 	queue_flag_clear(QUEUE_FLAG_QUIESCED, q);
 	spin_unlock_irqrestore(q->queue_lock, flags);
 
+	wake_up_all(&q->mq_wq);
+
 	/* dispatch requests which are inserted during quiescing */
 	blk_mq_run_hw_queues(q, true);
 }
 EXPORT_SYMBOL_GPL(blk_mq_unquiesce_queue);
 
+/**
+ * blk_start_wait_if_quiesced() - wait if a queue is quiesced (blk-mq) or stopped (legacy block layer)
+ * @q: Request queue pointer.
+ *
+ * Some block drivers, e.g. the SCSI core, can bypass the block layer core
+ * request submission mechanism. Surround such code with
+ * blk_start_wait_if_quiesced() / blk_finish_wait_if_quiesced() to avoid that
+ * request submission can happen while a queue is quiesced or stopped.
+ *
+ * Returns with the RCU read lock held (blk-mq) or with q->queue_lock held
+ * (legacy block layer).
+ *
+ * Notes:
+ * - Every call of this function must be followed by a call of
+ *   blk_finish_wait_if_quiesced().
+ * - This function does not support block drivers whose .queue_rq()
+ *   implementation can sleep (BLK_MQ_F_BLOCKING).
+ */
+int blk_start_wait_if_quiesced(struct request_queue *q)
+{
+	struct blk_mq_hw_ctx *hctx;
+	unsigned int i;
+
+	might_sleep();
+
+	if (q->mq_ops) {
+		queue_for_each_hw_ctx(q, hctx, i)
+			WARN_ON(hctx->flags & BLK_MQ_F_BLOCKING);
+
+		rcu_read_lock();
+		while (!blk_queue_dying(q) && blk_queue_quiesced(q)) {
+			rcu_read_unlock();
+			wait_event(q->mq_wq, blk_queue_dying(q) ||
+				   !blk_queue_quiesced(q));
+			rcu_read_lock();
+		}
+	} else {
+		spin_lock_irq(q->queue_lock);
+		wait_event_lock_irq(q->mq_wq,
+				    blk_queue_dying(q) || !blk_queue_stopped(q),
+				    *q->queue_lock);
+		q->request_fn_active++;
+	}
+	return blk_queue_dying(q) ? -ENODEV : 0;
+}
+EXPORT_SYMBOL(blk_start_wait_if_quiesced);
+
+/**
+ * blk_finish_wait_if_quiesced() - counterpart of blk_start_wait_if_quiesced()
+ * @q: Request queue pointer.
+ */
+void blk_finish_wait_if_quiesced(struct request_queue *q)
+{
+	if (q->mq_ops) {
+		rcu_read_unlock();
+	} else {
+		q->request_fn_active--;
+		spin_unlock_irq(q->queue_lock);
+	}
+}
+EXPORT_SYMBOL(blk_finish_wait_if_quiesced);
+
 void blk_mq_wake_waiters(struct request_queue *q)
 {
 	struct blk_mq_hw_ctx *hctx;
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 8efcf49796a3..15912cd348b5 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -267,6 +267,8 @@ void blk_mq_start_stopped_hw_queue(struct blk_mq_hw_ctx *hctx, bool async);
 void blk_mq_start_stopped_hw_queues(struct request_queue *q, bool async);
 void blk_mq_quiesce_queue(struct request_queue *q);
 void blk_mq_unquiesce_queue(struct request_queue *q);
+int blk_start_wait_if_quiesced(struct request_queue *q);
+void blk_finish_wait_if_quiesced(struct request_queue *q);
 void blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs);
 bool blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async);
 void blk_mq_run_hw_queues(struct request_queue *q, bool async);
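
For illustration only, a minimal sketch of how a driver-level bypass
submission path could be wrapped with the new pair of calls. This is not
part of the patch; the real caller is added to the SCSI core in the next
patch of this series, and the names my_bypass_submit() / my_direct_dispatch()
below are hypothetical.

static int my_bypass_submit(struct request_queue *q, struct request *rq)
{
	int ret;

	/*
	 * Sleep until the queue is neither quiesced (blk-mq) nor stopped
	 * (legacy block layer). Returns -ENODEV if the queue is dying.
	 */
	ret = blk_start_wait_if_quiesced(q);
	if (ret)
		return ret;

	/*
	 * At this point either the RCU read lock (blk-mq) or q->queue_lock
	 * (legacy) is held, so this section must not sleep. The hypothetical
	 * my_direct_dispatch() stands for code that bypasses the normal
	 * block layer request submission mechanism.
	 */
	ret = my_direct_dispatch(q, rq);

	blk_finish_wait_if_quiesced(q);
	return ret;
}

Because the RCU read side (blk-mq) or q->queue_lock plus the elevated
request_fn_active count (legacy) is held between the two calls, a concurrent
blk_mq_quiesce_queue() or legacy queue drain should not be able to complete
until the bypass dispatch above has finished.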