From patchwork Mon Jul 31 16:51:08 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 9872557
From: Ming Lei
To: Jens Axboe, linux-block@vger.kernel.org, Christoph Hellwig
Cc: Bart Van Assche, linux-scsi@vger.kernel.org, "Martin K. Petersen",
	"James E. J. Bottomley", Ming Lei
Subject: [PATCH 11/14] blk-mq: introduce helpers for operating ->dispatch list
Date: Tue, 1 Aug 2017 00:51:08 +0800
Message-Id: <20170731165111.11536-13-ming.lei@redhat.com>
In-Reply-To: <20170731165111.11536-1-ming.lei@redhat.com>
References: <20170731165111.11536-1-ming.lei@redhat.com>

Signed-off-by: Ming Lei
---
 block/blk-mq-sched.c | 19 +++----------------
 block/blk-mq.c       | 18 +++++++++++-------
 block/blk-mq.h       | 44 ++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 58 insertions(+), 23 deletions(-)

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 112270961af0..8ff74efe4172 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -131,19 +131,8 @@ void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx)
 	 * If we have previous entries on our dispatch list, grab them first for
 	 * more fair dispatch.
 	 */
-	if (!list_empty_careful(&hctx->dispatch)) {
-		spin_lock(&hctx->lock);
-		if (!list_empty(&hctx->dispatch)) {
-			list_splice_init(&hctx->dispatch, &rq_list);
-
-			/*
-			 * BUSY won't be cleared until all requests
-			 * in hctx->dispatch are dispatched successfully
-			 */
-			blk_mq_hctx_set_busy(hctx);
-		}
-		spin_unlock(&hctx->lock);
-	}
+	if (blk_mq_has_dispatch_rqs(hctx))
+		blk_mq_take_list_from_dispatch(hctx, &rq_list);
 
 	/*
 	 * Only ask the scheduler for requests, if we didn't have residual
@@ -296,9 +285,7 @@ static bool blk_mq_sched_bypass_insert(struct blk_mq_hw_ctx *hctx,
 	 * If we already have a real request tag, send directly to
 	 * the dispatch list.
 	 */
-	spin_lock(&hctx->lock);
-	list_add(&rq->queuelist, &hctx->dispatch);
-	spin_unlock(&hctx->lock);
+	blk_mq_add_rq_to_dispatch(hctx, rq);
 
 	return true;
 }
diff --git a/block/blk-mq.c b/block/blk-mq.c
index db635ef06a72..785145f60c1d 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -63,7 +63,7 @@ static int blk_mq_poll_stats_bkt(const struct request *rq)
 bool blk_mq_hctx_has_pending(struct blk_mq_hw_ctx *hctx)
 {
 	return sbitmap_any_bit_set(&hctx->ctx_map) ||
-			!list_empty_careful(&hctx->dispatch) ||
+			blk_mq_has_dispatch_rqs(hctx) ||
 			blk_mq_sched_has_work(hctx);
 }
@@ -1097,9 +1097,7 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list)
 		rq = list_first_entry(list, struct request, queuelist);
 		blk_mq_put_driver_tag(rq);
 
-		spin_lock(&hctx->lock);
-		list_splice_init(list, &hctx->dispatch);
-		spin_unlock(&hctx->lock);
+		blk_mq_add_list_to_dispatch(hctx, list);
 
 		/*
 		 * If SCHED_RESTART was set by the caller of this function and
@@ -1874,9 +1872,7 @@ static int blk_mq_hctx_notify_dead(unsigned int cpu, struct hlist_node *node)
 	if (list_empty(&tmp))
 		return 0;
 
-	spin_lock(&hctx->lock);
-	list_splice_tail_init(&tmp, &hctx->dispatch);
-	spin_unlock(&hctx->lock);
+	blk_mq_add_list_to_dispatch_tail(hctx, &tmp);
 
 	blk_mq_run_hw_queue(hctx, true);
 	return 0;
@@ -1926,6 +1922,13 @@ static void blk_mq_exit_hw_queues(struct request_queue *q,
 	}
 }
 
+static void blk_mq_init_dispatch(struct request_queue *q,
+		struct blk_mq_hw_ctx *hctx)
+{
+	spin_lock_init(&hctx->lock);
+	INIT_LIST_HEAD(&hctx->dispatch);
+}
+
 static int blk_mq_init_hctx(struct request_queue *q,
 		struct blk_mq_tag_set *set,
 		struct blk_mq_hw_ctx *hctx, unsigned hctx_idx)
@@ -1939,6 +1942,7 @@ static int blk_mq_init_hctx(struct request_queue *q,
 	INIT_DELAYED_WORK(&hctx->run_work, blk_mq_run_work_fn);
 	spin_lock_init(&hctx->lock);
 	INIT_LIST_HEAD(&hctx->dispatch);
+	blk_mq_init_dispatch(q, hctx);
 
 	hctx->queue = q;
 	hctx->flags = set->flags & ~BLK_MQ_F_TAG_SHARED;
diff --git a/block/blk-mq.h b/block/blk-mq.h
index d9f875093613..2ed355881996 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -150,4 +150,48 @@ static inline void blk_mq_hctx_clear_busy(struct blk_mq_hw_ctx *hctx)
 	clear_bit(BLK_MQ_S_BUSY, &hctx->state);
 }
 
+static inline bool blk_mq_has_dispatch_rqs(struct blk_mq_hw_ctx *hctx)
+{
+	return !list_empty_careful(&hctx->dispatch);
+}
+
+static inline void blk_mq_add_rq_to_dispatch(struct blk_mq_hw_ctx *hctx,
+		struct request *rq)
+{
+	spin_lock(&hctx->lock);
+	list_add(&rq->queuelist, &hctx->dispatch);
+	spin_unlock(&hctx->lock);
+}
+
+static inline void blk_mq_add_list_to_dispatch(struct blk_mq_hw_ctx *hctx,
+		struct list_head *list)
+{
+	spin_lock(&hctx->lock);
+	list_splice_init(list, &hctx->dispatch);
+	spin_unlock(&hctx->lock);
+}
+
+static inline void blk_mq_add_list_to_dispatch_tail(struct blk_mq_hw_ctx *hctx,
+		struct list_head *list)
+{
+	spin_lock(&hctx->lock);
+	list_splice_tail_init(list, &hctx->dispatch);
+	spin_unlock(&hctx->lock);
+}
+
+static inline void blk_mq_take_list_from_dispatch(struct blk_mq_hw_ctx *hctx,
+		struct list_head *list)
+{
+	spin_lock(&hctx->lock);
+	list_splice_init(&hctx->dispatch, list);
+
+	/*
+	 * BUSY won't be cleared until all requests
+	 * in hctx->dispatch are dispatched successfully
+	 */
+	if (!list_empty(list))
+		blk_mq_hctx_set_busy(hctx);
+
+	spin_unlock(&hctx->lock);
+}
+
 #endif