From patchwork Mon Dec 17 10:42:45 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10733091
From: Ming Lei
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Ming Lei, Jeff Moyer, Mike Snitzer,
 Christoph Hellwig
Subject: [PATCH V2 1/4] blk-mq: fix allocation for queue mapping table
Date: Mon, 17 Dec 2018 18:42:45 +0800
Message-Id: <20181217104248.5828-2-ming.lei@redhat.com>
In-Reply-To: <20181217104248.5828-1-ming.lei@redhat.com>
References: <20181217104248.5828-1-ming.lei@redhat.com>
X-Mailing-List: linux-block@vger.kernel.org

The type of each element in the queue mapping table is 'unsigned int',
not 'struct blk_mq_queue_map', so allocate the table with the correct
element size.

Cc: Jeff Moyer
Cc: Mike Snitzer
Cc: Christoph Hellwig
Reviewed-by: Christoph Hellwig
Signed-off-by: Ming Lei
---
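[Not part of the patch: a minimal userspace sketch of the element-size
idiom the fix switches to. Sizing an allocation from the array element
itself (sizeof(map[0])) cannot drift from the array's real type the way
a hand-written type name can. 'struct big_descriptor' is invented here
as a stand-in for the much larger 'struct blk_mq_queue_map'.]

#include <stdio.h>
#include <stdlib.h>

/* Invented stand-in for the (much larger) 'struct blk_mq_queue_map'. */
struct big_descriptor { char pad[64]; };

int main(void)
{
	unsigned int *map;
	size_t n = 8;

	/* Bug pattern: 64 bytes per element for a table of 4-byte entries. */
	map = calloc(n, sizeof(struct big_descriptor));
	printf("over-allocated: %zu bytes\n", n * sizeof(struct big_descriptor));
	free(map);

	/* Fixed pattern: element size is taken from the array itself. */
	map = calloc(n, sizeof(map[0]));
	printf("correct: %zu bytes\n", n * sizeof(map[0]));
	free(map);
	return 0;
}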
 block/blk-mq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 2d3a29eb58ca..313f28b2d079 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3019,7 +3019,7 @@ int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set)
 	ret = -ENOMEM;
 	for (i = 0; i < set->nr_maps; i++) {
 		set->map[i].mq_map = kcalloc_node(nr_cpu_ids,
-						  sizeof(struct blk_mq_queue_map),
+						  sizeof(set->map[i].mq_map[0]),
 						  GFP_KERNEL, set->numa_node);
 		if (!set->map[i].mq_map)
 			goto out_free_mq_map;

From patchwork Mon Dec 17 10:42:46 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10733093
From: Ming Lei
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Jeff Moyer,
 Mike Snitzer, Ming Lei
Subject: [PATCH V2 2/4] blk-mq: fix shared queue mapping
Date: Mon, 17 Dec 2018 18:42:46 +0800
Message-Id: <20181217104248.5828-3-ming.lei@redhat.com>
In-Reply-To: <20181217104248.5828-1-ming.lei@redhat.com>
References: <20181217104248.5828-1-ming.lei@redhat.com>
X-Mailing-List: linux-block@vger.kernel.org

From: Christoph Hellwig

Even when poll_queues is zero, nvme's mapping for HCTX_TYPE_POLL may
still be set up via blk_mq_map_queues(), which produces a different
mapping than the HCTX_TYPE_DEFAULT mapping built from managed IRQ
affinity.

Such a mapping causes hctx->type to be over-written in
blk_mq_map_swqueue(), after which the whole mapping may become broken;
for example, one ctx can end up mapped to different hctxs of the same
hctx type. This bad mapping has caused an IO hang in a simple dd test,
as reported by Mike.

This patch sets map->nr_queues to zero explicitly when a queue type has
no queues, and maps to the correct hctx when .nr_queues of the queue
type is zero.

Cc: Jeff Moyer
Cc: Mike Snitzer
Cc: Christoph Hellwig
(don't handle zero .nr_queues map in blk_mq_map_swqueue())
Signed-off-by: Ming Lei
---
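[Not part of the patch: a userspace model of the selection rule the
blk-mq.h hunk below enforces, namely that a non-default hctx type is
only chosen when the tag set has that map AND the map actually has
queues. The names queue_map and pick_type are invented for this sketch;
only the HCTX_TYPE_* ordering mirrors blk-mq.]

#include <stdbool.h>
#include <stdio.h>

enum hctx_type { HCTX_TYPE_DEFAULT, HCTX_TYPE_READ, HCTX_TYPE_POLL, HCTX_MAX_TYPES };

struct queue_map { unsigned int nr_queues; };

static enum hctx_type pick_type(const struct queue_map map[HCTX_MAX_TYPES],
				int nr_maps, bool hipri, bool is_read)
{
	/* Skip a type whose map exists but carries zero queues. */
	if (hipri && nr_maps > HCTX_TYPE_POLL && map[HCTX_TYPE_POLL].nr_queues)
		return HCTX_TYPE_POLL;
	if (is_read && nr_maps > HCTX_TYPE_READ && map[HCTX_TYPE_READ].nr_queues)
		return HCTX_TYPE_READ;
	return HCTX_TYPE_DEFAULT;
}

int main(void)
{
	/* Poll map present but empty: a high-priority rq must fall back. */
	struct queue_map map[HCTX_MAX_TYPES] = { {4}, {0}, {0} };

	printf("hipri -> type %d\n", pick_type(map, HCTX_MAX_TYPES, true, false));
	map[HCTX_TYPE_POLL].nr_queues = 2;
	printf("hipri -> type %d\n", pick_type(map, HCTX_MAX_TYPES, true, false));
	return 0;
}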
 block/blk-mq.c          |  3 +++
 block/blk-mq.h          | 11 +++++++----
 drivers/nvme/host/pci.c |  6 +-----
 3 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 313f28b2d079..e843f23843c8 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2431,6 +2431,9 @@ static void blk_mq_map_swqueue(struct request_queue *q)
 		for (j = 0; j < set->nr_maps; j++) {
 			hctx = blk_mq_map_queue_type(q, j, i);
 
+			if (!set->map[j].nr_queues)
+				continue;
+
 			/*
 			 * If the CPU is already set in the mask, then we've
 			 * mapped this one already. This can happen if
diff --git a/block/blk-mq.h b/block/blk-mq.h
index b63a0de8a07a..f50c73d559d7 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -105,12 +105,15 @@ static inline struct blk_mq_hw_ctx *blk_mq_map_queue(struct request_queue *q,
 {
 	enum hctx_type type = HCTX_TYPE_DEFAULT;
 
-	if (q->tag_set->nr_maps > HCTX_TYPE_POLL &&
-	    ((flags & REQ_HIPRI) && test_bit(QUEUE_FLAG_POLL, &q->queue_flags)))
+	if ((flags & REQ_HIPRI) &&
+	    q->tag_set->nr_maps > HCTX_TYPE_POLL &&
+	    q->tag_set->map[HCTX_TYPE_POLL].nr_queues &&
+	    test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
 		type = HCTX_TYPE_POLL;
 
-	else if (q->tag_set->nr_maps > HCTX_TYPE_READ &&
-		 ((flags & REQ_OP_MASK) == REQ_OP_READ))
+	else if (((flags & REQ_OP_MASK) == REQ_OP_READ) &&
+		 q->tag_set->nr_maps > HCTX_TYPE_READ &&
+		 q->tag_set->map[HCTX_TYPE_READ].nr_queues)
 		type = HCTX_TYPE_READ;
 
 	return blk_mq_map_queue_type(q, type, cpu);
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index fb9d8270f32c..698b350b38cf 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -496,11 +496,7 @@ static int nvme_pci_map_queues(struct blk_mq_tag_set *set)
 		map->nr_queues = dev->io_queues[i];
 		if (!map->nr_queues) {
 			BUG_ON(i == HCTX_TYPE_DEFAULT);
-
-			/* shared set, resuse read set parameters */
-			map->nr_queues = dev->io_queues[HCTX_TYPE_DEFAULT];
-			qoff = 0;
-			offset = queue_irq_offset(dev);
+			continue;
 		}
 
 		/*
From patchwork Mon Dec 17 10:42:47 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10733095
From: Ming Lei
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Ming Lei, Jeff Moyer, Mike Snitzer,
 Christoph Hellwig
Subject: [PATCH V2 3/4] blk-mq: fix dispatch from sw queue
Date: Mon, 17 Dec 2018 18:42:47 +0800
Message-Id: <20181217104248.5828-4-ming.lei@redhat.com>
In-Reply-To: <20181217104248.5828-1-ming.lei@redhat.com>
References: <20181217104248.5828-1-ming.lei@redhat.com>
X-Mailing-List: linux-block@vger.kernel.org

Now that multiple queue mapping has been introduced, the requests in
the rq list of a sw queue (ctx) may target different hctxs, since each
ctx maps to one hctx per queue type. So we have to keep a
per-queue-type list inside the sw queue; otherwise a request may be
dispatched to the wrong hw queue.

Cc: Jeff Moyer
Cc: Mike Snitzer
Cc: Christoph Hellwig
Signed-off-by: Ming Lei
---
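[Not part of the patch: a userspace model of the data-structure change,
with invented names (sw_ctx, ctx_list, rq, ctx_insert). The point is
one (lock, rq_list) pair per hctx type, so insertion and dispatch for a
given type never touch another type's list.]

#include <pthread.h>
#include <stdio.h>

enum hctx_type { HCTX_TYPE_DEFAULT, HCTX_TYPE_READ, HCTX_TYPE_POLL, HCTX_MAX_TYPES };

struct rq { struct rq *next; int tag; };

/* Mirrors the shape of the new 'struct blk_mq_ctx_list'. */
struct ctx_list {
	pthread_mutex_t lock;
	struct rq *head;	/* stand-in for struct list_head rq_list */
};

/* Mirrors the shape of the reworked 'struct blk_mq_ctx'. */
struct sw_ctx {
	struct ctx_list list[HCTX_MAX_TYPES];
};

static void ctx_insert(struct sw_ctx *ctx, enum hctx_type type, struct rq *rq)
{
	/* Only the sub-queue of the request's own type is locked. */
	pthread_mutex_lock(&ctx->list[type].lock);
	rq->next = ctx->list[type].head;
	ctx->list[type].head = rq;
	pthread_mutex_unlock(&ctx->list[type].lock);
}

int main(void)
{
	struct sw_ctx ctx;
	struct rq r = { .next = NULL, .tag = 1 };

	for (int k = HCTX_TYPE_DEFAULT; k < HCTX_MAX_TYPES; k++) {
		pthread_mutex_init(&ctx.list[k].lock, NULL);
		ctx.list[k].head = NULL;
	}
	ctx_insert(&ctx, HCTX_TYPE_POLL, &r);
	printf("poll head tag: %d\n", ctx.list[HCTX_TYPE_POLL].head->tag);
	return 0;
}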
 block/blk-mq-debugfs.c | 69 ++++++++++++++++++++++++++++----------------------
 block/blk-mq-sched.c   | 23 +++++++++++------
 block/blk-mq.c         | 52 ++++++++++++++++++++++---------------
 block/blk-mq.h         | 10 +++++---
 4 files changed, 91 insertions(+), 63 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 2793e91bc7a4..7021d44cef6d 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -637,36 +637,43 @@ static int hctx_dispatch_busy_show(void *data, struct seq_file *m)
 	return 0;
 }
 
-static void *ctx_rq_list_start(struct seq_file *m, loff_t *pos)
-	__acquires(&ctx->lock)
-{
-	struct blk_mq_ctx *ctx = m->private;
-
-	spin_lock(&ctx->lock);
-	return seq_list_start(&ctx->rq_list, *pos);
-}
-
-static void *ctx_rq_list_next(struct seq_file *m, void *v, loff_t *pos)
-{
-	struct blk_mq_ctx *ctx = m->private;
-
-	return seq_list_next(v, &ctx->rq_list, pos);
-}
+#define CTX_RQ_SEQ_OPS(name, type)					\
+static void *ctx_##name##_rq_list_start(struct seq_file *m, loff_t *pos) \
+	__acquires(&ctx->lock)						\
+{									\
+	struct blk_mq_ctx *ctx = m->private;				\
+									\
+	spin_lock(&ctx->list[type].lock);				\
+	return seq_list_start(&ctx->list[type].rq_list, *pos);		\
+}									\
+									\
+static void *ctx_##name##_rq_list_next(struct seq_file *m, void *v,	\
+				       loff_t *pos)			\
+{									\
+	struct blk_mq_ctx *ctx = m->private;				\
+									\
+	return seq_list_next(v, &ctx->list[type].rq_list, pos);	\
+}									\
+									\
+static void ctx_##name##_rq_list_stop(struct seq_file *m, void *v)	\
+	__releases(&ctx->lock)						\
+{									\
+	struct blk_mq_ctx *ctx = m->private;				\
+									\
+	spin_unlock(&ctx->list[type].lock);				\
+}									\
+									\
+static const struct seq_operations ctx_##name##_rq_list_seq_ops = {	\
+	.start	= ctx_##name##_rq_list_start,				\
+	.next	= ctx_##name##_rq_list_next,				\
+	.stop	= ctx_##name##_rq_list_stop,				\
+	.show	= blk_mq_debugfs_rq_show,				\
+}
+
+CTX_RQ_SEQ_OPS(default, HCTX_TYPE_DEFAULT);
+CTX_RQ_SEQ_OPS(read, HCTX_TYPE_READ);
+CTX_RQ_SEQ_OPS(poll, HCTX_TYPE_POLL);
 
-static void ctx_rq_list_stop(struct seq_file *m, void *v)
-	__releases(&ctx->lock)
-{
-	struct blk_mq_ctx *ctx = m->private;
-
-	spin_unlock(&ctx->lock);
-}
-
-static const struct seq_operations ctx_rq_list_seq_ops = {
-	.start	= ctx_rq_list_start,
-	.next	= ctx_rq_list_next,
-	.stop	= ctx_rq_list_stop,
-	.show	= blk_mq_debugfs_rq_show,
-};
 static int ctx_dispatched_show(void *data, struct seq_file *m)
 {
 	struct blk_mq_ctx *ctx = data;
@@ -803,7 +810,9 @@ static const struct blk_mq_debugfs_attr blk_mq_debugfs_hctx_attrs[] = {
 };
 
 static const struct blk_mq_debugfs_attr blk_mq_debugfs_ctx_attrs[] = {
-	{"rq_list", 0400, .seq_ops = &ctx_rq_list_seq_ops},
+	{"default_rq_list", 0400, .seq_ops = &ctx_default_rq_list_seq_ops},
+	{"read_rq_list", 0400, .seq_ops = &ctx_read_rq_list_seq_ops},
+	{"poll_rq_list", 0400, .seq_ops = &ctx_poll_rq_list_seq_ops},
 	{"dispatched", 0600, ctx_dispatched_show, ctx_dispatched_write},
 	{"merged", 0600, ctx_merged_show, ctx_merged_write},
 	{"completed", 0600, ctx_completed_show, ctx_completed_write},
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 5b4d52d9cba2..09594947933e 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -301,11 +301,14 @@ EXPORT_SYMBOL_GPL(blk_mq_bio_list_merge);
  * too much time checking for merges.
  */
 static bool blk_mq_attempt_merge(struct request_queue *q,
+				 struct blk_mq_hw_ctx *hctx,
 				 struct blk_mq_ctx *ctx, struct bio *bio)
 {
-	lockdep_assert_held(&ctx->lock);
+	enum hctx_type type = hctx->type;
 
-	if (blk_mq_bio_list_merge(q, &ctx->rq_list, bio)) {
+	lockdep_assert_held(&ctx->list[type].lock);
+
+	if (blk_mq_bio_list_merge(q, &ctx->list[type].rq_list, bio)) {
 		ctx->rq_merged++;
 		return true;
 	}
@@ -319,18 +322,20 @@ bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio)
 	struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
 	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, bio->bi_opf, ctx->cpu);
 	bool ret = false;
+	enum hctx_type type;
 
 	if (e && e->type->ops.bio_merge) {
 		blk_mq_put_ctx(ctx);
 		return e->type->ops.bio_merge(hctx, bio);
 	}
 
+	type = hctx->type;
 	if ((hctx->flags & BLK_MQ_F_SHOULD_MERGE) &&
-	    !list_empty_careful(&ctx->rq_list)) {
+	    !list_empty_careful(&ctx->list[type].rq_list)) {
 		/* default per sw-queue merge */
-		spin_lock(&ctx->lock);
-		ret = blk_mq_attempt_merge(q, ctx, bio);
-		spin_unlock(&ctx->lock);
+		spin_lock(&ctx->list[type].lock);
+		ret = blk_mq_attempt_merge(q, hctx, ctx, bio);
+		spin_unlock(&ctx->list[type].lock);
 	}
 
 	blk_mq_put_ctx(ctx);
@@ -392,9 +397,11 @@ void blk_mq_sched_insert_request(struct request *rq, bool at_head,
 		list_add(&rq->queuelist, &list);
 		e->type->ops.insert_requests(hctx, &list, at_head);
 	} else {
-		spin_lock(&ctx->lock);
+		enum hctx_type type = hctx->type;
+
+		spin_lock(&ctx->list[type].lock);
 		__blk_mq_insert_request(hctx, rq, at_head);
-		spin_unlock(&ctx->lock);
+		spin_unlock(&ctx->list[type].lock);
 	}
 
 run:
diff --git a/block/blk-mq.c b/block/blk-mq.c
index e843f23843c8..27303951a752 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -958,11 +958,12 @@ static bool flush_busy_ctx(struct sbitmap *sb, unsigned int bitnr, void *data)
 	struct flush_busy_ctx_data *flush_data = data;
 	struct blk_mq_hw_ctx *hctx = flush_data->hctx;
 	struct blk_mq_ctx *ctx = hctx->ctxs[bitnr];
+	enum hctx_type type = hctx->type;
 
-	spin_lock(&ctx->lock);
-	list_splice_tail_init(&ctx->rq_list, flush_data->list);
+	spin_lock(&ctx->list[type].lock);
+	list_splice_tail_init(&ctx->list[type].rq_list, flush_data->list);
 	sbitmap_clear_bit(sb, bitnr);
-	spin_unlock(&ctx->lock);
+	spin_unlock(&ctx->list[type].lock);
 
 	return true;
 }
@@ -992,15 +993,16 @@ static bool dispatch_rq_from_ctx(struct sbitmap *sb, unsigned int bitnr,
 	struct dispatch_rq_data *dispatch_data = data;
 	struct blk_mq_hw_ctx *hctx = dispatch_data->hctx;
 	struct blk_mq_ctx *ctx = hctx->ctxs[bitnr];
+	enum hctx_type type = hctx->type;
 
-	spin_lock(&ctx->lock);
-	if (!list_empty(&ctx->rq_list)) {
-		dispatch_data->rq = list_entry_rq(ctx->rq_list.next);
+	spin_lock(&ctx->list[type].lock);
+	if (!list_empty(&ctx->list[type].rq_list)) {
+		dispatch_data->rq = list_entry_rq(ctx->list[type].rq_list.next);
 		list_del_init(&dispatch_data->rq->queuelist);
-		if (list_empty(&ctx->rq_list))
+		if (list_empty(&ctx->list[type].rq_list))
 			sbitmap_clear_bit(sb, bitnr);
 	}
-	spin_unlock(&ctx->lock);
+	spin_unlock(&ctx->list[type].lock);
 
 	return !dispatch_data->rq;
 }
@@ -1608,15 +1610,16 @@ static inline void __blk_mq_insert_req_list(struct blk_mq_hw_ctx *hctx,
 					    bool at_head)
 {
 	struct blk_mq_ctx *ctx = rq->mq_ctx;
+	enum hctx_type type = hctx->type;
 
-	lockdep_assert_held(&ctx->lock);
+	lockdep_assert_held(&ctx->list[type].lock);
 
 	trace_block_rq_insert(hctx->queue, rq);
 
 	if (at_head)
-		list_add(&rq->queuelist, &ctx->rq_list);
+		list_add(&rq->queuelist, &ctx->list[type].rq_list);
 	else
-		list_add_tail(&rq->queuelist, &ctx->rq_list);
+		list_add_tail(&rq->queuelist, &ctx->list[type].rq_list);
 }
 
 void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
@@ -1624,7 +1627,7 @@ void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 {
 	struct blk_mq_ctx *ctx = rq->mq_ctx;
 
-	lockdep_assert_held(&ctx->lock);
+	lockdep_assert_held(&ctx->list[hctx->type].lock);
 
 	__blk_mq_insert_req_list(hctx, rq, at_head);
 	blk_mq_hctx_mark_pending(hctx, ctx);
@@ -1651,6 +1654,7 @@ void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
 
 {
 	struct request *rq;
+	enum hctx_type type = hctx->type;
 
 	/*
 	 * preemption doesn't flush plug list, so it's possible ctx->cpu is
@@ -1661,10 +1665,10 @@ void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
 		trace_block_rq_insert(hctx->queue, rq);
 	}
 
-	spin_lock(&ctx->lock);
-	list_splice_tail_init(list, &ctx->rq_list);
+	spin_lock(&ctx->list[type].lock);
+	list_splice_tail_init(list, &ctx->list[type].rq_list);
 	blk_mq_hctx_mark_pending(hctx, ctx);
-	spin_unlock(&ctx->lock);
+	spin_unlock(&ctx->list[type].lock);
 }
 
 static int plug_rq_cmp(void *priv, struct list_head *a, struct list_head *b)
@@ -2200,16 +2204,18 @@ static int blk_mq_hctx_notify_dead(unsigned int cpu, struct hlist_node *node)
 	struct blk_mq_hw_ctx *hctx;
 	struct blk_mq_ctx *ctx;
 	LIST_HEAD(tmp);
+	enum hctx_type type;
 
 	hctx = hlist_entry_safe(node, struct blk_mq_hw_ctx, cpuhp_dead);
 	ctx = __blk_mq_get_ctx(hctx->queue, cpu);
+	type = hctx->type;
 
-	spin_lock(&ctx->lock);
-	if (!list_empty(&ctx->rq_list)) {
-		list_splice_init(&ctx->rq_list, &tmp);
+	spin_lock(&ctx->list[type].lock);
+	if (!list_empty(&ctx->list[type].rq_list)) {
+		list_splice_init(&ctx->list[type].rq_list, &tmp);
 		blk_mq_hctx_clear_pending(hctx, ctx);
 	}
-	spin_unlock(&ctx->lock);
+	spin_unlock(&ctx->list[type].lock);
 
 	if (list_empty(&tmp))
 		return 0;
@@ -2343,10 +2349,14 @@ static void blk_mq_init_cpu_queues(struct request_queue *q,
 	for_each_possible_cpu(i) {
 		struct blk_mq_ctx *__ctx = per_cpu_ptr(q->queue_ctx, i);
 		struct blk_mq_hw_ctx *hctx;
+		int k;
 
 		__ctx->cpu = i;
-		spin_lock_init(&__ctx->lock);
-		INIT_LIST_HEAD(&__ctx->rq_list);
+
+		for (k = HCTX_TYPE_DEFAULT; k < HCTX_MAX_TYPES; k++) {
+			spin_lock_init(&__ctx->list[k].lock);
+			INIT_LIST_HEAD(&__ctx->list[k].rq_list);
+		}
 		__ctx->queue = q;
 
 		/*
diff --git a/block/blk-mq.h b/block/blk-mq.h
index f50c73d559d7..39064144326e 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -12,14 +12,16 @@ struct blk_mq_ctxs {
 	struct blk_mq_ctx __percpu	*queue_ctx;
 };
 
+struct blk_mq_ctx_list {
+	spinlock_t		lock;
+	struct list_head	rq_list;
+} ____cacheline_aligned_in_smp;
+
 /**
  * struct blk_mq_ctx - State for a software queue facing the submitting CPUs
  */
 struct blk_mq_ctx {
-	struct {
-		spinlock_t		lock;
-		struct list_head	rq_list;
-	} ____cacheline_aligned_in_smp;
+	struct blk_mq_ctx_list	list[HCTX_MAX_TYPES];
 
 	unsigned int		cpu;
 	unsigned short		index_hw[HCTX_MAX_TYPES];
From patchwork Mon Dec 17 10:42:48 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10733097
From: Ming Lei
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Ming Lei, Jeff Moyer, Mike Snitzer,
 Christoph Hellwig
Subject: [PATCH V2 4/4] blk-mq: export hctx->type in debugfs instead of sysfs
Date: Mon, 17 Dec 2018 18:42:48 +0800
Message-Id: <20181217104248.5828-5-ming.lei@redhat.com>
In-Reply-To: <20181217104248.5828-1-ming.lei@redhat.com>
References: <20181217104248.5828-1-ming.lei@redhat.com>
X-Mailing-List: linux-block@vger.kernel.org

Currently we only export hctx->type via sysfs, and there is no such
info in the hctx entry under debugfs. We often use debugfs alone to
diagnose queue mapping issues, so add the support to debugfs. Queue
mapping becomes a bit more complicated now that multiple queue maps are
supported, and we may write a blktests case that verifies the queue
mapping is valid based on blk-mq-debugfs.

Given it isn't necessary to export hctx->type twice, remove the export
from sysfs.

Cc: Jeff Moyer
Cc: Mike Snitzer
Cc: Christoph Hellwig
Signed-off-by: Ming Lei
Reviewed-by: Christoph Hellwig
---
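[Not part of the patch: a small userspace reader for the new attribute,
assuming the usual blk-mq-debugfs layout
(/sys/kernel/debug/block/<disk>/hctxN/type), debugfs mounted, and root
privileges; adjust the path if your kernel lays it out differently.]

#include <stdio.h>

int main(int argc, char **argv)
{
	const char *disk = argc > 1 ? argv[1] : "nvme0n1";
	char path[256], buf[32];
	FILE *f;

	for (int i = 0; ; i++) {
		snprintf(path, sizeof(path),
			 "/sys/kernel/debug/block/%s/hctx%d/type", disk, i);
		f = fopen(path, "r");
		if (!f)
			break;	/* no more hardware queues */
		if (fgets(buf, sizeof(buf), f))
			printf("hctx%d: %s", i, buf);	/* default/read/poll */
		fclose(f);
	}
	return 0;
}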
 block/blk-mq-debugfs.c | 16 ++++++++++++++++
 block/blk-mq-sysfs.c   | 17 -----------------
 2 files changed, 16 insertions(+), 17 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 7021d44cef6d..159607be5158 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -447,6 +447,21 @@ static int hctx_busy_show(void *data, struct seq_file *m)
 	return 0;
 }
 
+static const char *const hctx_types[] = {
+	[HCTX_TYPE_DEFAULT]	 = "default",
+	[HCTX_TYPE_READ]	 = "read",
+	[HCTX_TYPE_POLL]	 = "poll",
+};
+
+static int hctx_type_show(void *data, struct seq_file *m)
+{
+	struct blk_mq_hw_ctx *hctx = data;
+
+	BUILD_BUG_ON(ARRAY_SIZE(hctx_types) != HCTX_MAX_TYPES);
+	seq_printf(m, "%s\n", hctx_types[hctx->type]);
+	return 0;
+}
+
 static int hctx_ctx_map_show(void *data, struct seq_file *m)
 {
 	struct blk_mq_hw_ctx *hctx = data;
@@ -806,6 +821,7 @@ static const struct blk_mq_debugfs_attr blk_mq_debugfs_hctx_attrs[] = {
 	{"run", 0600, hctx_run_show, hctx_run_write},
 	{"active", 0400, hctx_active_show},
 	{"dispatch_busy", 0400, hctx_dispatch_busy_show},
+	{"type", 0400, hctx_type_show},
 	{},
 };
 
diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
index 9c2df137256a..3f9c3f4ac44c 100644
--- a/block/blk-mq-sysfs.c
+++ b/block/blk-mq-sysfs.c
@@ -173,18 +173,6 @@ static ssize_t blk_mq_hw_sysfs_cpus_show(struct blk_mq_hw_ctx *hctx, char *page)
 	return ret;
 }
 
-static const char *const hctx_types[] = {
-	[HCTX_TYPE_DEFAULT]	 = "default",
-	[HCTX_TYPE_READ]	 = "read",
-	[HCTX_TYPE_POLL]	 = "poll",
-};
-
-static ssize_t blk_mq_hw_sysfs_type_show(struct blk_mq_hw_ctx *hctx, char *page)
-{
-	BUILD_BUG_ON(ARRAY_SIZE(hctx_types) != HCTX_MAX_TYPES);
-	return sprintf(page, "%s\n", hctx_types[hctx->type]);
-}
-
 static struct attribute *default_ctx_attrs[] = {
 	NULL,
 };
@@ -201,16 +189,11 @@ static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_cpus = {
 	.attr = {.name = "cpu_list", .mode = 0444 },
 	.show = blk_mq_hw_sysfs_cpus_show,
 };
-static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_type = {
-	.attr = {.name = "type", .mode = 0444 },
-	.show = blk_mq_hw_sysfs_type_show,
-};
 
 static struct attribute *default_hw_ctx_attrs[] = {
 	&blk_mq_hw_sysfs_nr_tags.attr,
 	&blk_mq_hw_sysfs_nr_reserved_tags.attr,
 	&blk_mq_hw_sysfs_cpus.attr,
-	&blk_mq_hw_sysfs_type.attr,
 	NULL,
 };