From: Ming Lei
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Ming Lei, Bart Van Assche, Hannes Reinecke,
    Christoph Hellwig, Thomas Gleixner, Keith Busch, John Garry
Subject: [PATCH V4 1/5] blk-mq: add new state of BLK_MQ_S_INTERNAL_STOPPED
Date: Mon, 14 Oct 2019 09:50:39 +0800
Message-Id: <20191014015043.25029-2-ming.lei@redhat.com>
In-Reply-To: <20191014015043.25029-1-ming.lei@redhat.com>
References: <20191014015043.25029-1-ming.lei@redhat.com>

Add a new hw queue state, BLK_MQ_S_INTERNAL_STOPPED, which prepares for
stopping the hw queue before all CPUs of this hctx become offline.

We can't reuse BLK_MQ_S_STOPPED because that state can be cleared during
IO completion.
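For illustration only (this sketch is not part of the patch; the driver name,
struct, and watermark are hypothetical), the following is the kind of driver
restart path that can clear BLK_MQ_S_STOPPED from the completion side at any
time, which is why that bit cannot double as the CPU-hotplug stop marker:

#include <linux/blk-mq.h>

#define EXAMPLEDRV_LOW_WATERMARK	8	/* hypothetical threshold */

struct exampledrv_queue {			/* hypothetical driver data */
	struct request_queue *q;
	atomic_t inflight;
};

/*
 * A driver that stops its hw queues on resource exhaustion and restarts
 * them once requests complete: blk_mq_start_stopped_hw_queues() clears
 * BLK_MQ_S_STOPPED, so hotplug code can't rely on that bit staying set.
 */
static void exampledrv_complete_rq(struct exampledrv_queue *eq,
				   struct request *rq)
{
	blk_mq_end_request(rq, BLK_STS_OK);

	if (atomic_dec_return(&eq->inflight) < EXAMPLEDRV_LOW_WATERMARK)
		blk_mq_start_stopped_hw_queues(eq->q, true);
}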
Cc: Bart Van Assche
Cc: Hannes Reinecke
Cc: Christoph Hellwig
Cc: Thomas Gleixner
Cc: Keith Busch
Cc: John Garry
Reviewed-by: Hannes Reinecke
Signed-off-by: Ming Lei
---
 block/blk-mq-debugfs.c | 1 +
 block/blk-mq.h         | 3 ++-
 include/linux/blk-mq.h | 3 +++
 3 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index b3f2ba483992..af40a02c46ee 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -213,6 +213,7 @@ static const char *const hctx_state_name[] = {
 	HCTX_STATE_NAME(STOPPED),
 	HCTX_STATE_NAME(TAG_ACTIVE),
 	HCTX_STATE_NAME(SCHED_RESTART),
+	HCTX_STATE_NAME(INTERNAL_STOPPED),
 };
 #undef HCTX_STATE_NAME
 
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 32c62c64e6c2..63717573bc16 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -176,7 +176,8 @@ static inline struct blk_mq_tags *blk_mq_tags_from_data(struct blk_mq_alloc_data
 
 static inline bool blk_mq_hctx_stopped(struct blk_mq_hw_ctx *hctx)
 {
-	return test_bit(BLK_MQ_S_STOPPED, &hctx->state);
+	return test_bit(BLK_MQ_S_STOPPED, &hctx->state) ||
+		test_bit(BLK_MQ_S_INTERNAL_STOPPED, &hctx->state);
 }
 
 static inline bool blk_mq_hw_queue_mapped(struct blk_mq_hw_ctx *hctx)
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 0bf056de5cc3..079c282e4471 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -235,6 +235,9 @@ enum {
 	BLK_MQ_S_TAG_ACTIVE	= 1,
 	BLK_MQ_S_SCHED_RESTART	= 2,
 
+	/* hw queue is internal stopped, driver do not use it */
+	BLK_MQ_S_INTERNAL_STOPPED	= 3,
+
 	BLK_MQ_MAX_DEPTH	= 10240,
 
 	BLK_MQ_CPU_WORK_BATCH	= 8,

From: Ming Lei
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Ming Lei, John Garry, Bart Van Assche,
    Hannes Reinecke, Christoph Hellwig, Thomas Gleixner, Keith Busch
Subject: [PATCH V4 2/5] blk-mq: prepare for draining IO when hctx's all CPUs are offline
Date: Mon, 14 Oct 2019 09:50:40 +0800
Message-Id: <20191014015043.25029-3-ming.lei@redhat.com>
In-Reply-To: <20191014015043.25029-1-ming.lei@redhat.com>
References: <20191014015043.25029-1-ming.lei@redhat.com>
Most blk-mq drivers depend on a managed IRQ's auto-affinity to set up the
queue mapping. Thomas mentioned the following point[1]:

    "That was the constraint of managed interrupts from the very beginning:
     The driver/subsystem has to quiesce the interrupt line and the associated
     queue _before_ it gets shutdown in CPU unplug and not fiddle with it
     until it's restarted by the core when the CPU is plugged in again."

However, the current blk-mq implementation doesn't quiesce the hw queue
before the last CPU in the hctx is shut down. Even worse, CPUHP_BLK_MQ_DEAD
is a cpuhp state handled after the CPU is down, so there isn't any chance
to quiesce the hctx for blk-mq wrt. CPU hotplug.

Add a new cpuhp state, CPUHP_AP_BLK_MQ_ONLINE, for blk-mq to stop queues
and wait for completion of in-flight requests. The following patch will
stop the hw queue and wait for completion of in-flight requests when one
hctx is becoming dead.

This approach may cause deadlock for some stacking blk-mq drivers, such as
dm-rq and loop. Add the blk-mq flag BLK_MQ_F_NO_MANAGED_IRQ and mark it for
dm-rq and loop, so we needn't wait for completion of in-flight requests
from dm-rq and loop, and the potential deadlock can be avoided.

[1] https://lore.kernel.org/linux-block/alpine.DEB.2.21.1904051331270.1802@nanos.tec.linutronix.de/

Cc: John Garry
Cc: Bart Van Assche
Cc: Hannes Reinecke
Cc: Christoph Hellwig
Cc: Thomas Gleixner
Cc: Keith Busch
Signed-off-by: Ming Lei
---
 block/blk-mq-debugfs.c     |  1 +
 block/blk-mq.c             | 13 +++++++++++++
 drivers/block/loop.c       |  2 +-
 drivers/md/dm-rq.c         |  2 +-
 include/linux/blk-mq.h     |  2 ++
 include/linux/cpuhotplug.h |  1 +
 6 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index af40a02c46ee..24fff8c90942 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -240,6 +240,7 @@ static const char *const hctx_flag_name[] = {
 	HCTX_FLAG_NAME(TAG_SHARED),
 	HCTX_FLAG_NAME(BLOCKING),
 	HCTX_FLAG_NAME(NO_SCHED),
+	HCTX_FLAG_NAME(NO_MANAGED_IRQ),
 };
 #undef HCTX_FLAG_NAME
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index ec791156e9cc..a664f196782a 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2225,6 +2225,11 @@ int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 	return -ENOMEM;
 }
 
+static int blk_mq_hctx_notify_online(unsigned int cpu, struct hlist_node *node)
+{
+	return 0;
+}
+
 /*
  * 'cpu' is going away. splice any existing rq_list entries from this
  * software queue to the hw queue dispatch list, and ensure that it
@@ -2261,6 +2266,9 @@ static int blk_mq_hctx_notify_dead(unsigned int cpu, struct hlist_node *node)
 
 static void blk_mq_remove_cpuhp(struct blk_mq_hw_ctx *hctx)
 {
+	if (!(hctx->flags & BLK_MQ_F_NO_MANAGED_IRQ))
+		cpuhp_state_remove_instance_nocalls(CPUHP_AP_BLK_MQ_ONLINE,
+						    &hctx->cpuhp_online);
 	cpuhp_state_remove_instance_nocalls(CPUHP_BLK_MQ_DEAD,
 					    &hctx->cpuhp_dead);
 }
@@ -2320,6 +2328,9 @@ static int blk_mq_init_hctx(struct request_queue *q,
 {
 	hctx->queue_num = hctx_idx;
 
+	if (!(hctx->flags & BLK_MQ_F_NO_MANAGED_IRQ))
+		cpuhp_state_add_instance_nocalls(CPUHP_AP_BLK_MQ_ONLINE,
+				&hctx->cpuhp_online);
 	cpuhp_state_add_instance_nocalls(CPUHP_BLK_MQ_DEAD, &hctx->cpuhp_dead);
 
 	hctx->tags = set->tags[hctx_idx];
@@ -3547,6 +3558,8 @@ static int __init blk_mq_init(void)
 {
 	cpuhp_setup_state_multi(CPUHP_BLK_MQ_DEAD, "block/mq:dead", NULL,
 				blk_mq_hctx_notify_dead);
+	cpuhp_setup_state_multi(CPUHP_AP_BLK_MQ_ONLINE, "block/mq:online",
+				NULL, blk_mq_hctx_notify_online);
 	return 0;
 }
 subsys_initcall(blk_mq_init);
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index f6f77eaa7217..751a28a1d4b0 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1999,7 +1999,7 @@ static int loop_add(struct loop_device **l, int i)
 	lo->tag_set.queue_depth = 128;
 	lo->tag_set.numa_node = NUMA_NO_NODE;
 	lo->tag_set.cmd_size = sizeof(struct loop_cmd);
-	lo->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
+	lo->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_NO_MANAGED_IRQ;
 	lo->tag_set.driver_data = lo;
 
 	err = blk_mq_alloc_tag_set(&lo->tag_set);
diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
index 3f8577e2c13b..5f1ff70ac029 100644
--- a/drivers/md/dm-rq.c
+++ b/drivers/md/dm-rq.c
@@ -547,7 +547,7 @@ int dm_mq_init_request_queue(struct mapped_device *md, struct dm_table *t)
 	md->tag_set->ops = &dm_mq_ops;
 	md->tag_set->queue_depth = dm_get_blk_mq_queue_depth();
 	md->tag_set->numa_node = md->numa_node_id;
-	md->tag_set->flags = BLK_MQ_F_SHOULD_MERGE;
+	md->tag_set->flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_NO_MANAGED_IRQ;
 	md->tag_set->nr_hw_queues = dm_get_blk_mq_nr_hw_queues();
 	md->tag_set->driver_data = md;
 
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 079c282e4471..a345f2cf920d 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -58,6 +58,7 @@ struct blk_mq_hw_ctx {
 
 	atomic_t		nr_active;
 
+	struct hlist_node	cpuhp_online;
 	struct hlist_node	cpuhp_dead;
 	struct kobject		kobj;
 
@@ -226,6 +227,7 @@ struct blk_mq_ops {
 enum {
 	BLK_MQ_F_SHOULD_MERGE	= 1 << 0,
 	BLK_MQ_F_TAG_SHARED	= 1 << 1,
+	BLK_MQ_F_NO_MANAGED_IRQ	= 1 << 2,
 	BLK_MQ_F_BLOCKING	= 1 << 5,
 	BLK_MQ_F_NO_SCHED	= 1 << 6,
 	BLK_MQ_F_ALLOC_POLICY_START_BIT = 8,
diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index 068793a619ca..bb80f52040cb 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -147,6 +147,7 @@ enum cpuhp_state {
 	CPUHP_AP_SMPBOOT_THREADS,
 	CPUHP_AP_X86_VDSO_VMA_ONLINE,
 	CPUHP_AP_IRQ_AFFINITY_ONLINE,
+	CPUHP_AP_BLK_MQ_ONLINE,
 	CPUHP_AP_ARM_MVEBU_SYNC_CLOCKS,
 	CPUHP_AP_X86_INTEL_EPB_ONLINE,
 	CPUHP_AP_PERF_ONLINE,
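For reference, a hedged sketch of the managed-IRQ setup this series targets
(the driver name and struct are hypothetical, not part of the patch): the PCI
core spreads MSI-X vectors over the CPUs, and blk-mq derives the hctx <-> CPU
mapping from that spread, so the mapping cannot be changed when a CPU goes
away; the queue has to be drained instead.

#include <linux/blk-mq.h>
#include <linux/blk-mq-pci.h>
#include <linux/pci.h>

struct example_dev {				/* hypothetical driver data */
	struct pci_dev *pdev;
	struct blk_mq_tag_set tag_set;
};

/* Ask the PCI core for managed (affinity-spread) MSI-X vectors. */
static int example_setup_vectors(struct example_dev *dev)
{
	return pci_alloc_irq_vectors(dev->pdev, 1, num_possible_cpus(),
				     PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
}

/*
 * Reuse the affinity the PCI core assigned to each vector for the queue
 * mapping; drivers like this one keep BLK_MQ_F_NO_MANAGED_IRQ unset.
 */
static int example_map_queues(struct blk_mq_tag_set *set)
{
	struct example_dev *dev = set->driver_data;

	return blk_mq_pci_map_queues(&set->map[HCTX_TYPE_DEFAULT],
				     dev->pdev, 0);
}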
From: Ming Lei
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Ming Lei, John Garry, Bart Van Assche,
    Hannes Reinecke, Christoph Hellwig, Thomas Gleixner, Keith Busch
Subject: [PATCH V4 3/5] blk-mq: stop to handle IO and drain IO before hctx becomes dead
Date: Mon, 14 Oct 2019 09:50:41 +0800
Message-Id: <20191014015043.25029-4-ming.lei@redhat.com>
In-Reply-To: <20191014015043.25029-1-ming.lei@redhat.com>
References: <20191014015043.25029-1-ming.lei@redhat.com>

Before one CPU goes offline, check whether it is the last online CPU of the
hctx. If so, mark this hctx as BLK_MQ_S_INTERNAL_STOPPED, and meanwhile wait
for completion of all in-flight IO originated from this hctx.

This guarantees that there is no in-flight IO before the managed IRQ line
is shut down.

Cc: John Garry
Cc: Bart Van Assche
Cc: Hannes Reinecke
Cc: Christoph Hellwig
Cc: Thomas Gleixner
Cc: Keith Busch
Signed-off-by: Ming Lei
---
 block/blk-mq-tag.c |  2 +-
 block/blk-mq-tag.h |  2 ++
 block/blk-mq.c     | 40 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 008388e82b5c..31828b82552b 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -325,7 +325,7 @@ static void bt_tags_for_each(struct blk_mq_tags *tags, struct sbitmap_queue *bt,
  *		true to continue iterating tags, false to stop.
  * @priv:	Will be passed as second argument to @fn.
  */
-static void blk_mq_all_tag_busy_iter(struct blk_mq_tags *tags,
+void blk_mq_all_tag_busy_iter(struct blk_mq_tags *tags,
 		busy_tag_iter_fn *fn, void *priv)
 {
 	if (tags->nr_reserved_tags)
diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h
index 61deab0b5a5a..321fd6f440e6 100644
--- a/block/blk-mq-tag.h
+++ b/block/blk-mq-tag.h
@@ -35,6 +35,8 @@ extern int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
 extern void blk_mq_tag_wakeup_all(struct blk_mq_tags *tags, bool);
 void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
 		void *priv);
+void blk_mq_all_tag_busy_iter(struct blk_mq_tags *tags,
+		busy_tag_iter_fn *fn, void *priv);
 
 static inline struct sbq_wait_state *bt_wait_ptr(struct sbitmap_queue *bt,
 						 struct blk_mq_hw_ctx *hctx)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index a664f196782a..3384242202eb 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2225,8 +2225,46 @@ int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 	return -ENOMEM;
 }
 
+static bool blk_mq_count_inflight_rq(struct request *rq, void *data,
+				     bool reserved)
+{
+	unsigned *count = data;
+
+	if ((blk_mq_rq_state(rq) == MQ_RQ_IN_FLIGHT))
+		(*count)++;
+
+	return true;
+}
+
+static unsigned blk_mq_tags_inflight_rqs(struct blk_mq_tags *tags)
+{
+	unsigned count = 0;
+
+	blk_mq_all_tag_busy_iter(tags, blk_mq_count_inflight_rq, &count);
+
+	return count;
+}
+
+static void blk_mq_hctx_drain_inflight_rqs(struct blk_mq_hw_ctx *hctx)
+{
+	while (1) {
+		if (!blk_mq_tags_inflight_rqs(hctx->tags))
+			break;
+		msleep(5);
+	}
+}
+
 static int blk_mq_hctx_notify_online(unsigned int cpu, struct hlist_node *node)
 {
+	struct blk_mq_hw_ctx *hctx = hlist_entry_safe(node,
+			struct blk_mq_hw_ctx, cpuhp_online);
+
+	if ((cpumask_next_and(-1, hctx->cpumask, cpu_online_mask) == cpu) &&
+	    (cpumask_next_and(cpu, hctx->cpumask, cpu_online_mask) >=
+	     nr_cpu_ids)) {
+		set_bit(BLK_MQ_S_INTERNAL_STOPPED, &hctx->state);
+		blk_mq_hctx_drain_inflight_rqs(hctx);
+	}
 	return 0;
 }
 
@@ -2246,6 +2284,8 @@ static int blk_mq_hctx_notify_dead(unsigned int cpu, struct hlist_node *node)
 	ctx = __blk_mq_get_ctx(hctx->queue, cpu);
 	type = hctx->type;
 
+	clear_bit(BLK_MQ_S_INTERNAL_STOPPED, &hctx->state);
+
 	spin_lock(&ctx->lock);
 	if (!list_empty(&ctx->rq_lists[type])) {
 		list_splice_init(&ctx->rq_lists[type], &tmp);
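For readability, the cpumask test used in blk_mq_hctx_notify_online() above
can be read as the following standalone helper (a hedged sketch, not part of
the patch; the helper name is made up): 'cpu' is about to go offline, and it
counts as the last online CPU of the hctx when it is both the first and the
last set bit in (hctx->cpumask & cpu_online_mask).

#include <linux/blk-mq.h>
#include <linux/cpumask.h>

static bool example_last_cpu_of_hctx(unsigned int cpu,
				     struct blk_mq_hw_ctx *hctx)
{
	/* first online CPU of the hctx is 'cpu' ... */
	return cpumask_next_and(-1, hctx->cpumask, cpu_online_mask) == cpu &&
	       /* ... and no further online CPU exists in the hctx mask */
	       cpumask_next_and(cpu, hctx->cpumask, cpu_online_mask) >=
	       nr_cpu_ids;
}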
From: Ming Lei
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Ming Lei, John Garry, Bart Van Assche,
    Hannes Reinecke, Christoph Hellwig, Thomas Gleixner, Keith Busch
Subject: [PATCH V4 4/5] blk-mq: re-submit IO in case that hctx is dead
Date: Mon, 14 Oct 2019 09:50:42 +0800
Message-Id: <20191014015043.25029-5-ming.lei@redhat.com>
In-Reply-To: <20191014015043.25029-1-ming.lei@redhat.com>
References: <20191014015043.25029-1-ming.lei@redhat.com>

When all CPUs in one hctx are offline, we shouldn't run this hw queue to
complete requests any more. So steal bios from the request, re-submit them,
and finally free the request in blk_mq_hctx_notify_dead().

Cc: John Garry
Cc: Bart Van Assche
Cc: Hannes Reinecke
Cc: Christoph Hellwig
Cc: Thomas Gleixner
Cc: Keith Busch
Signed-off-by: Ming Lei
---
 block/blk-mq.c | 54 ++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 48 insertions(+), 6 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 3384242202eb..17f0a9ef32a8 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2268,10 +2268,34 @@ static int blk_mq_hctx_notify_online(unsigned int cpu, struct hlist_node *node)
 	return 0;
 }
 
+static void blk_mq_resubmit_io(struct request *rq)
+{
+	struct bio_list list;
+	struct bio *bio;
+
+	bio_list_init(&list);
+	blk_steal_bios(&list, rq);
+
+	/*
+	 * Free the old empty request before submitting bio for avoiding
+	 * potential deadlock
+	 */
+	blk_mq_cleanup_rq(rq);
+	blk_mq_end_request(rq, 0);
+
+	while (true) {
+		bio = bio_list_pop(&list);
+		if (!bio)
+			break;
+
+		generic_make_request(bio);
+	}
+}
+
 /*
- * 'cpu' is going away. splice any existing rq_list entries from this
- * software queue to the hw queue dispatch list, and ensure that it
- * gets run.
+ * 'cpu' has gone away. If this hctx is dead, we can't dispatch request
+ * to the hctx any more, so steal bios from requests of this hctx, and
+ * re-submit them to the request queue, and free these requests finally.
  */
 static int blk_mq_hctx_notify_dead(unsigned int cpu, struct hlist_node *node)
 {
@@ -2279,6 +2303,8 @@ static int blk_mq_hctx_notify_dead(unsigned int cpu, struct hlist_node *node)
 	struct blk_mq_ctx *ctx;
 	LIST_HEAD(tmp);
 	enum hctx_type type;
+	bool hctx_dead;
+	struct request *rq;
 
 	hctx = hlist_entry_safe(node, struct blk_mq_hw_ctx, cpuhp_dead);
 	ctx = __blk_mq_get_ctx(hctx->queue, cpu);
@@ -2286,6 +2312,9 @@ static int blk_mq_hctx_notify_dead(unsigned int cpu, struct hlist_node *node)
 
 	clear_bit(BLK_MQ_S_INTERNAL_STOPPED, &hctx->state);
 
+	hctx_dead = cpumask_first_and(hctx->cpumask, cpu_online_mask) >=
+		nr_cpu_ids;
+
 	spin_lock(&ctx->lock);
 	if (!list_empty(&ctx->rq_lists[type])) {
 		list_splice_init(&ctx->rq_lists[type], &tmp);
@@ -2293,14 +2322,27 @@ static int blk_mq_hctx_notify_dead(unsigned int cpu, struct hlist_node *node)
 	}
 	spin_unlock(&ctx->lock);
 
-	if (list_empty(&tmp))
+	if (!hctx_dead) {
+		if (list_empty(&tmp))
+			return 0;
+		spin_lock(&hctx->lock);
+		list_splice_tail_init(&tmp, &hctx->dispatch);
+		spin_unlock(&hctx->lock);
+		blk_mq_run_hw_queue(hctx, true);
 		return 0;
+	}
 
+	/* requests in dispatch list has to be re-submitted too */
 	spin_lock(&hctx->lock);
-	list_splice_tail_init(&tmp, &hctx->dispatch);
+	list_splice_tail_init(&hctx->dispatch, &tmp);
 	spin_unlock(&hctx->lock);
-	blk_mq_run_hw_queue(hctx, true);
 
+	while (!list_empty(&tmp)) {
+		rq = list_entry(tmp.next, struct request, queuelist);
+		list_del_init(&rq->queuelist);
+		blk_mq_resubmit_io(rq);
+	}
+
 	return 0;
 }
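The blk_mq_cleanup_rq() call in blk_mq_resubmit_io() above gives the driver a
chance to release per-request resources before the never-completed request is
freed. As a hedged sketch of how a driver might hook into that (the driver
name and the bounce_buf field are hypothetical, not part of this patch):

#include <linux/blk-mq.h>
#include <linux/slab.h>

struct exampledrv_cmd {
	void *bounce_buf;		/* hypothetical per-request resource */
};

static blk_status_t exampledrv_queue_rq(struct blk_mq_hw_ctx *hctx,
					const struct blk_mq_queue_data *bd)
{
	/* would allocate cmd->bounce_buf and hand the request to hardware */
	return BLK_STS_OK;
}

/* Called via blk_mq_cleanup_rq() for a request that never completed. */
static void exampledrv_cleanup_rq(struct request *rq)
{
	struct exampledrv_cmd *cmd = blk_mq_rq_to_pdu(rq);

	kfree(cmd->bounce_buf);
	cmd->bounce_buf = NULL;
}

static const struct blk_mq_ops exampledrv_mq_ops = {
	.queue_rq   = exampledrv_queue_rq,
	.cleanup_rq = exampledrv_cleanup_rq,
};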
From: Ming Lei
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Ming Lei, John Garry, Bart Van Assche,
    Hannes Reinecke, Christoph Hellwig, Thomas Gleixner, Keith Busch
Subject: [PATCH V4 5/5] blk-mq: handle requests dispatched from IO scheduler in case that hctx is dead
Date: Mon, 14 Oct 2019 09:50:43 +0800
Message-Id: <20191014015043.25029-6-ming.lei@redhat.com>
In-Reply-To: <20191014015043.25029-1-ming.lei@redhat.com>
References: <20191014015043.25029-1-ming.lei@redhat.com>

If the hctx becomes dead, all in-queue IO requests aimed at this hctx have
to be re-submitted, so also cover the requests queued in the scheduler
queue.

Cc: John Garry
Cc: Bart Van Assche
Cc: Hannes Reinecke
Cc: Christoph Hellwig
Cc: Thomas Gleixner
Cc: Keith Busch
Reviewed-by: Hannes Reinecke
Signed-off-by: Ming Lei
---
 block/blk-mq.c | 30 +++++++++++++++++++++++++-----
 1 file changed, 25 insertions(+), 5 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 17f0a9ef32a8..06081966549f 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2305,6 +2305,7 @@ static int blk_mq_hctx_notify_dead(unsigned int cpu, struct hlist_node *node)
 	enum hctx_type type;
 	bool hctx_dead;
 	struct request *rq;
+	struct elevator_queue *e;
 
 	hctx = hlist_entry_safe(node, struct blk_mq_hw_ctx, cpuhp_dead);
 	ctx = __blk_mq_get_ctx(hctx->queue, cpu);
@@ -2315,12 +2316,31 @@ static int blk_mq_hctx_notify_dead(unsigned int cpu, struct hlist_node *node)
 	hctx_dead = cpumask_first_and(hctx->cpumask, cpu_online_mask) >=
 		nr_cpu_ids;
 
-	spin_lock(&ctx->lock);
-	if (!list_empty(&ctx->rq_lists[type])) {
-		list_splice_init(&ctx->rq_lists[type], &tmp);
-		blk_mq_hctx_clear_pending(hctx, ctx);
+	e = hctx->queue->elevator;
+	if (!e) {
+		spin_lock(&ctx->lock);
+		if (!list_empty(&ctx->rq_lists[type])) {
+			list_splice_init(&ctx->rq_lists[type], &tmp);
+			blk_mq_hctx_clear_pending(hctx, ctx);
+		}
+		spin_unlock(&ctx->lock);
+	} else if (hctx_dead) {
+		LIST_HEAD(sched_tmp);
+
+		while ((rq = e->type->ops.dispatch_request(hctx))) {
+			if (rq->mq_hctx != hctx)
+				list_add(&rq->queuelist, &sched_tmp);
+			else
+				list_add(&rq->queuelist, &tmp);
+		}
+
+		while (!list_empty(&sched_tmp)) {
+			rq = list_entry(sched_tmp.next, struct request,
+					queuelist);
+			list_del_init(&rq->queuelist);
+			blk_mq_sched_insert_request(rq, true, true, true);
+		}
 	}
-	spin_unlock(&ctx->lock);
 
 	if (!hctx_dead) {
 		if (list_empty(&tmp))
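To restate the scheduler-drain idea applied in the hunk above as a standalone
sketch (hedged; the helper name is made up and this is not the patch itself):
when the hctx is dead, every request the IO scheduler still holds is pulled
out via ->dispatch_request(); requests that map to a still-live hctx go back
through the scheduler, the rest are collected for bio re-submission.

/* Block-layer internal, like the patch: uses block/blk-mq-sched.h helpers. */
static void example_drain_sched_requests(struct blk_mq_hw_ctx *hctx,
					 struct list_head *resubmit_list)
{
	struct elevator_queue *e = hctx->queue->elevator;
	LIST_HEAD(requeue_list);
	struct request *rq;

	/* empty the scheduler first, then decide what to do with each request */
	while ((rq = e->type->ops.dispatch_request(hctx))) {
		if (rq->mq_hctx != hctx)
			list_add(&rq->queuelist, &requeue_list);
		else
			list_add(&rq->queuelist, resubmit_list);
	}

	/* requests meant for a still-live hctx go back through the scheduler */
	while (!list_empty(&requeue_list)) {
		rq = list_first_entry(&requeue_list, struct request, queuelist);
		list_del_init(&rq->queuelist);
		blk_mq_sched_insert_request(rq, true, true, true);
	}
}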