From patchwork Tue Dec 21 14:14:57 2021
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 12689679
From: Ming Lei
To: Jens Axboe, Mike Snitzer
Cc: linux-block@vger.kernel.org, dm-devel@redhat.com, Ming Lei
Date: Tue, 21 Dec 2021 22:14:57 +0800
Message-Id: <20211221141459.1368176-2-ming.lei@redhat.com>
In-Reply-To: <20211221141459.1368176-1-ming.lei@redhat.com>
References: <20211221141459.1368176-1-ming.lei@redhat.com>
List-Id: device-mapper development
Subject: [dm-devel] [PATCH 1/3] block: split having srcu from queue blocking

Today the queue flag QUEUE_FLAG_HAS_SRCU is reused for two distinct
things: that an SRCU instance is allocated inside the queue, and that a
blocking ->queue_rq (BLK_MQ_F_BLOCKING) has to be handled. So far
conflating the two has worked as expected.
dm-rq needs to set BLK_MQ_F_BLOCKING if any underlying queue is marked
BLK_MQ_F_BLOCKING. But the dm queue is allocated before the tagset, so
one workable approach is to always allocate SRCU for the dm queue, set
BLK_MQ_F_BLOCKING on the tagset when required, and at that point mark
the request queue as supporting a blocking ->queue_rq.

So add a new flag, QUEUE_FLAG_BLOCKING, which only means that
->queue_rq may block, and use a private field to record whether the
request queue has an allocated SRCU instance.

Signed-off-by: Ming Lei
Reviewed-by: Jeff Moyer
---
 block/blk-core.c       | 2 +-
 block/blk-mq.c         | 6 +++---
 block/blk-mq.h         | 2 +-
 block/blk-sysfs.c      | 2 +-
 include/linux/blkdev.h | 5 +++--
 5 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 10619fd83c1b..7ba806a4e779 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -449,7 +449,7 @@ struct request_queue *blk_alloc_queue(int node_id, bool alloc_srcu)
 		return NULL;
 
 	if (alloc_srcu) {
-		blk_queue_flag_set(QUEUE_FLAG_HAS_SRCU, q);
+		q->has_srcu = true;
 		if (init_srcu_struct(q->srcu) != 0)
 			goto fail_q;
 	}
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 0d7c9d3e0329..1408a6b8ccdc 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -259,7 +259,7 @@ EXPORT_SYMBOL_GPL(blk_mq_quiesce_queue_nowait);
  */
 void blk_mq_wait_quiesce_done(struct request_queue *q)
 {
-	if (blk_queue_has_srcu(q))
+	if (blk_queue_blocking(q))
 		synchronize_srcu(q->srcu);
 	else
 		synchronize_rcu();
@@ -4024,8 +4024,8 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
 int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
 		struct request_queue *q)
 {
-	WARN_ON_ONCE(blk_queue_has_srcu(q) !=
-		     !!(set->flags & BLK_MQ_F_BLOCKING));
+	if (set->flags & BLK_MQ_F_BLOCKING)
+		blk_queue_flag_set(QUEUE_FLAG_BLOCKING, q);
 
 	/* mark the queue as mq asap */
 	q->mq_ops = set->ops;
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 948791ea2a3e..9601918e2034 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -377,7 +377,7 @@ static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx,
 /* run the code block in @dispatch_ops with rcu/srcu read lock held */
 #define __blk_mq_run_dispatch_ops(q, check_sleep, dispatch_ops)	\
 do {								\
-	if (!blk_queue_has_srcu(q)) {				\
+	if (!blk_queue_blocking(q)) {				\
 		rcu_read_lock();				\
 		(dispatch_ops);					\
 		rcu_read_unlock();				\
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index e20eadfcf5c8..af89fabb58e3 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -736,7 +736,7 @@ static void blk_free_queue_rcu(struct rcu_head *rcu_head)
 	struct request_queue *q = container_of(rcu_head, struct request_queue,
 					       rcu_head);
 
-	kmem_cache_free(blk_get_queue_kmem_cache(blk_queue_has_srcu(q)), q);
+	kmem_cache_free(blk_get_queue_kmem_cache(q->has_srcu), q);
 }
 
 /* Unconfigure the I/O scheduler and dissociate from the cgroup controller. */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index c80cfaefc0a8..d84abdb294c4 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -365,6 +365,7 @@ struct request_queue {
 #endif
 
 	bool			mq_sysfs_init_done;
+	bool			has_srcu;
 
 #define BLK_MAX_WRITE_HINTS	5
 	u64			write_hints[BLK_MAX_WRITE_HINTS];
@@ -385,7 +386,7 @@ struct request_queue {
 /* Keep blk_queue_flag_name[] in sync with the definitions below */
 #define QUEUE_FLAG_STOPPED	0	/* queue is stopped */
 #define QUEUE_FLAG_DYING	1	/* queue being torn down */
-#define QUEUE_FLAG_HAS_SRCU	2	/* SRCU is allocated */
+#define QUEUE_FLAG_BLOCKING	2	/* ->queue_rq may block */
 #define QUEUE_FLAG_NOMERGES	3	/* disable merge attempts */
 #define QUEUE_FLAG_SAME_COMP	4	/* complete on same CPU-group */
 #define QUEUE_FLAG_FAIL_IO	5	/* fake timeout */
@@ -423,7 +424,7 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 #define blk_queue_stopped(q)	test_bit(QUEUE_FLAG_STOPPED, &(q)->queue_flags)
 #define blk_queue_dying(q)	test_bit(QUEUE_FLAG_DYING, &(q)->queue_flags)
-#define blk_queue_has_srcu(q)	test_bit(QUEUE_FLAG_HAS_SRCU, &(q)->queue_flags)
+#define blk_queue_blocking(q)	test_bit(QUEUE_FLAG_BLOCKING, &(q)->queue_flags)
 #define blk_queue_dead(q)	test_bit(QUEUE_FLAG_DEAD, &(q)->queue_flags)
 #define blk_queue_init_done(q)	test_bit(QUEUE_FLAG_INIT_DONE, &(q)->queue_flags)
 #define blk_queue_nomerges(q)	test_bit(QUEUE_FLAG_NOMERGES, &(q)->queue_flags)

From patchwork Tue Dec 21 14:14:58 2021
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 12689681
From: Ming Lei
To: Jens Axboe, Mike Snitzer
Cc: linux-block@vger.kernel.org, dm-devel@redhat.com, Ming Lei
Date: Tue, 21 Dec 2021 22:14:58 +0800
Message-Id: <20211221141459.1368176-3-ming.lei@redhat.com>
In-Reply-To: <20211221141459.1368176-1-ming.lei@redhat.com>
References: <20211221141459.1368176-1-ming.lei@redhat.com>
List-Id: device-mapper development
Subject: [dm-devel] [PATCH 2/3] block: add blk_alloc_disk_srcu
Add blk_alloc_disk_srcu() so that an SRCU instance can be allocated
inside the request queue, for supporting a blocking ->queue_rq().
dm-rq needs this API.

Signed-off-by: Ming Lei
Reviewed-by: Jeff Moyer
---
 block/genhd.c         |  5 +++--
 include/linux/genhd.h | 12 ++++++++----
 2 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index 3c139a1b6f04..d21786fbb7bb 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -1333,12 +1333,13 @@ struct gendisk *__alloc_disk_node(struct request_queue *q, int node_id,
 }
 EXPORT_SYMBOL(__alloc_disk_node);
 
-struct gendisk *__blk_alloc_disk(int node, struct lock_class_key *lkclass)
+struct gendisk *__blk_alloc_disk(int node, bool alloc_srcu,
+		struct lock_class_key *lkclass)
 {
 	struct request_queue *q;
 	struct gendisk *disk;
 
-	q = blk_alloc_queue(node, false);
+	q = blk_alloc_queue(node, alloc_srcu);
 	if (!q)
 		return NULL;
 
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 6906a45bc761..20259340b962 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -227,23 +227,27 @@ void blk_drop_partitions(struct gendisk *disk);
 struct gendisk *__alloc_disk_node(struct request_queue *q, int node_id,
 		struct lock_class_key *lkclass);
 extern void put_disk(struct gendisk *disk);
-struct gendisk *__blk_alloc_disk(int node, struct lock_class_key *lkclass);
+struct gendisk *__blk_alloc_disk(int node, bool alloc_srcu,
+		struct lock_class_key *lkclass);
 
 /**
- * blk_alloc_disk - allocate a gendisk structure
+ * __alloc_disk - allocate a gendisk structure
  * @node_id: numa node to allocate on
+ * @alloc_srcu: allocate srcu instance for supporting blocking ->queue_rq
  *
  * Allocate and pre-initialize a gendisk structure for use with BIO based
  * drivers.
  *
 * Context: can sleep
 */
-#define blk_alloc_disk(node_id)					\
+#define __alloc_disk(node_id, alloc_srcu)			\
 ({								\
 	static struct lock_class_key __key;			\
								\
-	__blk_alloc_disk(node_id, &__key);			\
+	__blk_alloc_disk(node_id, alloc_srcu, &__key);		\
 })
+#define blk_alloc_disk(node_id) __alloc_disk(node_id, false)
+#define blk_alloc_disk_srcu(node_id) __alloc_disk(node_id, true)
 
 void blk_cleanup_disk(struct gendisk *disk);
 
 int __register_blkdev(unsigned int major, const char *name,

From patchwork Tue Dec 21 14:14:59 2021
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 12689683
From: Ming Lei
To: Jens Axboe, Mike Snitzer
Cc: linux-block@vger.kernel.org, dm-devel@redhat.com, Ming Lei
Date: Tue, 21 Dec 2021 22:14:59 +0800
Message-Id: <20211221141459.1368176-4-ming.lei@redhat.com>
In-Reply-To: <20211221141459.1368176-1-ming.lei@redhat.com>
References: <20211221141459.1368176-1-ming.lei@redhat.com>
List-Id: device-mapper development
Subject: [dm-devel] [PATCH 3/3] dm: mark dm queue as blocking if any underlying is blocking
The dm request-based driver doesn't set BLK_MQ_F_BLOCKING, so
dm_queue_rq() is not supposed to sleep. However, dm_queue_rq() uses
blk_insert_cloned_request() to queue the underlying request, and the
underlying queue may be marked BLK_MQ_F_BLOCKING, so
blk_insert_cloned_request() may end up blocking the current context and
triggering an RCU warning.

Fix the issue by marking the dm request-based queue as BLK_MQ_F_BLOCKING
if any underlying queue is marked BLK_MQ_F_BLOCKING; this also requires
allocating SRCU beforehand.

Signed-off-by: Ming Lei
Reviewed-by: Mike Snitzer
Acked-by: Jeff Moyer
---
 drivers/md/dm-rq.c    |  5 ++++-
 drivers/md/dm-rq.h    |  3 ++-
 drivers/md/dm-table.c | 14 ++++++++++++++
 drivers/md/dm.c       |  5 +++--
 drivers/md/dm.h       |  1 +
 5 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
index 579ab6183d4d..2297d37c62a9 100644
--- a/drivers/md/dm-rq.c
+++ b/drivers/md/dm-rq.c
@@ -535,7 +535,8 @@ static const struct blk_mq_ops dm_mq_ops = {
 	.init_request = dm_mq_init_request,
 };
 
-int dm_mq_init_request_queue(struct mapped_device *md, struct dm_table *t)
+int dm_mq_init_request_queue(struct mapped_device *md, struct dm_table *t,
+		bool blocking)
 {
 	struct dm_target *immutable_tgt;
 	int err;
@@ -550,6 +551,8 @@ int dm_mq_init_request_queue(struct mapped_device *md, struct dm_table *t)
 	md->tag_set->flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_STACKING;
 	md->tag_set->nr_hw_queues = dm_get_blk_mq_nr_hw_queues();
 	md->tag_set->driver_data = md;
+	if (blocking)
+		md->tag_set->flags |= BLK_MQ_F_BLOCKING;
 	md->tag_set->cmd_size = sizeof(struct dm_rq_target_io);
 	immutable_tgt = dm_table_get_immutable_target(t);
diff --git a/drivers/md/dm-rq.h b/drivers/md/dm-rq.h
index 1eea0da641db..5f3729f277d7 100644
--- a/drivers/md/dm-rq.h
+++ b/drivers/md/dm-rq.h
@@ -30,7 +30,8 @@ struct dm_rq_clone_bio_info {
 	struct bio clone;
 };
 
-int dm_mq_init_request_queue(struct mapped_device *md, struct dm_table *t);
+int dm_mq_init_request_queue(struct mapped_device *md, struct dm_table *t,
+		bool blocking);
 void dm_mq_cleanup_mapped_device(struct mapped_device *md);
 
 void dm_start_queue(struct request_queue *q);
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index aa173f5bdc3d..e4bdd4f757a3 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1875,6 +1875,20 @@ static bool dm_table_supports_write_zeroes(struct dm_table *t)
 	return true;
 }
 
+/* If the device can block inside ->queue_rq */
+static int device_is_io_blocking(struct dm_target *ti, struct dm_dev *dev,
+		sector_t start, sector_t len, void *data)
+{
+	struct request_queue *q = bdev_get_queue(dev->bdev);
+
+	return blk_queue_blocking(q);
+}
+
+bool dm_table_has_blocking_dev(struct dm_table *t)
+{
+	return dm_table_any_dev_attr(t, device_is_io_blocking, NULL);
+}
+
 static int device_not_nowait_capable(struct dm_target *ti, struct dm_dev *dev,
 		sector_t start, sector_t len, void *data)
 {
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 280918cdcabd..2f72877752dd 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1761,7 +1761,7 @@ static struct mapped_device *alloc_dev(int minor)
 	 * established. If request-based table is loaded: blk-mq will
	 * override accordingly.
	 */
-	md->disk = blk_alloc_disk(md->numa_node_id);
+	md->disk = blk_alloc_disk_srcu(md->numa_node_id);
 	if (!md->disk)
 		goto bad;
 	md->queue = md->disk->queue;
@@ -2046,7 +2046,8 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t)
 	switch (type) {
 	case DM_TYPE_REQUEST_BASED:
 		md->disk->fops = &dm_rq_blk_dops;
-		r = dm_mq_init_request_queue(md, t);
+		r = dm_mq_init_request_queue(md, t,
+				dm_table_has_blocking_dev(t));
 		if (r) {
 			DMERR("Cannot initialize queue for request-based dm mapped device");
 			return r;
diff --git a/drivers/md/dm.h b/drivers/md/dm.h
index 742d9c80efe1..f7f92b272cce 100644
--- a/drivers/md/dm.h
+++ b/drivers/md/dm.h
@@ -60,6 +60,7 @@ int dm_calculate_queue_limits(struct dm_table *table,
 			      struct queue_limits *limits);
 int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 			      struct queue_limits *limits);
+bool dm_table_has_blocking_dev(struct dm_table *t);
 struct list_head *dm_table_get_devices(struct dm_table *t);
 void dm_table_presuspend_targets(struct dm_table *t);
 void dm_table_presuspend_undo_targets(struct dm_table *t);