From patchwork Mon Jan 6 10:06:18 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13927145
From: Christoph Hellwig
To: Jens Axboe
Cc: Damien Le Moal, Ming Lei, Nilay Shroff,
	linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
	nbd@other.debian.org, virtualization@lists.linux.dev,
	linux-scsi@vger.kernel.org, usb-storage@lists.one-eyed-alien.net
Subject: [PATCH 05/10] block: don't update BLK_FEAT_POLL in __blk_mq_update_nr_hw_queues
Date: Mon, 6 Jan 2025 11:06:18 +0100
Message-ID: <20250106100645.850445-6-hch@lst.de>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20250106100645.850445-1-hch@lst.de>
References: <20250106100645.850445-1-hch@lst.de>
X-Mailing-List: linux-block@vger.kernel.org

When __blk_mq_update_nr_hw_queues changes the number of tag sets, it
might have to disable poll queues.  Currently it does so by adjusting
the BLK_FEAT_POLL flag, which is a bit against the intent of features
that describe hardware / driver capabilities, but more importantly
causes nasty lock order problems between the queue freeze that is
broadly held while updating the number of hardware queues and the
limits lock.

Fix this by leaving BLK_FEAT_POLL alone, and instead checking the
number of queue maps and poll queues in the bio submission and poll
handlers.  While this adds extra work to the fast path, the variables
are in cache lines used by these operations anyway, so it should be
cheap enough.

Signed-off-by: Christoph Hellwig
---
 block/blk-core.c | 14 +++++++++++---
 block/blk-mq.c   | 19 +------------------
 block/blk-mq.h   |  6 ++++++
 3 files changed, 18 insertions(+), 21 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 666efe8fa202..483c14a50d9f 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -753,6 +753,15 @@ static blk_status_t blk_validate_atomic_write_op_size(struct request_queue *q,
 	return BLK_STS_OK;
 }
 
+static bool bdev_can_poll(struct block_device *bdev)
+{
+	struct request_queue *q = bdev_get_queue(bdev);
+
+	if (queue_is_mq(q))
+		return blk_mq_can_poll(q->tag_set);
+	return q->limits.features & BLK_FEAT_POLL;
+}
+
 /**
  * submit_bio_noacct - re-submit a bio to the block device layer for I/O
  * @bio:  The bio describing the location in memory and on the device.
@@ -805,8 +814,7 @@ void submit_bio_noacct(struct bio *bio)
 		}
 	}
 
-	if (!(q->limits.features & BLK_FEAT_POLL) &&
-			(bio->bi_opf & REQ_POLLED)) {
+	if ((bio->bi_opf & REQ_POLLED) && !bdev_can_poll(bdev)) {
 		bio_clear_polled(bio);
 		goto not_supported;
 	}
@@ -935,7 +943,7 @@ int bio_poll(struct bio *bio, struct io_comp_batch *iob, unsigned int flags)
 		return 0;
 
 	q = bdev_get_queue(bdev);
-	if (cookie == BLK_QC_T_NONE || !(q->limits.features & BLK_FEAT_POLL))
+	if (cookie == BLK_QC_T_NONE || !bdev_can_poll(bdev))
 		return 0;
 
 	blk_flush_plug(current->plug, false);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 17f10683d640..0a7f059735fa 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -4321,12 +4321,6 @@ void blk_mq_release(struct request_queue *q)
 	blk_mq_sysfs_deinit(q);
 }
 
-static bool blk_mq_can_poll(struct blk_mq_tag_set *set)
-{
-	return set->nr_maps > HCTX_TYPE_POLL &&
-		set->map[HCTX_TYPE_POLL].nr_queues;
-}
-
 struct request_queue *blk_mq_alloc_queue(struct blk_mq_tag_set *set,
 		struct queue_limits *lim, void *queuedata)
 {
@@ -4336,9 +4330,7 @@ struct request_queue *blk_mq_alloc_queue(struct blk_mq_tag_set *set,
 
 	if (!lim)
 		lim = &default_lim;
-	lim->features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
-	if (blk_mq_can_poll(set))
-		lim->features |= BLK_FEAT_POLL;
+	lim->features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT | BLK_FEAT_POLL;
 
 	q = blk_alloc_queue(lim, set->numa_node);
 	if (IS_ERR(q))
@@ -5025,8 +5017,6 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
 fallback:
 	blk_mq_update_queue_map(set);
 	list_for_each_entry(q, &set->tag_list, tag_set_list) {
-		struct queue_limits lim;
-
 		blk_mq_realloc_hw_ctxs(set, q);
 
 		if (q->nr_hw_queues != set->nr_hw_queues) {
@@ -5040,13 +5030,6 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
 			set->nr_hw_queues = prev_nr_hw_queues;
 			goto fallback;
 		}
-		lim = queue_limits_start_update(q);
-		if (blk_mq_can_poll(set))
-			lim.features |= BLK_FEAT_POLL;
-		else
-			lim.features &= ~BLK_FEAT_POLL;
-		if (queue_limits_commit_update(q, &lim) < 0)
-			pr_warn("updating the poll flag failed\n");
 		blk_mq_map_swqueue(q);
 	}
 
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 89a20fffa4b1..ecd7bd7ec609 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -111,6 +111,12 @@ static inline struct blk_mq_hw_ctx *blk_mq_map_queue(struct request_queue *q,
 	return ctx->hctxs[blk_mq_get_hctx_type(opf)];
 }
 
+static inline bool blk_mq_can_poll(struct blk_mq_tag_set *set)
+{
+	return set->nr_maps > HCTX_TYPE_POLL &&
+		set->map[HCTX_TYPE_POLL].nr_queues;
+}
+
 /*
  * sysfs helpers
  */
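
For readers unfamiliar with how a blk-mq driver ends up with poll queues, the
sketch below shows the tag_set state that the blk_mq_can_poll() helper moved
above tests at submission time.  It is a minimal illustration, not part of the
patch: the function name example_tag_set_polls and its nr_poll_queues parameter
are made up for this sketch, while struct blk_mq_tag_set, HCTX_TYPE_POLL and
HCTX_MAX_TYPES are the existing kernel definitions.

#include <linux/blk-mq.h>

/*
 * Illustrative sketch only: a driver that supports polled I/O registers a
 * HCTX_TYPE_POLL queue map with a non-zero queue count in its tag_set.
 * After this patch that state, rather than BLK_FEAT_POLL in queue_limits,
 * decides whether REQ_POLLED bios are accepted by submit_bio_noacct() and
 * serviced by bio_poll().
 */
static bool example_tag_set_polls(struct blk_mq_tag_set *set,
				  unsigned int nr_poll_queues)
{
	/* hypothetical driver setup: default, read and poll maps */
	set->nr_maps = HCTX_MAX_TYPES;
	set->map[HCTX_TYPE_POLL].nr_queues = nr_poll_queues;

	/* the same condition blk_mq_can_poll(set) evaluates */
	return set->nr_maps > HCTX_TYPE_POLL &&
		set->map[HCTX_TYPE_POLL].nr_queues;
}

Because the check reads the tag_set directly, the answer automatically tracks
__blk_mq_update_nr_hw_queues() dropping the poll map, without having to touch
queue_limits while the queues are frozen.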