From patchwork Wed Jul 6 07:03:35 2022
From: Christoph Hellwig
To: Jens Axboe, Damien Le Moal
Cc: dm-devel@redhat.com, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
    Chaitanya Kulkarni, Johannes Thumshirn
Subject: [PATCH 01/16] block: remove a superfluous ifdef in blkdev.h
Date: Wed, 6 Jul 2022 09:03:35 +0200
Message-Id: <20220706070350.1703384-2-hch@lst.de>
In-Reply-To: <20220706070350.1703384-1-hch@lst.de>
References: <20220706070350.1703384-1-hch@lst.de>

It doesn't hurt to always have the blk_zone_cond_str prototype, and the
two inlines can also be defined unconditionally.
Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Damien Le Moal
Reviewed-by: Johannes Thumshirn
---
 include/linux/blkdev.h | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index b9a94c53c6cd3..270cd0c552924 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -899,8 +899,6 @@ static inline struct request_queue *bdev_get_queue(struct block_device *bdev)
         return bdev->bd_queue;  /* this is never NULL */
 }
 
-#ifdef CONFIG_BLK_DEV_ZONED
-
 /* Helper to convert BLK_ZONE_ZONE_XXX to its string format XXX */
 const char *blk_zone_cond_str(enum blk_zone_cond zone_cond);
 
@@ -915,7 +913,6 @@ static inline unsigned int bio_zone_is_seq(struct bio *bio)
         return blk_queue_zone_is_seq(bdev_get_queue(bio->bi_bdev),
                                      bio->bi_iter.bi_sector);
 }
-#endif /* CONFIG_BLK_DEV_ZONED */
 
 /*
  * Return how much of the chunk is left to be used for I/O at a given offset.
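
For readers following along, a minimal sketch (not part of the patch) of what the
change enables at a call site: with the prototype and the two inlines visible
regardless of CONFIG_BLK_DEV_ZONED, a hypothetical caller no longer needs its own
ifdef. The function name below is invented for the example.

/* Hypothetical caller, for illustration only -- not from this series. */
static bool example_bio_needs_seq_handling(struct bio *bio)
{
        /*
         * No #ifdef CONFIG_BLK_DEV_ZONED is needed at the call site; on
         * non-zoned configurations the underlying queue helper is expected
         * to report the bio as not sequential.
         */
        return op_is_write(bio_op(bio)) && bio_zone_is_seq(bio);
}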

From patchwork Wed Jul 6 07:03:36 2022
From: Christoph Hellwig
To: Jens Axboe, Damien Le Moal
Cc: dm-devel@redhat.com, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
    Chaitanya Kulkarni, Johannes Thumshirn
Subject: [PATCH 02/16] block: call blk_queue_free_zone_bitmaps from disk_release
Date: Wed, 6 Jul 2022 09:03:36 +0200
Message-Id: <20220706070350.1703384-3-hch@lst.de>
In-Reply-To: <20220706070350.1703384-1-hch@lst.de>
References: <20220706070350.1703384-1-hch@lst.de>

The zone bitmaps are only used for non-passthrough I/O, so free them as
soon as the disk is released.

Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Damien Le Moal
Reviewed-by: Johannes Thumshirn
---
 block/blk-sysfs.c | 2 --
 block/genhd.c     | 1 +
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 58cb9cb9f48cd..7590810cf13fc 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -776,8 +776,6 @@ static void blk_release_queue(struct kobject *kobj)
         blk_free_queue_stats(q->stats);
         kfree(q->poll_stat);
 
-        blk_queue_free_zone_bitmaps(q);
-
         if (queue_is_mq(q))
                 blk_mq_release(q);
 
diff --git a/block/genhd.c b/block/genhd.c
index b1fb7e058b9cc..d0bdeb93e922c 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -1165,6 +1165,7 @@ static void disk_release(struct device *dev)
         disk_release_events(disk);
         kfree(disk->random);
+        blk_queue_free_zone_bitmaps(disk->queue);
         xa_destroy(&disk->part_tbl);
         disk->queue->disk = NULL;

From patchwork Wed Jul 6 07:03:37 2022
From: Christoph Hellwig
To: Jens Axboe, Damien Le Moal
Cc: dm-devel@redhat.com, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
    Chaitanya Kulkarni, Johannes Thumshirn
Subject: [PATCH 03/16] block: use bdev_is_zoned instead of open coding it
Date: Wed, 6 Jul 2022 09:03:37 +0200
Message-Id: <20220706070350.1703384-4-hch@lst.de>
In-Reply-To: <20220706070350.1703384-1-hch@lst.de>
References: <20220706070350.1703384-1-hch@lst.de>

Use bdev_is_zoned in all places where a block_device is available instead
of open coding it.

Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Damien Le Moal
Reviewed-by: Johannes Thumshirn
---
 block/bio.c           | 2 +-
 block/blk-core.c      | 6 +++---
 block/blk-mq.h        | 2 +-
 block/blk-zoned.c     | 9 ++++-----
 drivers/md/dm-table.c | 2 +-
 drivers/md/dm-zone.c  | 2 +-
 drivers/md/dm.c       | 2 +-
 7 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 933ea32109547..888ee81ea3034 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1033,7 +1033,7 @@ int bio_add_zone_append_page(struct bio *bio, struct page *page,
         if (WARN_ON_ONCE(bio_op(bio) != REQ_OP_ZONE_APPEND))
                 return 0;
 
-        if (WARN_ON_ONCE(!blk_queue_is_zoned(q)))
+        if (WARN_ON_ONCE(!bdev_is_zoned(bio->bi_bdev)))
                 return 0;
 
         return bio_add_hw_page(q, bio, page, len, offset,
diff --git a/block/blk-core.c b/block/blk-core.c
index 5ad7bd93077c8..6bcca0b686de4 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -569,7 +569,7 @@ static inline blk_status_t blk_check_zone_append(struct request_queue *q,
         int nr_sectors = bio_sectors(bio);
 
         /* Only applicable to zoned block devices */
-        if (!blk_queue_is_zoned(q))
+        if (!bdev_is_zoned(bio->bi_bdev))
                 return BLK_STS_NOTSUPP;
 
         /* The bio sector must point to the start of a sequential zone */
@@ -775,11 +775,11 @@ void submit_bio_noacct(struct bio *bio)
         case REQ_OP_ZONE_OPEN:
         case REQ_OP_ZONE_CLOSE:
         case REQ_OP_ZONE_FINISH:
-                if (!blk_queue_is_zoned(q))
+                if (!bdev_is_zoned(bio->bi_bdev))
                         goto not_supported;
                 break;
         case REQ_OP_ZONE_RESET_ALL:
-                if (!blk_queue_is_zoned(q) || !blk_queue_zone_resetall(q))
+                if (!bdev_is_zoned(bio->bi_bdev) || !blk_queue_zone_resetall(q))
                         goto not_supported;
                 break;
         case REQ_OP_WRITE_ZEROES:
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 54e20edf0da30..31d75a83a562d 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -317,7 +317,7 @@ static inline struct blk_plug *blk_mq_plug(struct request_queue *q,
          * For regular block devices or read operations, use the context plug
          * which may be NULL if blk_start_plug() was not executed.
          */
-        if (!blk_queue_is_zoned(q) || !op_is_write(bio_op(bio)))
+        if (!bdev_is_zoned(bio->bi_bdev) || !op_is_write(bio_op(bio)))
                 return current->plug;
 
         /* Zoned block device write operation case: do not plug the BIO */
diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 38cd840d88387..90a5c9cc80ab3 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -149,8 +149,7 @@ int blkdev_report_zones(struct block_device *bdev, sector_t sector,
         struct gendisk *disk = bdev->bd_disk;
         sector_t capacity = get_capacity(disk);
 
-        if (!blk_queue_is_zoned(bdev_get_queue(bdev)) ||
-            WARN_ON_ONCE(!disk->fops->report_zones))
+        if (!bdev_is_zoned(bdev) || WARN_ON_ONCE(!disk->fops->report_zones))
                 return -EOPNOTSUPP;
 
         if (!nr_zones || sector >= capacity)
@@ -268,7 +267,7 @@ int blkdev_zone_mgmt(struct block_device *bdev, enum req_opf op,
         struct bio *bio = NULL;
         int ret = 0;
 
-        if (!blk_queue_is_zoned(q))
+        if (!bdev_is_zoned(bdev))
                 return -EOPNOTSUPP;
 
         if (bdev_read_only(bdev))
@@ -350,7 +349,7 @@ int blkdev_report_zones_ioctl(struct block_device *bdev, fmode_t mode,
         if (!q)
                 return -ENXIO;
 
-        if (!blk_queue_is_zoned(q))
+        if (!bdev_is_zoned(bdev))
                 return -ENOTTY;
 
         if (copy_from_user(&rep, argp, sizeof(struct blk_zone_report)))
@@ -408,7 +407,7 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
         if (!q)
                 return -ENXIO;
 
-        if (!blk_queue_is_zoned(q))
+        if (!bdev_is_zoned(bdev))
                 return -ENOTTY;
 
         if (!(mode & FMODE_WRITE))
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index bd539afbfe88f..b36b528e56cff 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1623,7 +1623,7 @@ static int device_not_matches_zone_sectors(struct dm_target *ti, struct dm_dev *
         struct request_queue *q = bdev_get_queue(dev->bdev);
         unsigned int *zone_sectors = data;
 
-        if (!blk_queue_is_zoned(q))
+        if (!bdev_is_zoned(dev->bdev))
                 return 0;
 
         return blk_queue_zone_sectors(q) != *zone_sectors;
diff --git a/drivers/md/dm-zone.c b/drivers/md/dm-zone.c
index 3e7b1fe1580b9..ae616b87c91ae 100644
--- a/drivers/md/dm-zone.c
+++ b/drivers/md/dm-zone.c
@@ -270,7 +270,7 @@ static int device_not_zone_append_capable(struct dm_target *ti,
                                           struct dm_dev *dev, sector_t start,
                                           sector_t len, void *data)
 {
-        return !blk_queue_is_zoned(bdev_get_queue(dev->bdev));
+        return !bdev_is_zoned(dev->bdev);
 }
 
 static bool dm_table_supports_zone_append(struct dm_table *t)
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 8872f9c636889..33d3799bb66ec 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1033,7 +1033,7 @@ static void clone_endio(struct bio *bio)
         }
 
         if (static_branch_unlikely(&zoned_enabled) &&
-            unlikely(blk_queue_is_zoned(bdev_get_queue(bio->bi_bdev))))
+            unlikely(bdev_is_zoned(bio->bi_bdev)))
                 dm_zone_endio(io, bio);
 
         if (endio) {
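
To make the conversion pattern concrete, a small before/after sketch
(illustrative only, not taken from the patch; the function names are invented):

/* Open-coded form that the patch removes ... */
static bool example_is_zoned_old(struct block_device *bdev)
{
        return blk_queue_is_zoned(bdev_get_queue(bdev));
}

/* ... and the equivalent bdev based helper used instead. */
static bool example_is_zoned_new(struct block_device *bdev)
{
        return bdev_is_zoned(bdev);
}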

From patchwork Wed Jul 6 07:03:38 2022
From: Christoph Hellwig
To: Jens Axboe, Damien Le Moal
Cc: dm-devel@redhat.com, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
    Chaitanya Kulkarni, Johannes Thumshirn
Subject: [PATCH 04/16] block: simplify blk_mq_plug
Date: Wed, 6 Jul 2022 09:03:38 +0200
Message-Id: <20220706070350.1703384-5-hch@lst.de>
In-Reply-To: <20220706070350.1703384-1-hch@lst.de>
References: <20220706070350.1703384-1-hch@lst.de>

Drop the unused q argument, and invert the check to move the exception
into a branch and the regular path as the normal return.

Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Damien Le Moal
Reviewed-by: Johannes Thumshirn
---
 block/blk-core.c  |  2 +-
 block/blk-merge.c |  2 +-
 block/blk-mq.c    |  2 +-
 block/blk-mq.h    | 18 ++++++++----------
 4 files changed, 11 insertions(+), 13 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 6bcca0b686de4..bc16e9bae2dc4 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -719,7 +719,7 @@ void submit_bio_noacct(struct bio *bio)
 
         might_sleep();
 
-        plug = blk_mq_plug(q, bio);
+        plug = blk_mq_plug(bio);
         if (plug && plug->nowait)
                 bio->bi_opf |= REQ_NOWAIT;
 
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 0f5f42ebd0bb0..5abf5aa5a5f0e 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -1051,7 +1051,7 @@ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
         struct blk_plug *plug;
         struct request *rq;
 
-        plug = blk_mq_plug(q, bio);
+        plug = blk_mq_plug(bio);
         if (!plug || rq_list_empty(plug->mq_list))
                 return false;
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 15c7c5c4ad222..dc714dff73001 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2804,7 +2804,7 @@ static void bio_set_ioprio(struct bio *bio)
 void blk_mq_submit_bio(struct bio *bio)
 {
         struct request_queue *q = bdev_get_queue(bio->bi_bdev);
-        struct blk_plug *plug = blk_mq_plug(q, bio);
+        struct blk_plug *plug = blk_mq_plug(bio);
         const int is_sync = op_is_sync(bio->bi_opf);
         struct request *rq;
         unsigned int nr_segs = 1;
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 31d75a83a562d..e694ec67d646a 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -294,7 +294,6 @@ static inline void blk_mq_clear_mq_map(struct blk_mq_queue_map *qmap)
 
 /*
  * blk_mq_plug() - Get caller context plug
- * @q: request queue
  * @bio : the bio being submitted by the caller context
  *
  * Plugging, by design, may delay the insertion of BIOs into the elevator in
@@ -305,23 +304,22 @@ static inline void blk_mq_clear_mq_map(struct blk_mq_queue_map *qmap)
  * order. While this is not a problem with regular block devices, this ordering
  * change can cause write BIO failures with zoned block devices as these
  * require sequential write patterns to zones. Prevent this from happening by
- * ignoring the plug state of a BIO issuing context if the target request queue
- * is for a zoned block device and the BIO to plug is a write operation.
+ * ignoring the plug state of a BIO issuing context if it is for a zoned block
+ * device and the BIO to plug is a write operation.
  *
  * Return current->plug if the bio can be plugged and NULL otherwise
  */
-static inline struct blk_plug *blk_mq_plug(struct request_queue *q,
-                struct bio *bio)
+static inline struct blk_plug *blk_mq_plug( struct bio *bio)
 {
+        /* Zoned block device write operation case: do not plug the BIO */
+        if (bdev_is_zoned(bio->bi_bdev) && op_is_write(bio_op(bio)))
+                return NULL;
+
         /*
          * For regular block devices or read operations, use the context plug
          * which may be NULL if blk_start_plug() was not executed.
          */
-        if (!bdev_is_zoned(bio->bi_bdev) || !op_is_write(bio_op(bio)))
-                return current->plug;
-
-        /* Zoned block device write operation case: do not plug the BIO */
-        return NULL;
+        return current->plug;
 }
 
 /* Free all requests on the list */
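
For reference, a sketch (illustrative only, loosely mirroring the
submit_bio_noacct() hunk above) of the caller-side pattern after the argument
drop; example_submit() is an invented name:

/* Illustration only: callers now pass just the bio. */
static void example_submit(struct bio *bio)
{
        struct blk_plug *plug = blk_mq_plug(bio);

        if (plug && plug->nowait)
                bio->bi_opf |= REQ_NOWAIT;
}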

From patchwork Wed Jul 6 07:03:39 2022
From: Christoph Hellwig
To: Jens Axboe, Damien Le Moal
Cc: dm-devel@redhat.com, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
    Chaitanya Kulkarni, Johannes Thumshirn
Subject: [PATCH 05/16] block: simplify blk_check_zone_append
Date: Wed, 6 Jul 2022 09:03:39 +0200
Message-Id: <20220706070350.1703384-6-hch@lst.de>
In-Reply-To: <20220706070350.1703384-1-hch@lst.de>
References: <20220706070350.1703384-1-hch@lst.de>

Use the bdev based helpers instead of open coding them.
Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Damien Le Moal
Reviewed-by: Johannes Thumshirn
---
 block/blk-core.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index bc16e9bae2dc4..b530ce7b370c4 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -565,7 +565,6 @@ static int blk_partition_remap(struct bio *bio)
 static inline blk_status_t blk_check_zone_append(struct request_queue *q,
                                                  struct bio *bio)
 {
-        sector_t pos = bio->bi_iter.bi_sector;
         int nr_sectors = bio_sectors(bio);
 
         /* Only applicable to zoned block devices */
@@ -573,8 +572,8 @@ static inline blk_status_t blk_check_zone_append(struct request_queue *q,
                 return BLK_STS_NOTSUPP;
 
         /* The bio sector must point to the start of a sequential zone */
-        if (pos & (blk_queue_zone_sectors(q) - 1) ||
-            !blk_queue_zone_is_seq(q, pos))
+        if (bio->bi_iter.bi_sector & (bdev_zone_sectors(bio->bi_bdev) - 1) ||
+            !bio_zone_is_seq(bio))
                 return BLK_STS_IOERR;
 
         /*

From patchwork Wed Jul 6 07:03:40 2022
From: Christoph Hellwig
To: Jens Axboe, Damien Le Moal
Cc: dm-devel@redhat.com, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
    Chaitanya Kulkarni, Johannes Thumshirn
Subject: [PATCH 06/16] block: pass a gendisk to blk_queue_set_zoned
Date: Wed, 6 Jul 2022 09:03:40 +0200
Message-Id: <20220706070350.1703384-7-hch@lst.de>
In-Reply-To: <20220706070350.1703384-1-hch@lst.de>
References: <20220706070350.1703384-1-hch@lst.de>

Prepare for storing the zone related field in struct gendisk instead
of struct request_queue.

Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Damien Le Moal
Reviewed-by: Johannes Thumshirn
---
 block/blk-settings.c           | 9 +++++----
 block/partitions/core.c        | 2 +-
 drivers/block/null_blk/zoned.c | 2 +-
 drivers/nvme/host/zns.c        | 2 +-
 drivers/scsi/sd.c              | 6 +++---
 drivers/scsi/sd_zbc.c          | 2 +-
 include/linux/blkdev.h         | 2 +-
 7 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 6ccceb421ed2f..35b7bba306a83 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -893,18 +893,19 @@ static bool disk_has_partitions(struct gendisk *disk)
 }
 
 /**
- * blk_queue_set_zoned - configure a disk queue zoned model.
+ * disk_set_zoned - configure the zoned model for a disk
  * @disk:       the gendisk of the queue to configure
  * @model:      the zoned model to set
  *
- * Set the zoned model of the request queue of @disk according to @model.
+ * Set the zoned model of @disk to @model.
+ *
  * When @model is BLK_ZONED_HM (host managed), this should be called only
  * if zoned block device support is enabled (CONFIG_BLK_DEV_ZONED option).
  * If @model specifies BLK_ZONED_HA (host aware), the effective model used
  * depends on CONFIG_BLK_DEV_ZONED settings and on the existence of partitions
  * on the disk.
  */
-void blk_queue_set_zoned(struct gendisk *disk, enum blk_zoned_model model)
+void disk_set_zoned(struct gendisk *disk, enum blk_zoned_model model)
 {
         struct request_queue *q = disk->queue;
 
@@ -948,7 +949,7 @@ void blk_queue_set_zoned(struct gendisk *disk, enum blk_zoned_model model)
                 blk_queue_clear_zone_settings(q);
         }
 }
-EXPORT_SYMBOL_GPL(blk_queue_set_zoned);
+EXPORT_SYMBOL_GPL(disk_set_zoned);
 
 int bdev_alignment_offset(struct block_device *bdev)
 {
diff --git a/block/partitions/core.c b/block/partitions/core.c
index 7dc487f5b03cd..1a45b1dd64918 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -330,7 +330,7 @@ static struct block_device *add_partition(struct gendisk *disk, int partno,
         case BLK_ZONED_HA:
                 pr_info("%s: disabling host aware zoned block device support due to partitions\n",
                         disk->disk_name);
-                blk_queue_set_zoned(disk, BLK_ZONED_NONE);
+                disk_set_zoned(disk, BLK_ZONED_NONE);
                 break;
         case BLK_ZONED_NONE:
                 break;
diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
index 2fdd7b20c224e..b47bbd114058d 100644
--- a/drivers/block/null_blk/zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -159,7 +159,7 @@ int null_register_zoned_dev(struct nullb *nullb)
         struct nullb_device *dev = nullb->dev;
         struct request_queue *q = nullb->q;
 
-        blk_queue_set_zoned(nullb->disk, BLK_ZONED_HM);
+        disk_set_zoned(nullb->disk, BLK_ZONED_HM);
         blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
         blk_queue_required_elevator_features(q, ELEVATOR_F_ZBD_SEQ_WRITE);
 
diff --git a/drivers/nvme/host/zns.c b/drivers/nvme/host/zns.c
index 9f81beb4df4ef..0ed15c2fd56de 100644
--- a/drivers/nvme/host/zns.c
+++ b/drivers/nvme/host/zns.c
@@ -109,7 +109,7 @@ int nvme_update_zone_info(struct nvme_ns *ns, unsigned lbaf)
                 goto free_data;
         }
 
-        blk_queue_set_zoned(ns->disk, BLK_ZONED_HM);
+        disk_set_zoned(ns->disk, BLK_ZONED_HM);
         blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
         blk_queue_max_open_zones(q, le32_to_cpu(id->mor) + 1);
         blk_queue_max_active_zones(q, le32_to_cpu(id->mar) + 1);
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index cb587e488601c..eb02d939dd448 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -2934,15 +2934,15 @@ static void sd_read_block_characteristics(struct scsi_disk *sdkp)
 
         if (sdkp->device->type == TYPE_ZBC) {
                 /* Host-managed */
-                blk_queue_set_zoned(sdkp->disk, BLK_ZONED_HM);
+                disk_set_zoned(sdkp->disk, BLK_ZONED_HM);
         } else {
                 sdkp->zoned = zoned;
                 if (sdkp->zoned == 1) {
                         /* Host-aware */
-                        blk_queue_set_zoned(sdkp->disk, BLK_ZONED_HA);
+                        disk_set_zoned(sdkp->disk, BLK_ZONED_HA);
                 } else {
                         /* Regular disk or drive managed disk */
-                        blk_queue_set_zoned(sdkp->disk, BLK_ZONED_NONE);
+                        disk_set_zoned(sdkp->disk, BLK_ZONED_NONE);
                 }
         }
 
diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
index 6acc4f406eb8c..0f5823b674685 100644
--- a/drivers/scsi/sd_zbc.c
+++ b/drivers/scsi/sd_zbc.c
@@ -929,7 +929,7 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, u8 buf[SD_BUF_SIZE])
         /*
          * This can happen for a host aware disk with partitions.
          * The block device zone model was already cleared by
-         * blk_queue_set_zoned(). Only free the scsi disk zone
+         * disk_set_zoned(). Only free the scsi disk zone
          * information and exit early.
          */
         sd_zbc_free_zone_info(sdkp);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 270cd0c552924..416faa0137820 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -291,7 +291,7 @@ struct queue_limits {
 typedef int (*report_zones_cb)(struct blk_zone *zone, unsigned int idx,
                                void *data);
 
-void blk_queue_set_zoned(struct gendisk *disk, enum blk_zoned_model model);
+void disk_set_zoned(struct gendisk *disk, enum blk_zoned_model model);
 
 #ifdef CONFIG_BLK_DEV_ZONED
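
As a usage sketch (hypothetical driver fragment, not part of the series), the
new setter takes the gendisk while queue flags are still set on the
request_queue, mirroring the null_blk and NVMe hunks above:

/* Hypothetical probe-time setup, for illustration only. */
static void example_setup_zoned_disk(struct gendisk *disk)
{
        disk_set_zoned(disk, BLK_ZONED_HM);
        blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, disk->queue);
}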

From patchwork Wed Jul 6 07:03:41 2022
From: Christoph Hellwig
To: Jens Axboe, Damien Le Moal
Cc: dm-devel@redhat.com, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
    Chaitanya Kulkarni, Johannes Thumshirn
Subject: [PATCH 07/16] block: pass a gendisk to blk_queue_clear_zone_settings
Date: Wed, 6 Jul 2022 09:03:41 +0200
Message-Id: <20220706070350.1703384-8-hch@lst.de>
In-Reply-To: <20220706070350.1703384-1-hch@lst.de>
References: <20220706070350.1703384-1-hch@lst.de>

Switch to a gendisk based API in preparation for moving all zone related
fields from the request_queue to the gendisk.

Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Damien Le Moal
Reviewed-by: Johannes Thumshirn
---
 block/blk-settings.c | 2 +-
 block/blk-zoned.c    | 4 +++-
 block/blk.h          | 4 ++--
 3 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 35b7bba306a83..8bb9eef5310eb 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -946,7 +946,7 @@ void disk_set_zoned(struct gendisk *disk, enum blk_zoned_model model)
                 blk_queue_zone_write_granularity(q,
                                                  queue_logical_block_size(q));
         } else {
-                blk_queue_clear_zone_settings(q);
+                disk_clear_zone_settings(disk);
         }
 }
 EXPORT_SYMBOL_GPL(disk_set_zoned);
diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 90a5c9cc80ab3..82a4fa89678ce 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -622,8 +622,10 @@ int blk_revalidate_disk_zones(struct gendisk *disk,
 }
 EXPORT_SYMBOL_GPL(blk_revalidate_disk_zones);
 
-void blk_queue_clear_zone_settings(struct request_queue *q)
+void disk_clear_zone_settings(struct gendisk *disk)
 {
+        struct request_queue *q = disk->queue;
+
         blk_mq_freeze_queue(q);
 
         blk_queue_free_zone_bitmaps(q);
diff --git a/block/blk.h b/block/blk.h
index 58ad50cacd2d5..7482a3a441dd9 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -406,10 +406,10 @@ static inline int blk_iolatency_init(struct request_queue *q) { return 0; }
 #ifdef CONFIG_BLK_DEV_ZONED
 void blk_queue_free_zone_bitmaps(struct request_queue *q);
-void blk_queue_clear_zone_settings(struct request_queue *q);
+void disk_clear_zone_settings(struct gendisk *disk);
 #else
 static inline void blk_queue_free_zone_bitmaps(struct request_queue *q) {}
-static inline void blk_queue_clear_zone_settings(struct request_queue *q) {}
+static inline void disk_clear_zone_settings(struct gendisk *disk) {}
 #endif
 
 int blk_alloc_ext_minor(void);

From patchwork Wed Jul 6 07:03:42 2022
From: Christoph Hellwig
To: Jens Axboe, Damien Le Moal
Cc: dm-devel@redhat.com, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
    Chaitanya Kulkarni, Johannes Thumshirn
Subject: [PATCH 08/16] block: pass a gendisk to blk_queue_free_zone_bitmaps
Date: Wed, 6 Jul 2022 09:03:42 +0200
Message-Id: <20220706070350.1703384-9-hch@lst.de>
In-Reply-To: <20220706070350.1703384-1-hch@lst.de>
References: <20220706070350.1703384-1-hch@lst.de>

Switch to a gendisk based API in preparation for moving all zone related
fields from the request_queue to the gendisk.
Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Damien Le Moal
Reviewed-by: Johannes Thumshirn
---
 block/blk-zoned.c | 8 +++++---
 block/blk.h       | 4 ++--
 block/genhd.c     | 2 +-
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 82a4fa89678ce..0d431394cf90c 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -449,8 +449,10 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
         return ret;
 }
 
-void blk_queue_free_zone_bitmaps(struct request_queue *q)
+void disk_free_zone_bitmaps(struct gendisk *disk)
 {
+        struct request_queue *q = disk->queue;
+
         kfree(q->conv_zones_bitmap);
         q->conv_zones_bitmap = NULL;
         kfree(q->seq_zones_wlock);
@@ -612,7 +614,7 @@ int blk_revalidate_disk_zones(struct gendisk *disk,
                 ret = 0;
         } else {
                 pr_warn("%s: failed to revalidate zones\n", disk->disk_name);
-                blk_queue_free_zone_bitmaps(q);
+                disk_free_zone_bitmaps(disk);
         }
 
         blk_mq_unfreeze_queue(q);
@@ -628,7 +630,7 @@ void disk_clear_zone_settings(struct gendisk *disk)
 
         blk_mq_freeze_queue(q);
 
-        blk_queue_free_zone_bitmaps(q);
+        disk_free_zone_bitmaps(disk);
         blk_queue_flag_clear(QUEUE_FLAG_ZONE_RESETALL, q);
         q->required_elevator_features &= ~ELEVATOR_F_ZBD_SEQ_WRITE;
         q->nr_zones = 0;
diff --git a/block/blk.h b/block/blk.h
index 7482a3a441dd9..b71e22c97d773 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -405,10 +405,10 @@ static inline int blk_iolatency_init(struct request_queue *q) { return 0; }
 #endif
 
 #ifdef CONFIG_BLK_DEV_ZONED
-void blk_queue_free_zone_bitmaps(struct request_queue *q);
+void disk_free_zone_bitmaps(struct gendisk *disk);
 void disk_clear_zone_settings(struct gendisk *disk);
 #else
-static inline void blk_queue_free_zone_bitmaps(struct request_queue *q) {}
+static inline void disk_free_zone_bitmaps(struct gendisk *disk) {}
 static inline void disk_clear_zone_settings(struct gendisk *disk) {}
 #endif
 
diff --git a/block/genhd.c b/block/genhd.c
index d0bdeb93e922c..9d30f159c59ac 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -1165,7 +1165,7 @@ static void disk_release(struct device *dev)
         disk_release_events(disk);
         kfree(disk->random);
-        blk_queue_free_zone_bitmaps(disk->queue);
+        disk_free_zone_bitmaps(disk);
         xa_destroy(&disk->part_tbl);
         disk->queue->disk = NULL;

From patchwork Wed Jul 6 07:03:43 2022
From: Christoph Hellwig
To: Jens Axboe, Damien Le Moal
Cc: dm-devel@redhat.com, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
    Chaitanya Kulkarni, Johannes Thumshirn
Subject: [PATCH 09/16] block: remove queue_max_open_zones and queue_max_active_zones
Date: Wed, 6 Jul 2022 09:03:43 +0200
Message-Id: <20220706070350.1703384-10-hch@lst.de>
In-Reply-To: <20220706070350.1703384-1-hch@lst.de>
References: <20220706070350.1703384-1-hch@lst.de>

Always use the bdev based helpers instead.

Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Damien Le Moal
Reviewed-by: Johannes Thumshirn
---
 block/blk-sysfs.c      |  4 ++--
 include/linux/blkdev.h | 37 ++++++++++---------------------------
 2 files changed, 12 insertions(+), 29 deletions(-)

diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 7590810cf13fc..5ce72345ac666 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -330,12 +330,12 @@ static ssize_t queue_nr_zones_show(struct request_queue *q, char *page)
 
 static ssize_t queue_max_open_zones_show(struct request_queue *q, char *page)
 {
-        return queue_var_show(queue_max_open_zones(q), page);
+        return queue_var_show(bdev_max_open_zones(q->disk->part0), page);
 }
 
 static ssize_t queue_max_active_zones_show(struct request_queue *q, char *page)
 {
-        return queue_var_show(queue_max_active_zones(q), page);
+        return queue_var_show(bdev_max_active_zones(q->disk->part0), page);
 }
 
 static ssize_t queue_nomerges_show(struct request_queue *q, char *page)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 416faa0137820..7d4105d23b0a4 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -702,21 +702,22 @@ static inline void blk_queue_max_open_zones(struct request_queue *q,
         q->max_open_zones = max_open_zones;
 }
 
-static inline unsigned int queue_max_open_zones(const struct request_queue *q)
-{
-        return q->max_open_zones;
-}
-
 static inline void blk_queue_max_active_zones(struct request_queue *q,
                 unsigned int max_active_zones)
 {
         q->max_active_zones = max_active_zones;
 }
 
-static inline unsigned int queue_max_active_zones(const struct request_queue *q)
+static inline unsigned int bdev_max_open_zones(struct block_device *bdev)
+{
+        return bdev->bd_disk->queue->max_open_zones;
+}
+
+static inline unsigned int bdev_max_active_zones(struct block_device *bdev)
 {
-        return q->max_active_zones;
+        return bdev->bd_disk->queue->max_active_zones;
 }
+
 #else /* CONFIG_BLK_DEV_ZONED */
 static inline unsigned int blk_queue_nr_zones(struct request_queue *q)
 {
@@ -732,11 +733,11 @@ static inline unsigned int blk_queue_zone_no(struct request_queue *q,
 {
         return 0;
 }
-static inline unsigned int queue_max_open_zones(const struct request_queue *q)
+static inline unsigned int bdev_max_open_zones(struct block_device *bdev)
 {
         return 0;
 }
-static inline unsigned int queue_max_active_zones(const struct request_queue *q)
+static inline unsigned int bdev_max_active_zones(struct block_device *bdev)
 {
         return 0;
 }
@@ -1314,24 +1315,6 @@ static inline sector_t bdev_zone_sectors(struct block_device *bdev)
         return 0;
 }
 
-static inline unsigned int bdev_max_open_zones(struct block_device *bdev)
-{
-        struct request_queue *q = bdev_get_queue(bdev);
-
-        if (q)
-                return queue_max_open_zones(q);
-        return 0;
-}
-
-static inline unsigned int bdev_max_active_zones(struct block_device *bdev)
-{
-        struct request_queue *q = bdev_get_queue(bdev);
-
-        if (q)
-                return queue_max_active_zones(q);
-        return 0;
-}
-
 static inline int queue_dma_alignment(const struct request_queue *q)
 {
         return q ? q->dma_alignment : 511;
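
A short usage sketch (illustrative only, not from the patch) of the bdev based
accessors that remain after the removal; example_print_zone_limits() is an
invented helper:

/* Illustration only. */
static void example_print_zone_limits(struct block_device *bdev)
{
        pr_info("%pg: max open %u, max active %u\n", bdev,
                bdev_max_open_zones(bdev), bdev_max_active_zones(bdev));
}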

From patchwork Wed Jul 6 07:03:44 2022
From: Christoph Hellwig
To: Jens Axboe, Damien Le Moal
Cc: dm-devel@redhat.com, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
    Chaitanya Kulkarni, Johannes Thumshirn
Subject: [PATCH 10/16] block: pass a gendisk to blk_queue_max_open_zones and blk_queue_max_active_zones
Date: Wed, 6 Jul 2022 09:03:44 +0200
Message-Id: <20220706070350.1703384-11-hch@lst.de>
In-Reply-To: <20220706070350.1703384-1-hch@lst.de>
References: <20220706070350.1703384-1-hch@lst.de>

Switch to a gendisk based API in preparation for moving all zone related
fields from the request_queue to the gendisk.

Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Damien Le Moal
Reviewed-by: Johannes Thumshirn
---
 drivers/block/null_blk/zoned.c | 4 ++--
 drivers/nvme/host/zns.c        | 4 ++--
 drivers/scsi/sd_zbc.c          | 6 +++---
 include/linux/blkdev.h         | 8 ++++----
 4 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
index b47bbd114058d..576ab3ed082a5 100644
--- a/drivers/block/null_blk/zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -174,8 +174,8 @@ int null_register_zoned_dev(struct nullb *nullb)
         }
 
         blk_queue_max_zone_append_sectors(q, dev->zone_size_sects);
-        blk_queue_max_open_zones(q, dev->zone_max_open);
-        blk_queue_max_active_zones(q, dev->zone_max_active);
+        disk_set_max_open_zones(nullb->disk, dev->zone_max_open);
+        disk_set_max_active_zones(nullb->disk, dev->zone_max_active);
 
         return 0;
 }
diff --git a/drivers/nvme/host/zns.c b/drivers/nvme/host/zns.c
index 0ed15c2fd56de..12316ab51bda6 100644
--- a/drivers/nvme/host/zns.c
+++ b/drivers/nvme/host/zns.c
@@ -111,8 +111,8 @@ int nvme_update_zone_info(struct nvme_ns *ns, unsigned lbaf)
 
         disk_set_zoned(ns->disk, BLK_ZONED_HM);
         blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
-        blk_queue_max_open_zones(q, le32_to_cpu(id->mor) + 1);
-        blk_queue_max_active_zones(q, le32_to_cpu(id->mar) + 1);
+        disk_set_max_open_zones(ns->disk, le32_to_cpu(id->mor) + 1);
+        disk_set_max_active_zones(ns->disk, le32_to_cpu(id->mar) + 1);
 free_data:
         kfree(id);
         return status;
diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
index 0f5823b674685..b4106f8997342 100644
--- a/drivers/scsi/sd_zbc.c
+++ b/drivers/scsi/sd_zbc.c
@@ -950,10 +950,10 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, u8 buf[SD_BUF_SIZE])
         blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
         blk_queue_required_elevator_features(q, ELEVATOR_F_ZBD_SEQ_WRITE);
         if (sdkp->zones_max_open == U32_MAX)
-                blk_queue_max_open_zones(q, 0);
+                disk_set_max_open_zones(disk, 0);
         else
-                blk_queue_max_open_zones(q, sdkp->zones_max_open);
-        blk_queue_max_active_zones(q, 0);
+                disk_set_max_open_zones(disk, sdkp->zones_max_open);
+        disk_set_max_active_zones(disk, 0);
         nr_zones = round_up(sdkp->capacity, zone_blocks) >> ilog2(zone_blocks);
 
         /*
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 7d4105d23b0a4..c05e1cc05c265 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -696,16 +696,16 @@ static inline bool blk_queue_zone_is_seq(struct request_queue *q,
         return !test_bit(blk_queue_zone_no(q, sector), q->conv_zones_bitmap);
 }
 
-static inline void blk_queue_max_open_zones(struct request_queue *q,
+static inline void disk_set_max_open_zones(struct gendisk *disk,
                 unsigned int max_open_zones)
 {
-        q->max_open_zones = max_open_zones;
+        disk->queue->max_open_zones = max_open_zones;
 }
 
-static inline void blk_queue_max_active_zones(struct request_queue *q,
+static inline void disk_set_max_active_zones(struct gendisk *disk,
                 unsigned int max_active_zones)
 {
-        q->max_active_zones = max_active_zones;
+        disk->queue->max_active_zones = max_active_zones;
 }
 
 static inline unsigned int bdev_max_open_zones(struct block_device *bdev)
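
A hypothetical driver fragment (illustration only, invented names) showing the
gendisk based setters introduced here used together:

/* Illustration only: zone limits are now configured via the gendisk. */
static void example_apply_zone_limits(struct gendisk *disk,
                                      unsigned int max_open,
                                      unsigned int max_active)
{
        disk_set_max_open_zones(disk, max_open);
        disk_set_max_active_zones(disk, max_active);
}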

From patchwork Wed Jul 6 07:03:45 2022
From: Christoph Hellwig
To: Jens Axboe, Damien Le Moal
Cc: dm-devel@redhat.com, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
    Chaitanya Kulkarni, Johannes Thumshirn
Subject: [PATCH 11/16] block: replace blkdev_nr_zones with bdev_nr_zones
Date: Wed, 6 Jul 2022 09:03:45 +0200
Message-Id: <20220706070350.1703384-12-hch@lst.de>
In-Reply-To: <20220706070350.1703384-1-hch@lst.de>
References: <20220706070350.1703384-1-hch@lst.de>

Pass a block_device instead of a request_queue as that is what most
callers have at hand.

Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Johannes Thumshirn
Reviewed-by: Damien Le Moal
Acked-by: Damien Le Moal
---
 block/blk-zoned.c              | 15 ++++++++-------
 block/ioctl.c                  |  2 +-
 drivers/block/null_blk/zoned.c |  2 +-
 drivers/md/dm-zone.c           |  2 +-
 drivers/md/dm-zoned-target.c   |  5 ++---
 drivers/nvme/target/zns.c      |  6 +++---
 fs/zonefs/super.c              | 17 ++++++++---------
 include/linux/blkdev.h         |  4 ++--
 8 files changed, 26 insertions(+), 27 deletions(-)

diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 0d431394cf90c..2dec25d8aa3bd 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -108,21 +108,22 @@ void __blk_req_zone_write_unlock(struct request *rq)
 EXPORT_SYMBOL_GPL(__blk_req_zone_write_unlock);
 
 /**
- * blkdev_nr_zones - Get number of zones
- * @disk:       Target gendisk
+ * bdev_nr_zones - Get number of zones
+ * @bdev:       Target device
  *
  * Return the total number of zones of a zoned block device.  For a block
  * device without zone capabilities, the number of zones is always 0.
  */
-unsigned int blkdev_nr_zones(struct gendisk *disk)
+unsigned int bdev_nr_zones(struct block_device *bdev)
 {
-        sector_t zone_sectors = blk_queue_zone_sectors(disk->queue);
+        sector_t zone_sectors = bdev_zone_sectors(bdev);
 
-        if (!blk_queue_is_zoned(disk->queue))
+        if (!bdev_is_zoned(bdev))
                 return 0;
-        return (get_capacity(disk) + zone_sectors - 1) >> ilog2(zone_sectors);
+        return (bdev_nr_sectors(bdev) + zone_sectors - 1) >>
+                ilog2(zone_sectors);
 }
-EXPORT_SYMBOL_GPL(blkdev_nr_zones);
+EXPORT_SYMBOL_GPL(bdev_nr_zones);
 
 /**
  * blkdev_report_zones - Get zones information
diff --git a/block/ioctl.c b/block/ioctl.c
index 46949f1b0dba5..60121e89052bc 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -495,7 +495,7 @@ static int blkdev_common_ioctl(struct block_device *bdev, fmode_t mode,
         case BLKGETZONESZ:
                 return put_uint(argp, bdev_zone_sectors(bdev));
         case BLKGETNRZONES:
-                return put_uint(argp, blkdev_nr_zones(bdev->bd_disk));
+                return put_uint(argp, bdev_nr_zones(bdev));
         case BLKROGET:
                 return put_int(argp, bdev_read_only(bdev) != 0);
         case BLKSSZGET: /* get block device logical block size */
diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
index 576ab3ed082a5..e62c52e964259 100644
--- a/drivers/block/null_blk/zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -170,7 +170,7 @@ int null_register_zoned_dev(struct nullb *nullb)
                         return ret;
         } else {
                 blk_queue_chunk_sectors(q, dev->zone_size_sects);
-                q->nr_zones = blkdev_nr_zones(nullb->disk);
+                q->nr_zones = bdev_nr_zones(nullb->disk->part0);
         }
 
         blk_queue_max_zone_append_sectors(q, dev->zone_size_sects);
diff --git a/drivers/md/dm-zone.c b/drivers/md/dm-zone.c
index ae616b87c91ae..6d105abe12415 100644
--- a/drivers/md/dm-zone.c
+++ b/drivers/md/dm-zone.c
@@ -301,7 +301,7 @@ int dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q)
          * correct value to be exposed in sysfs queue/nr_zones.
*/ WARN_ON_ONCE(queue_is_mq(q)); - q->nr_zones = blkdev_nr_zones(md->disk); + q->nr_zones = bdev_nr_zones(md->disk->part0); /* Check if zone append is natively supported */ if (dm_table_supports_zone_append(t)) { diff --git a/drivers/md/dm-zoned-target.c b/drivers/md/dm-zoned-target.c index 0ec5d8b9b1a4e..6ba6ef44b00e2 100644 --- a/drivers/md/dm-zoned-target.c +++ b/drivers/md/dm-zoned-target.c @@ -793,8 +793,7 @@ static int dmz_fixup_devices(struct dm_target *ti) } zone_nr_sectors = blk_queue_zone_sectors(q); zoned_dev->zone_nr_sectors = zone_nr_sectors; - zoned_dev->nr_zones = - blkdev_nr_zones(zoned_dev->bdev->bd_disk); + zoned_dev->nr_zones = bdev_nr_zones(zoned_dev->bdev); } } else { reg_dev = NULL; @@ -805,7 +804,7 @@ static int dmz_fixup_devices(struct dm_target *ti) } q = bdev_get_queue(zoned_dev->bdev); zoned_dev->zone_nr_sectors = blk_queue_zone_sectors(q); - zoned_dev->nr_zones = blkdev_nr_zones(zoned_dev->bdev->bd_disk); + zoned_dev->nr_zones = bdev_nr_zones(zoned_dev->bdev); } if (reg_dev) { diff --git a/drivers/nvme/target/zns.c b/drivers/nvme/target/zns.c index 82b61acf7a72b..c4c99b832daf2 100644 --- a/drivers/nvme/target/zns.c +++ b/drivers/nvme/target/zns.c @@ -60,7 +60,7 @@ bool nvmet_bdev_zns_enable(struct nvmet_ns *ns) if (ns->bdev->bd_disk->queue->conv_zones_bitmap) return false; - ret = blkdev_report_zones(ns->bdev, 0, blkdev_nr_zones(bd_disk), + ret = blkdev_report_zones(ns->bdev, 0, bdev_nr_zones(ns->bdev), validate_conv_zones_cb, NULL); if (ret < 0) return false; @@ -241,7 +241,7 @@ static unsigned long nvmet_req_nr_zones_from_slba(struct nvmet_req *req) { unsigned int sect = nvmet_lba_to_sect(req->ns, req->cmd->zmr.slba); - return blkdev_nr_zones(req->ns->bdev->bd_disk) - + return bdev_nr_zones(req->ns->bdev) - (sect >> ilog2(bdev_zone_sectors(req->ns->bdev))); } @@ -386,7 +386,7 @@ static int zmgmt_send_scan_cb(struct blk_zone *z, unsigned i, void *d) static u16 nvmet_bdev_zone_mgmt_emulate_all(struct nvmet_req *req) { struct block_device *bdev = req->ns->bdev; - unsigned int nr_zones = blkdev_nr_zones(bdev->bd_disk); + unsigned int nr_zones = bdev_nr_zones(bdev); struct request_queue *q = bdev_get_queue(bdev); struct bio *bio = NULL; sector_t sector = 0; diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c index 053299758deb9..9c0eef1ff32a0 100644 --- a/fs/zonefs/super.c +++ b/fs/zonefs/super.c @@ -1394,7 +1394,7 @@ static void zonefs_init_dir_inode(struct inode *parent, struct inode *inode, { struct super_block *sb = parent->i_sb; - inode->i_ino = blkdev_nr_zones(sb->s_bdev->bd_disk) + type + 1; + inode->i_ino = bdev_nr_zones(sb->s_bdev) + type + 1; inode_init_owner(&init_user_ns, inode, parent, S_IFDIR | 0555); inode->i_op = &zonefs_dir_inode_operations; inode->i_fop = &simple_dir_operations; @@ -1540,7 +1540,7 @@ static int zonefs_create_zgroup(struct zonefs_zone_data *zd, /* * The first zone contains the super block: skip it. 
*/ - end = zd->zones + blkdev_nr_zones(sb->s_bdev->bd_disk); + end = zd->zones + bdev_nr_zones(sb->s_bdev); for (zone = &zd->zones[1]; zone < end; zone = next) { next = zone + 1; @@ -1635,8 +1635,8 @@ static int zonefs_get_zone_info(struct zonefs_zone_data *zd) struct block_device *bdev = zd->sb->s_bdev; int ret; - zd->zones = kvcalloc(blkdev_nr_zones(bdev->bd_disk), - sizeof(struct blk_zone), GFP_KERNEL); + zd->zones = kvcalloc(bdev_nr_zones(bdev), sizeof(struct blk_zone), + GFP_KERNEL); if (!zd->zones) return -ENOMEM; @@ -1648,9 +1648,9 @@ static int zonefs_get_zone_info(struct zonefs_zone_data *zd) return ret; } - if (ret != blkdev_nr_zones(bdev->bd_disk)) { + if (ret != bdev_nr_zones(bdev)) { zonefs_err(zd->sb, "Invalid zone report (%d/%u zones)\n", - ret, blkdev_nr_zones(bdev->bd_disk)); + ret, bdev_nr_zones(bdev)); return -EIO; } @@ -1816,8 +1816,7 @@ static int zonefs_fill_super(struct super_block *sb, void *data, int silent) if (ret) goto cleanup; - zonefs_info(sb, "Mounting %u zones", - blkdev_nr_zones(sb->s_bdev->bd_disk)); + zonefs_info(sb, "Mounting %u zones", bdev_nr_zones(sb->s_bdev)); if (!sbi->s_max_wro_seq_files && !sbi->s_max_active_seq_files && @@ -1833,7 +1832,7 @@ static int zonefs_fill_super(struct super_block *sb, void *data, int silent) if (!inode) goto cleanup; - inode->i_ino = blkdev_nr_zones(sb->s_bdev->bd_disk); + inode->i_ino = bdev_nr_zones(sb->s_bdev); inode->i_mode = S_IFDIR | 0555; inode->i_ctime = inode->i_mtime = inode->i_atime = current_time(inode); inode->i_op = &zonefs_dir_inode_operations; diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index c05e1cc05c265..fa2757ef4a846 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -298,7 +298,7 @@ void disk_set_zoned(struct gendisk *disk, enum blk_zoned_model model); #define BLK_ALL_ZONES ((unsigned int)-1) int blkdev_report_zones(struct block_device *bdev, sector_t sector, unsigned int nr_zones, report_zones_cb cb, void *data); -unsigned int blkdev_nr_zones(struct gendisk *disk); +unsigned int bdev_nr_zones(struct block_device *bdev); extern int blkdev_zone_mgmt(struct block_device *bdev, enum req_opf op, sector_t sectors, sector_t nr_sectors, gfp_t gfp_mask); @@ -312,7 +312,7 @@ extern int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode, #else /* CONFIG_BLK_DEV_ZONED */ -static inline unsigned int blkdev_nr_zones(struct gendisk *disk) +static inline unsigned int bdev_nr_zones(struct block_device *bdev) { return 0; } From patchwork Wed Jul 6 07:03:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 12907428 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4486FCCA482 for ; Wed, 6 Jul 2022 07:04:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231484AbiGFHEn (ORCPT ); Wed, 6 Jul 2022 03:04:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58710 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231491AbiGFHEa (ORCPT ); Wed, 6 Jul 2022 03:04:30 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B0222222B8; Wed, 6 Jul 2022 00:04:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; 
c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=tgyqP1bIhsvdZx95BMoRlFVB6nrmkRZfG/R3FTELfmE=; b=5A2bkAoXUE4Y78693ckSsVBIX9 dssQpfBTrXb6Hxp7QHEJBiHqhtYnOkwx83MMcDRHAeMfOkHzNnaTrJTyWdupDgDwBd/1IIH3jNVXt XPau1WYJrulA57hPzDVAtujGM3TIlXmFB6dEi6C9QN7zszMxcyCO1wm9PpyLyVSCRUXOs+kwmBzoE w8h4u8z14aW/23GEh7+khlhTWicW9R9I36JMXpsN4jj+bbVyO7LoHqpJ2u8M2UQzY++FxJa/R/NZs Rd9fTq+/Ihw7Q9KOL5G74mMUmaPja/4h+v6WDoZzrywaW6H0OksVreX/MJAVVMG2VqS+M6I2m9pPl dJrR93Fw==; Received: from [2001:4bb8:189:3c4a:f22c:c36a:4e84:c723] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1o8z5E-006vBt-Gt; Wed, 06 Jul 2022 07:04:24 +0000 From: Christoph Hellwig To: Jens Axboe , Damien Le Moal Cc: dm-devel@redhat.com, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org, Chaitanya Kulkarni , Johannes Thumshirn Subject: [PATCH 12/16] block: use bdev based helpers in blkdev_zone_mgmt{,all} Date: Wed, 6 Jul 2022 09:03:46 +0200 Message-Id: <20220706070350.1703384-13-hch@lst.de> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220706070350.1703384-1-hch@lst.de> References: <20220706070350.1703384-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Use the bdev based helpers instead of the queue based ones to clean up the code a bit and prepare for storing all zone related fields in struct gendisk. Signed-off-by: Christoph Hellwig Reviewed-by: Chaitanya Kulkarni Reviewed-by: Damien Le Moal Reviewed-by: Johannes Thumshirn --- block/blk-zoned.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/block/blk-zoned.c b/block/blk-zoned.c index 2dec25d8aa3bd..c2d8a38f449aa 100644 --- a/block/blk-zoned.c +++ b/block/blk-zoned.c @@ -190,8 +190,8 @@ static int blkdev_zone_reset_all_emulated(struct block_device *bdev, gfp_t gfp_mask) { struct request_queue *q = bdev_get_queue(bdev); - sector_t capacity = get_capacity(bdev->bd_disk); - sector_t zone_sectors = blk_queue_zone_sectors(q); + sector_t capacity = bdev_nr_sectors(bdev); + sector_t zone_sectors = bdev_zone_sectors(bdev); unsigned long *need_reset; struct bio *bio = NULL; sector_t sector = 0; @@ -262,8 +262,8 @@ int blkdev_zone_mgmt(struct block_device *bdev, enum req_opf op, gfp_t gfp_mask) { struct request_queue *q = bdev_get_queue(bdev); - sector_t zone_sectors = blk_queue_zone_sectors(q); - sector_t capacity = get_capacity(bdev->bd_disk); + sector_t zone_sectors = bdev_zone_sectors(bdev); + sector_t capacity = bdev_nr_sectors(bdev); sector_t end_sector = sector + nr_sectors; struct bio *bio = NULL; int ret = 0; From patchwork Wed Jul 6 07:03:47 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 12907430 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6F0E4C43334 for ; Wed, 6 Jul 2022 07:04:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231520AbiGFHEx (ORCPT ); Wed, 6 Jul 2022 03:04:53 -0400 Received: from lindbergh.monkeyblade.net 
([23.128.96.19]:58590 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231518AbiGFHEh (ORCPT ); Wed, 6 Jul 2022 03:04:37 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 49E956312; Wed, 6 Jul 2022 00:04:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=YBz/LhwBZw9yI8kZ+GIiGhL/9U8OwJhoN7e643gX1Pk=; b=OrWKhJ6iJkZoV/tAGt7bYxwSdQ nMVrHGB4lhjk7qf+22T39kBOgI6dTI9aZbfMpZfs9nmZ12o1h4JNUSOsmH3uSXYgTCt7XTSIGmstZ /CAaZ+ZSbbC3K12MmV7ipAwfvGNzhG4K8bc2YuG91q3md1xczFEMcyX7KSHCDg2mwaeQtrIX+OZ29 DiAguk5VzOqg5FT1pMYdJlwEj+mj7aRQbFXzWLwNaJaWE92KNWq6y/+6c/9EDTFubZkzsIb+aktoz 54R27FLAlwzC/MF8g9FszmTEVlLRfiZVc0bCc0RRCLqP4A7Nkdps2OHJPdDmblGC+gfVwEvL7YdxT NDAhwOiA==; Received: from [2001:4bb8:189:3c4a:f22c:c36a:4e84:c723] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1o8z5H-006vEM-3p; Wed, 06 Jul 2022 07:04:27 +0000 From: Christoph Hellwig To: Jens Axboe , Damien Le Moal Cc: dm-devel@redhat.com, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org Subject: [PATCH 13/16] nvmet: use bdev based helpers in nvmet_bdev_zone_mgmt_emulate_all Date: Wed, 6 Jul 2022 09:03:47 +0200 Message-Id: <20220706070350.1703384-14-hch@lst.de> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220706070350.1703384-1-hch@lst.de> References: <20220706070350.1703384-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Use the bdev based helpers instead of the queue based ones to clean up the code a bit and prepare for storing all zone related fields in struct gendisk.
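For illustration only (this sketch is not part of the patch): the conversion pattern used here, and throughout the series, amounts to walking the device with the bdev based helpers instead of first looking up the request_queue. The function name below is made up for the example; bdev_is_zoned(), bdev_nr_sectors(), bdev_zone_sectors() and bdev_nr_zones() are the real helpers.

/*
 * Illustrative sketch, not part of this patch: count the zones of a zoned
 * block device using only the bdev based helpers, without ever calling
 * bdev_get_queue(). zone_walk_example() is a made-up name.
 */
static unsigned int zone_walk_example(struct block_device *bdev)
{
	sector_t sector = 0;
	unsigned int zones = 0;

	if (!bdev_is_zoned(bdev))
		return 0;

	/* older code would have used blk_queue_zone_sectors(bdev_get_queue(bdev)) */
	while (sector < bdev_nr_sectors(bdev)) {
		sector += bdev_zone_sectors(bdev);
		zones++;
	}

	/* expected to match what bdev_nr_zones(bdev) reports */
	return zones;
}

The point of the series is that all of these helpers take the block_device (or, after the final patch, the gendisk) directly, so callers like nvmet no longer need a request_queue at all.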
Signed-off-by: Christoph Hellwig Reviewed-by: Damien Le Moal --- drivers/nvme/target/zns.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/nvme/target/zns.c b/drivers/nvme/target/zns.c index c4c99b832daf2..9d8717126ab31 100644 --- a/drivers/nvme/target/zns.c +++ b/drivers/nvme/target/zns.c @@ -413,7 +413,7 @@ static u16 nvmet_bdev_zone_mgmt_emulate_all(struct nvmet_req *req) ret = 0; } - while (sector < get_capacity(bdev->bd_disk)) { + while (sector < bdev_nr_sectors(bdev)) { if (test_bit(blk_queue_zone_no(q, sector), d.zbitmap)) { bio = blk_next_bio(bio, bdev, 0, zsa_req_op(req->cmd->zms.zsa) | REQ_SYNC, @@ -422,7 +422,7 @@ static u16 nvmet_bdev_zone_mgmt_emulate_all(struct nvmet_req *req) /* This may take a while, so be nice to others */ cond_resched(); } - sector += blk_queue_zone_sectors(q); + sector += bdev_zone_sectors(bdev); } if (bio) { From patchwork Wed Jul 6 07:03:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 12907431 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3C9A7C43334 for ; Wed, 6 Jul 2022 07:04:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231435AbiGFHEz (ORCPT ); Wed, 6 Jul 2022 03:04:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58306 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231368AbiGFHEh (ORCPT ); Wed, 6 Jul 2022 03:04:37 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D829322506; Wed, 6 Jul 2022 00:04:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=lbcR7xScrQBbM/M7Onk92pBquOkxA9lxlNUV20FvRgo=; b=TkC8kn0TnmRgriDmUmm87JtRKu jo0TPz/fKql0iR62iMRDCBAkVq0aL2Vn0pBB0rtgOXNQON1DnyJPTHpJUHbTYDpgPuHGxllUzUk5j AXc+dO1mZfBg1B50kVR6SDtncue9th2Yz/VYs4pRPfLJ5hRP35pweltcPOlzVq4H2WZHHg2NzQjSk pa9KNFrOWZ6W2c1lNbPFwdD+fOTLlVuvGSUzSA6OzjZrXQiLZvRltnnmMmQ4QeA36lN/O+J0Q9L0+ FnF1LYKkUX9l9aXl1kh7qTIPCva3NFt4Rf7uDQEdcOGupVpirRQUsa8NnpBjz8/YDx4psCOFC6j6W ovXtOp3Q==; Received: from [2001:4bb8:189:3c4a:f22c:c36a:4e84:c723] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1o8z5J-006vGt-S3; Wed, 06 Jul 2022 07:04:30 +0000 From: Christoph Hellwig To: Jens Axboe , Damien Le Moal Cc: dm-devel@redhat.com, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org, Chaitanya Kulkarni , Johannes Thumshirn Subject: [PATCH 14/16] dm-zoned: cleanup dmz_fixup_devices Date: Wed, 6 Jul 2022 09:03:48 +0200 Message-Id: <20220706070350.1703384-15-hch@lst.de> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220706070350.1703384-1-hch@lst.de> References: <20220706070350.1703384-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. 
See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Use the bdev based helpers where applicable and move the zoned_dev into the scope where it is actually used. Signed-off-by: Christoph Hellwig Reviewed-by: Chaitanya Kulkarni Reviewed-by: Damien Le Moal Reviewed-by: Johannes Thumshirn --- drivers/md/dm-zoned-target.c | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/drivers/md/dm-zoned-target.c b/drivers/md/dm-zoned-target.c index 6ba6ef44b00e2..95b132b52f332 100644 --- a/drivers/md/dm-zoned-target.c +++ b/drivers/md/dm-zoned-target.c @@ -764,8 +764,7 @@ static void dmz_put_zoned_device(struct dm_target *ti) static int dmz_fixup_devices(struct dm_target *ti) { struct dmz_target *dmz = ti->private; - struct dmz_dev *reg_dev, *zoned_dev; - struct request_queue *q; + struct dmz_dev *reg_dev = NULL; sector_t zone_nr_sectors = 0; int i; @@ -780,31 +779,32 @@ static int dmz_fixup_devices(struct dm_target *ti) return -EINVAL; } for (i = 1; i < dmz->nr_ddevs; i++) { - zoned_dev = &dmz->dev[i]; + struct dmz_dev *zoned_dev = &dmz->dev[i]; + struct block_device *bdev = zoned_dev->bdev; + if (zoned_dev->flags & DMZ_BDEV_REGULAR) { ti->error = "Secondary disk is not a zoned device"; return -EINVAL; } - q = bdev_get_queue(zoned_dev->bdev); if (zone_nr_sectors && - zone_nr_sectors != blk_queue_zone_sectors(q)) { + zone_nr_sectors != bdev_zone_sectors(bdev)) { ti->error = "Zone nr sectors mismatch"; return -EINVAL; } - zone_nr_sectors = blk_queue_zone_sectors(q); + zone_nr_sectors = bdev_zone_sectors(bdev); zoned_dev->zone_nr_sectors = zone_nr_sectors; - zoned_dev->nr_zones = bdev_nr_zones(zoned_dev->bdev); + zoned_dev->nr_zones = bdev_nr_zones(bdev); } } else { - reg_dev = NULL; - zoned_dev = &dmz->dev[0]; + struct dmz_dev *zoned_dev = &dmz->dev[0]; + struct block_device *bdev = zoned_dev->bdev; + if (zoned_dev->flags & DMZ_BDEV_REGULAR) { ti->error = "Disk is not a zoned device"; return -EINVAL; } - q = bdev_get_queue(zoned_dev->bdev); - zoned_dev->zone_nr_sectors = blk_queue_zone_sectors(q); - zoned_dev->nr_zones = bdev_nr_zones(zoned_dev->bdev); + zoned_dev->zone_nr_sectors = bdev_zone_sectors(bdev); + zoned_dev->nr_zones = bdev_nr_zones(bdev); } if (reg_dev) { From patchwork Wed Jul 6 07:03:49 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 12907432 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B565DC43334 for ; Wed, 6 Jul 2022 07:05:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231559AbiGFHFD (ORCPT ); Wed, 6 Jul 2022 03:05:03 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58646 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231444AbiGFHEj (ORCPT ); Wed, 6 Jul 2022 03:04:39 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 066D5222BD; Wed, 6 Jul 2022 00:04:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender 
:Reply-To:Content-Type:Content-ID:Content-Description; bh=4VSfcLnPjPfU7j//JLAYFJ37MiZvdmoWhgvI49ISqsc=; b=c4emuxCCMYrX/9sHS92r5OfQX3 YEJjRFsVWLDs9i2qpUHlRr6yNZiLgtWjeczCsuYU/sLfGFvoYQo3jQWpUJj8ffobF9KctwyhwY0LB +s9VvLxsO7g7i5cDZK0Z+LFmCjvm2cYRcN1+LIN1Kx9jUCGrYVYf7XbyR2NxnTjqsWxirHB3zz78G Wya3s/j/zaD05sIRZBHO0v1F/32TbietbNJyKHqyP51niqsHLzN4amKbv9hsVMZg5UQoD5cmQsiQ8 MqhG3JsppU5WyFfKCOIElVsoI3Zs06cS2156XP5ibj7Ye8Uz2tVc1WfF1vRDkq12ySmyOLhwb0MYX RpS+bUiQ==; Received: from [2001:4bb8:189:3c4a:f22c:c36a:4e84:c723] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1o8z5M-006vJc-JN; Wed, 06 Jul 2022 07:04:33 +0000 From: Christoph Hellwig To: Jens Axboe , Damien Le Moal Cc: dm-devel@redhat.com, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org, Chaitanya Kulkarni , Johannes Thumshirn Subject: [PATCH 15/16] block: remove blk_queue_zone_sectors Date: Wed, 6 Jul 2022 09:03:49 +0200 Message-Id: <20220706070350.1703384-16-hch@lst.de> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220706070350.1703384-1-hch@lst.de> References: <20220706070350.1703384-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Always use bdev_zone_sectors instead. Signed-off-by: Christoph Hellwig Reviewed-by: Chaitanya Kulkarni Reviewed-by: Damien Le Moal Reviewed-by: Johannes Thumshirn --- drivers/md/dm-table.c | 4 +--- drivers/md/dm-zone.c | 10 ++++++---- include/linux/blkdev.h | 11 +++-------- 3 files changed, 10 insertions(+), 15 deletions(-) diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c index b36b528e56cff..df904b7e95ce3 100644 --- a/drivers/md/dm-table.c +++ b/drivers/md/dm-table.c @@ -1620,13 +1620,11 @@ static bool dm_table_supports_zoned_model(struct dm_table *t, static int device_not_matches_zone_sectors(struct dm_target *ti, struct dm_dev *dev, sector_t start, sector_t len, void *data) { - struct request_queue *q = bdev_get_queue(dev->bdev); unsigned int *zone_sectors = data; if (!bdev_is_zoned(dev->bdev)) return 0; - - return blk_queue_zone_sectors(q) != *zone_sectors; + return bdev_zone_sectors(dev->bdev) != *zone_sectors; } /* diff --git a/drivers/md/dm-zone.c b/drivers/md/dm-zone.c index 6d105abe12415..842c31019b513 100644 --- a/drivers/md/dm-zone.c +++ b/drivers/md/dm-zone.c @@ -334,7 +334,7 @@ static int dm_update_zone_wp_offset_cb(struct blk_zone *zone, unsigned int idx, static int dm_update_zone_wp_offset(struct mapped_device *md, unsigned int zno, unsigned int *wp_ofst) { - sector_t sector = zno * blk_queue_zone_sectors(md->queue); + sector_t sector = zno * bdev_zone_sectors(md->disk->part0); unsigned int noio_flag; struct dm_table *t; int srcu_idx, ret; @@ -373,7 +373,7 @@ struct orig_bio_details { static bool dm_zone_map_bio_begin(struct mapped_device *md, unsigned int zno, struct bio *clone) { - sector_t zsectors = blk_queue_zone_sectors(md->queue); + sector_t zsectors = bdev_zone_sectors(md->disk->part0); unsigned int zwp_offset = READ_ONCE(md->zwp_offset[zno]); /* @@ -443,7 +443,7 @@ static blk_status_t dm_zone_map_bio_end(struct mapped_device *md, unsigned int z return BLK_STS_OK; case REQ_OP_ZONE_FINISH: WRITE_ONCE(md->zwp_offset[zno], - blk_queue_zone_sectors(md->queue)); + bdev_zone_sectors(md->disk->part0)); return BLK_STS_OK; case REQ_OP_WRITE_ZEROES: case REQ_OP_WRITE: @@ -593,6 +593,7 @@ void dm_zone_endio(struct dm_io *io, struct bio *clone) { 
struct mapped_device *md = io->md; struct request_queue *q = md->queue; + struct gendisk *disk = md->disk; struct bio *orig_bio = io->orig_bio; unsigned int zwp_offset; unsigned int zno; @@ -608,7 +609,8 @@ void dm_zone_endio(struct dm_io *io, struct bio *clone) */ if (clone->bi_status == BLK_STS_OK && bio_op(clone) == REQ_OP_ZONE_APPEND) { - sector_t mask = (sector_t)blk_queue_zone_sectors(q) - 1; + sector_t mask = + (sector_t)bdev_zone_sectors(disk->part0) - 1; orig_bio->bi_iter.bi_sector += clone->bi_iter.bi_sector & mask; diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index fa2757ef4a846..21b97f7115dcb 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -667,11 +667,6 @@ static inline bool blk_queue_is_zoned(struct request_queue *q) } } -static inline sector_t blk_queue_zone_sectors(struct request_queue *q) -{ - return blk_queue_is_zoned(q) ? q->limits.chunk_sectors : 0; -} - #ifdef CONFIG_BLK_DEV_ZONED static inline unsigned int blk_queue_nr_zones(struct request_queue *q) { @@ -1310,9 +1305,9 @@ static inline sector_t bdev_zone_sectors(struct block_device *bdev) { struct request_queue *q = bdev_get_queue(bdev); - if (q) - return blk_queue_zone_sectors(q); - return 0; + if (!blk_queue_is_zoned(q)) + return 0; + return q->limits.chunk_sectors; } static inline int queue_dma_alignment(const struct request_queue *q) From patchwork Wed Jul 6 07:03:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 12907433 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 853CECCA473 for ; Wed, 6 Jul 2022 07:05:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231319AbiGFHFD (ORCPT ); Wed, 6 Jul 2022 03:05:03 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58390 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231561AbiGFHEj (ORCPT ); Wed, 6 Jul 2022 03:04:39 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6476921E00; Wed, 6 Jul 2022 00:04:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=OAJN63VQ9xFVh9aNau1xYB2Sv2sdy5S7db7CDsPQmuE=; b=a98sBRoHWHtgJdLiULMxpsqbmH ijknG6VWpS4fIG6XdJtcneWJNPpmFsGY5oQCQckxnxSNk9F5oYNCgQDRAC0C8RqFGoTN7lT/kv1rm o9G28z/yXPPwTqBT1oAaxWXG2QLFinUacb3GvfsS7kDQFgblKehDb8aa7zY8+nLVsTsIeFNfEvkx0 7zmo2l4XxBaCD+hp3bmbpkalmjf3Kw2d9mha3VY9GtmWf1vNNY30MWOhnygb72eoEHBUpB2sueTcF o63bRYxQDfB8gqyHwu4GNzQYpvenWATtbP31Ik1r67+fxYq41Np0+Kbo576vJGaPI1V7p2U3PEEFl ob7lepXg==; Received: from [2001:4bb8:189:3c4a:f22c:c36a:4e84:c723] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1o8z5P-006vM7-65; Wed, 06 Jul 2022 07:04:35 +0000 From: Christoph Hellwig To: Jens Axboe , Damien Le Moal Cc: dm-devel@redhat.com, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org, Chaitanya Kulkarni , Johannes Thumshirn Subject: [PATCH 16/16] block: move zone related fields to struct gendisk Date: Wed, 6 Jul 2022 
09:03:50 +0200 Message-Id: <20220706070350.1703384-17-hch@lst.de> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220706070350.1703384-1-hch@lst.de> References: <20220706070350.1703384-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Move the zone related fields that are currently stored in struct request_queue to struct gendisk as these are part of the highlevel block layer API and are only used for non-passthrough I/O that requires the gendisk. Signed-off-by: Christoph Hellwig Reviewed-by: Chaitanya Kulkarni Reviewed-by: Damien Le Moal Reviewed-by: Johannes Thumshirn --- block/blk-mq-debugfs-zoned.c | 6 +-- block/blk-sysfs.c | 2 +- block/blk-zoned.c | 45 ++++++++--------- drivers/block/null_blk/zoned.c | 2 +- drivers/md/dm-zone.c | 74 +++++++++++++-------------- drivers/nvme/host/multipath.c | 2 +- drivers/nvme/target/zns.c | 4 +- drivers/scsi/sd_zbc.c | 2 +- include/linux/blk-mq.h | 8 +-- include/linux/blkdev.h | 91 ++++++++++++++++------------------ 10 files changed, 111 insertions(+), 125 deletions(-) diff --git a/block/blk-mq-debugfs-zoned.c b/block/blk-mq-debugfs-zoned.c index 038cb627c8689..a77b099c34b7a 100644 --- a/block/blk-mq-debugfs-zoned.c +++ b/block/blk-mq-debugfs-zoned.c @@ -11,11 +11,11 @@ int queue_zone_wlock_show(void *data, struct seq_file *m) struct request_queue *q = data; unsigned int i; - if (!q->seq_zones_wlock) + if (!q->disk->seq_zones_wlock) return 0; - for (i = 0; i < q->nr_zones; i++) - if (test_bit(i, q->seq_zones_wlock)) + for (i = 0; i < q->disk->nr_zones; i++) + if (test_bit(i, q->disk->seq_zones_wlock)) seq_printf(m, "%u\n", i); return 0; diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c index 5ce72345ac666..c0303026752d5 100644 --- a/block/blk-sysfs.c +++ b/block/blk-sysfs.c @@ -325,7 +325,7 @@ static ssize_t queue_zoned_show(struct request_queue *q, char *page) static ssize_t queue_nr_zones_show(struct request_queue *q, char *page) { - return queue_var_show(blk_queue_nr_zones(q), page); + return queue_var_show(disk_nr_zones(q->disk), page); } static ssize_t queue_max_open_zones_show(struct request_queue *q, char *page) diff --git a/block/blk-zoned.c b/block/blk-zoned.c index c2d8a38f449aa..7c017458d5ce5 100644 --- a/block/blk-zoned.c +++ b/block/blk-zoned.c @@ -57,10 +57,10 @@ EXPORT_SYMBOL_GPL(blk_zone_cond_str); */ bool blk_req_needs_zone_write_lock(struct request *rq) { - if (!rq->q->seq_zones_wlock) + if (blk_rq_is_passthrough(rq)) return false; - if (blk_rq_is_passthrough(rq)) + if (!rq->q->disk->seq_zones_wlock) return false; switch (req_op(rq)) { @@ -77,7 +77,7 @@ bool blk_req_zone_write_trylock(struct request *rq) { unsigned int zno = blk_rq_zone_no(rq); - if (test_and_set_bit(zno, rq->q->seq_zones_wlock)) + if (test_and_set_bit(zno, rq->q->disk->seq_zones_wlock)) return false; WARN_ON_ONCE(rq->rq_flags & RQF_ZONE_WRITE_LOCKED); @@ -90,7 +90,7 @@ EXPORT_SYMBOL_GPL(blk_req_zone_write_trylock); void __blk_req_zone_write_lock(struct request *rq) { if (WARN_ON_ONCE(test_and_set_bit(blk_rq_zone_no(rq), - rq->q->seq_zones_wlock))) + rq->q->disk->seq_zones_wlock))) return; WARN_ON_ONCE(rq->rq_flags & RQF_ZONE_WRITE_LOCKED); @@ -101,9 +101,9 @@ EXPORT_SYMBOL_GPL(__blk_req_zone_write_lock); void __blk_req_zone_write_unlock(struct request *rq) { rq->rq_flags &= ~RQF_ZONE_WRITE_LOCKED; - if (rq->q->seq_zones_wlock) + if (rq->q->disk->seq_zones_wlock) 
WARN_ON_ONCE(!test_and_clear_bit(blk_rq_zone_no(rq), - rq->q->seq_zones_wlock)); + rq->q->disk->seq_zones_wlock)); } EXPORT_SYMBOL_GPL(__blk_req_zone_write_unlock); @@ -189,7 +189,7 @@ static int blk_zone_need_reset_cb(struct blk_zone *zone, unsigned int idx, static int blkdev_zone_reset_all_emulated(struct block_device *bdev, gfp_t gfp_mask) { - struct request_queue *q = bdev_get_queue(bdev); + struct gendisk *disk = bdev->bd_disk; sector_t capacity = bdev_nr_sectors(bdev); sector_t zone_sectors = bdev_zone_sectors(bdev); unsigned long *need_reset; @@ -197,19 +197,18 @@ static int blkdev_zone_reset_all_emulated(struct block_device *bdev, sector_t sector = 0; int ret; - need_reset = blk_alloc_zone_bitmap(q->node, q->nr_zones); + need_reset = blk_alloc_zone_bitmap(disk->queue->node, disk->nr_zones); if (!need_reset) return -ENOMEM; - ret = bdev->bd_disk->fops->report_zones(bdev->bd_disk, 0, - q->nr_zones, blk_zone_need_reset_cb, - need_reset); + ret = disk->fops->report_zones(disk, 0, disk->nr_zones, + blk_zone_need_reset_cb, need_reset); if (ret < 0) goto out_free_need_reset; ret = 0; while (sector < capacity) { - if (!test_bit(blk_queue_zone_no(q, sector), need_reset)) { + if (!test_bit(disk_zone_no(disk, sector), need_reset)) { sector += zone_sectors; continue; } @@ -452,12 +451,10 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode, void disk_free_zone_bitmaps(struct gendisk *disk) { - struct request_queue *q = disk->queue; - - kfree(q->conv_zones_bitmap); - q->conv_zones_bitmap = NULL; - kfree(q->seq_zones_wlock); - q->seq_zones_wlock = NULL; + kfree(disk->conv_zones_bitmap); + disk->conv_zones_bitmap = NULL; + kfree(disk->seq_zones_wlock); + disk->seq_zones_wlock = NULL; } struct blk_revalidate_zone_args { @@ -607,9 +604,9 @@ int blk_revalidate_disk_zones(struct gendisk *disk, blk_mq_freeze_queue(q); if (ret > 0) { blk_queue_chunk_sectors(q, args.zone_sectors); - q->nr_zones = args.nr_zones; - swap(q->seq_zones_wlock, args.seq_zones_wlock); - swap(q->conv_zones_bitmap, args.conv_zones_bitmap); + disk->nr_zones = args.nr_zones; + swap(disk->seq_zones_wlock, args.seq_zones_wlock); + swap(disk->conv_zones_bitmap, args.conv_zones_bitmap); if (update_driver_data) update_driver_data(disk); ret = 0; @@ -634,9 +631,9 @@ void disk_clear_zone_settings(struct gendisk *disk) disk_free_zone_bitmaps(disk); blk_queue_flag_clear(QUEUE_FLAG_ZONE_RESETALL, q); q->required_elevator_features &= ~ELEVATOR_F_ZBD_SEQ_WRITE; - q->nr_zones = 0; - q->max_open_zones = 0; - q->max_active_zones = 0; + disk->nr_zones = 0; + disk->max_open_zones = 0; + disk->max_active_zones = 0; q->limits.chunk_sectors = 0; q->limits.zone_write_granularity = 0; q->limits.max_zone_append_sectors = 0; diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c index e62c52e964259..64b06caab9843 100644 --- a/drivers/block/null_blk/zoned.c +++ b/drivers/block/null_blk/zoned.c @@ -170,7 +170,7 @@ int null_register_zoned_dev(struct nullb *nullb) return ret; } else { blk_queue_chunk_sectors(q, dev->zone_size_sects); - q->nr_zones = bdev_nr_zones(nullb->disk->part0); + nullb->disk->nr_zones = bdev_nr_zones(nullb->disk->part0); } blk_queue_max_zone_append_sectors(q, dev->zone_size_sects); diff --git a/drivers/md/dm-zone.c b/drivers/md/dm-zone.c index 842c31019b513..2b89cde30c9e9 100644 --- a/drivers/md/dm-zone.c +++ b/drivers/md/dm-zone.c @@ -139,13 +139,11 @@ bool dm_is_zone_write(struct mapped_device *md, struct bio *bio) void dm_cleanup_zoned_dev(struct mapped_device *md) { - struct request_queue *q = 
md->queue; - - if (q) { - kfree(q->conv_zones_bitmap); - q->conv_zones_bitmap = NULL; - kfree(q->seq_zones_wlock); - q->seq_zones_wlock = NULL; + if (md->disk) { + kfree(md->disk->conv_zones_bitmap); + md->disk->conv_zones_bitmap = NULL; + kfree(md->disk->seq_zones_wlock); + md->disk->seq_zones_wlock = NULL; } kvfree(md->zwp_offset); @@ -179,31 +177,31 @@ static int dm_zone_revalidate_cb(struct blk_zone *zone, unsigned int idx, void *data) { struct mapped_device *md = data; - struct request_queue *q = md->queue; + struct gendisk *disk = md->disk; switch (zone->type) { case BLK_ZONE_TYPE_CONVENTIONAL: - if (!q->conv_zones_bitmap) { - q->conv_zones_bitmap = - kcalloc(BITS_TO_LONGS(q->nr_zones), + if (!disk->conv_zones_bitmap) { + disk->conv_zones_bitmap = + kcalloc(BITS_TO_LONGS(disk->nr_zones), sizeof(unsigned long), GFP_NOIO); - if (!q->conv_zones_bitmap) + if (!disk->conv_zones_bitmap) return -ENOMEM; } - set_bit(idx, q->conv_zones_bitmap); + set_bit(idx, disk->conv_zones_bitmap); break; case BLK_ZONE_TYPE_SEQWRITE_REQ: case BLK_ZONE_TYPE_SEQWRITE_PREF: - if (!q->seq_zones_wlock) { - q->seq_zones_wlock = - kcalloc(BITS_TO_LONGS(q->nr_zones), + if (!disk->seq_zones_wlock) { + disk->seq_zones_wlock = + kcalloc(BITS_TO_LONGS(disk->nr_zones), sizeof(unsigned long), GFP_NOIO); - if (!q->seq_zones_wlock) + if (!disk->seq_zones_wlock) return -ENOMEM; } if (!md->zwp_offset) { md->zwp_offset = - kvcalloc(q->nr_zones, sizeof(unsigned int), + kvcalloc(disk->nr_zones, sizeof(unsigned int), GFP_KERNEL); if (!md->zwp_offset) return -ENOMEM; @@ -228,7 +226,7 @@ static int dm_zone_revalidate_cb(struct blk_zone *zone, unsigned int idx, */ static int dm_revalidate_zones(struct mapped_device *md, struct dm_table *t) { - struct request_queue *q = md->queue; + struct gendisk *disk = md->disk; unsigned int noio_flag; int ret; @@ -236,7 +234,7 @@ static int dm_revalidate_zones(struct mapped_device *md, struct dm_table *t) * Check if something changed. If yes, cleanup the current resources * and reallocate everything. */ - if (!q->nr_zones || q->nr_zones != md->nr_zones) + if (!disk->nr_zones || disk->nr_zones != md->nr_zones) dm_cleanup_zoned_dev(md); if (md->nr_zones) return 0; @@ -246,17 +244,17 @@ static int dm_revalidate_zones(struct mapped_device *md, struct dm_table *t) * operations in this context are done as if GFP_NOIO was specified. */ noio_flag = memalloc_noio_save(); - ret = dm_blk_do_report_zones(md, t, 0, q->nr_zones, + ret = dm_blk_do_report_zones(md, t, 0, disk->nr_zones, dm_zone_revalidate_cb, md); memalloc_noio_restore(noio_flag); if (ret < 0) goto err; - if (ret != q->nr_zones) { + if (ret != disk->nr_zones) { ret = -EIO; goto err; } - md->nr_zones = q->nr_zones; + md->nr_zones = disk->nr_zones; return 0; @@ -301,7 +299,7 @@ int dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q) * correct value to be exposed in sysfs queue/nr_zones. 
*/ WARN_ON_ONCE(queue_is_mq(q)); - q->nr_zones = bdev_nr_zones(md->disk->part0); + md->disk->nr_zones = bdev_nr_zones(md->disk->part0); /* Check if zone append is natively supported */ if (dm_table_supports_zone_append(t)) { @@ -466,26 +464,26 @@ static blk_status_t dm_zone_map_bio_end(struct mapped_device *md, unsigned int z } } -static inline void dm_zone_lock(struct request_queue *q, - unsigned int zno, struct bio *clone) +static inline void dm_zone_lock(struct gendisk *disk, unsigned int zno, + struct bio *clone) { if (WARN_ON_ONCE(bio_flagged(clone, BIO_ZONE_WRITE_LOCKED))) return; - wait_on_bit_lock_io(q->seq_zones_wlock, zno, TASK_UNINTERRUPTIBLE); + wait_on_bit_lock_io(disk->seq_zones_wlock, zno, TASK_UNINTERRUPTIBLE); bio_set_flag(clone, BIO_ZONE_WRITE_LOCKED); } -static inline void dm_zone_unlock(struct request_queue *q, - unsigned int zno, struct bio *clone) +static inline void dm_zone_unlock(struct gendisk *disk, unsigned int zno, + struct bio *clone) { if (!bio_flagged(clone, BIO_ZONE_WRITE_LOCKED)) return; - WARN_ON_ONCE(!test_bit(zno, q->seq_zones_wlock)); - clear_bit_unlock(zno, q->seq_zones_wlock); + WARN_ON_ONCE(!test_bit(zno, disk->seq_zones_wlock)); + clear_bit_unlock(zno, disk->seq_zones_wlock); smp_mb__after_atomic(); - wake_up_bit(q->seq_zones_wlock, zno); + wake_up_bit(disk->seq_zones_wlock, zno); bio_clear_flag(clone, BIO_ZONE_WRITE_LOCKED); } @@ -520,7 +518,6 @@ int dm_zone_map_bio(struct dm_target_io *tio) struct dm_io *io = tio->io; struct dm_target *ti = tio->ti; struct mapped_device *md = io->md; - struct request_queue *q = md->queue; struct bio *clone = &tio->clone; struct orig_bio_details orig_bio_details; unsigned int zno; @@ -536,7 +533,7 @@ int dm_zone_map_bio(struct dm_target_io *tio) /* Lock the target zone */ zno = bio_zone_no(clone); - dm_zone_lock(q, zno, clone); + dm_zone_lock(md->disk, zno, clone); orig_bio_details.nr_sectors = bio_sectors(clone); orig_bio_details.op = bio_op(clone); @@ -546,7 +543,7 @@ int dm_zone_map_bio(struct dm_target_io *tio) * both valid, and if the bio is a zone append, remap it to a write. 
*/ if (!dm_zone_map_bio_begin(md, zno, clone)) { - dm_zone_unlock(q, zno, clone); + dm_zone_unlock(md->disk, zno, clone); return DM_MAPIO_KILL; } @@ -570,12 +567,12 @@ int dm_zone_map_bio(struct dm_target_io *tio) sts = dm_zone_map_bio_end(md, zno, &orig_bio_details, *tio->len_ptr); if (sts != BLK_STS_OK) - dm_zone_unlock(q, zno, clone); + dm_zone_unlock(md->disk, zno, clone); break; case DM_MAPIO_REQUEUE: case DM_MAPIO_KILL: default: - dm_zone_unlock(q, zno, clone); + dm_zone_unlock(md->disk, zno, clone); sts = BLK_STS_IOERR; break; } @@ -592,7 +589,6 @@ int dm_zone_map_bio(struct dm_target_io *tio) void dm_zone_endio(struct dm_io *io, struct bio *clone) { struct mapped_device *md = io->md; - struct request_queue *q = md->queue; struct gendisk *disk = md->disk; struct bio *orig_bio = io->orig_bio; unsigned int zwp_offset; @@ -651,5 +647,5 @@ void dm_zone_endio(struct dm_io *io, struct bio *clone) zwp_offset - bio_sectors(orig_bio); } - dm_zone_unlock(q, zno, clone); + dm_zone_unlock(disk, zno, clone); } diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c index ccf9a6da8f6e1..f26640ccb9555 100644 --- a/drivers/nvme/host/multipath.c +++ b/drivers/nvme/host/multipath.c @@ -830,7 +830,7 @@ void nvme_mpath_add_disk(struct nvme_ns *ns, struct nvme_id_ns *id) ns->head->disk->queue); #ifdef CONFIG_BLK_DEV_ZONED if (blk_queue_is_zoned(ns->queue) && ns->head->disk) - ns->head->disk->queue->nr_zones = ns->queue->nr_zones; + ns->head->disk->nr_zones = ns->disk->nr_zones; #endif } diff --git a/drivers/nvme/target/zns.c b/drivers/nvme/target/zns.c index 9d8717126ab31..c0ee21fcab816 100644 --- a/drivers/nvme/target/zns.c +++ b/drivers/nvme/target/zns.c @@ -57,7 +57,7 @@ bool nvmet_bdev_zns_enable(struct nvmet_ns *ns) * zones, reject the device. Otherwise, use report zones to detect if * the device has conventional zones. 
*/ - if (ns->bdev->bd_disk->queue->conv_zones_bitmap) + if (ns->bdev->bd_disk->conv_zones_bitmap) return false; ret = blkdev_report_zones(ns->bdev, 0, bdev_nr_zones(ns->bdev), @@ -414,7 +414,7 @@ static u16 nvmet_bdev_zone_mgmt_emulate_all(struct nvmet_req *req) } while (sector < bdev_nr_sectors(bdev)) { - if (test_bit(blk_queue_zone_no(q, sector), d.zbitmap)) { + if (test_bit(disk_zone_no(bdev->bd_disk, sector), d.zbitmap)) { bio = blk_next_bio(bio, bdev, 0, zsa_req_op(req->cmd->zms.zsa) | REQ_SYNC, GFP_KERNEL); diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c index b4106f8997342..b8c97456506ac 100644 --- a/drivers/scsi/sd_zbc.c +++ b/drivers/scsi/sd_zbc.c @@ -855,7 +855,7 @@ int sd_zbc_revalidate_zones(struct scsi_disk *sdkp) if (sdkp->zone_info.zone_blocks == zone_blocks && sdkp->zone_info.nr_zones == nr_zones && - disk->queue->nr_zones == nr_zones) + disk->nr_zones == nr_zones) goto unlock; flags = memalloc_noio_save(); diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h index 43aad0da3305d..1b0b753609975 100644 --- a/include/linux/blk-mq.h +++ b/include/linux/blk-mq.h @@ -1123,12 +1123,12 @@ void blk_dump_rq_flags(struct request *, char *); #ifdef CONFIG_BLK_DEV_ZONED static inline unsigned int blk_rq_zone_no(struct request *rq) { - return blk_queue_zone_no(rq->q, blk_rq_pos(rq)); + return disk_zone_no(rq->q->disk, blk_rq_pos(rq)); } static inline unsigned int blk_rq_zone_is_seq(struct request *rq) { - return blk_queue_zone_is_seq(rq->q, blk_rq_pos(rq)); + return disk_zone_is_seq(rq->q->disk, blk_rq_pos(rq)); } bool blk_req_needs_zone_write_lock(struct request *rq); @@ -1150,8 +1150,8 @@ static inline void blk_req_zone_write_unlock(struct request *rq) static inline bool blk_req_zone_is_write_locked(struct request *rq) { - return rq->q->seq_zones_wlock && - test_bit(blk_rq_zone_no(rq), rq->q->seq_zones_wlock); + return rq->q->disk->seq_zones_wlock && + test_bit(blk_rq_zone_no(rq), rq->q->disk->seq_zones_wlock); } static inline bool blk_req_can_dispatch_to_zone(struct request *rq) diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 21b97f7115dcb..22c477fadc0f3 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -164,6 +164,29 @@ struct gendisk { #ifdef CONFIG_BLK_DEV_INTEGRITY struct kobject integrity_kobj; #endif /* CONFIG_BLK_DEV_INTEGRITY */ + +#ifdef CONFIG_BLK_DEV_ZONED + /* + * Zoned block device information for request dispatch control. + * nr_zones is the total number of zones of the device. This is always + * 0 for regular block devices. conv_zones_bitmap is a bitmap of nr_zones + * bits which indicates if a zone is conventional (bit set) or + * sequential (bit clear). seq_zones_wlock is a bitmap of nr_zones + * bits which indicates if a zone is write locked, that is, if a write + * request targeting the zone was dispatched. + * + * Reads of this information must be protected with blk_queue_enter() / + * blk_queue_exit(). Modifying this information is only allowed while + * no requests are being processed. See also blk_mq_freeze_queue() and + * blk_mq_unfreeze_queue(). + */ + unsigned int nr_zones; + unsigned int max_open_zones; + unsigned int max_active_zones; + unsigned long *conv_zones_bitmap; + unsigned long *seq_zones_wlock; +#endif /* CONFIG_BLK_DEV_ZONED */ + #if IS_ENABLED(CONFIG_CDROM) struct cdrom_device_info *cdi; #endif @@ -467,31 +490,6 @@ struct request_queue { unsigned int required_elevator_features; -#ifdef CONFIG_BLK_DEV_ZONED - /* - * Zoned block device information for request dispatch control. 
- * nr_zones is the total number of zones of the device. This is always - * 0 for regular block devices. conv_zones_bitmap is a bitmap of nr_zones - * bits which indicates if a zone is conventional (bit set) or - * sequential (bit clear). seq_zones_wlock is a bitmap of nr_zones - * bits which indicates if a zone is write locked, that is, if a write - * request targeting the zone was dispatched. All three fields are - * initialized by the low level device driver (e.g. scsi/sd.c). - * Stacking drivers (device mappers) may or may not initialize - * these fields. - * - * Reads of this information must be protected with blk_queue_enter() / - * blk_queue_exit(). Modifying this information is only allowed while - * no requests are being processed. See also blk_mq_freeze_queue() and - * blk_mq_unfreeze_queue(). - */ - unsigned int nr_zones; - unsigned long *conv_zones_bitmap; - unsigned long *seq_zones_wlock; - unsigned int max_open_zones; - unsigned int max_active_zones; -#endif /* CONFIG_BLK_DEV_ZONED */ - int node; #ifdef CONFIG_BLK_DEV_IO_TRACE struct blk_trace __rcu *blk_trace; @@ -668,63 +666,59 @@ static inline bool blk_queue_is_zoned(struct request_queue *q) } #ifdef CONFIG_BLK_DEV_ZONED -static inline unsigned int blk_queue_nr_zones(struct request_queue *q) +static inline unsigned int disk_nr_zones(struct gendisk *disk) { - return blk_queue_is_zoned(q) ? q->nr_zones : 0; + return blk_queue_is_zoned(disk->queue) ? disk->nr_zones : 0; } -static inline unsigned int blk_queue_zone_no(struct request_queue *q, - sector_t sector) +static inline unsigned int disk_zone_no(struct gendisk *disk, sector_t sector) { - if (!blk_queue_is_zoned(q)) + if (!blk_queue_is_zoned(disk->queue)) return 0; - return sector >> ilog2(q->limits.chunk_sectors); + return sector >> ilog2(disk->queue->limits.chunk_sectors); } -static inline bool blk_queue_zone_is_seq(struct request_queue *q, - sector_t sector) +static inline bool disk_zone_is_seq(struct gendisk *disk, sector_t sector) { - if (!blk_queue_is_zoned(q)) + if (!blk_queue_is_zoned(disk->queue)) return false; - if (!q->conv_zones_bitmap) + if (!disk->conv_zones_bitmap) return true; - return !test_bit(blk_queue_zone_no(q, sector), q->conv_zones_bitmap); + return !test_bit(disk_zone_no(disk, sector), disk->conv_zones_bitmap); } static inline void disk_set_max_open_zones(struct gendisk *disk, unsigned int max_open_zones) { - disk->queue->max_open_zones = max_open_zones; + disk->max_open_zones = max_open_zones; } static inline void disk_set_max_active_zones(struct gendisk *disk, unsigned int max_active_zones) { - disk->queue->max_active_zones = max_active_zones; + disk->max_active_zones = max_active_zones; } static inline unsigned int bdev_max_open_zones(struct block_device *bdev) { - return bdev->bd_disk->queue->max_open_zones; + return bdev->bd_disk->max_open_zones; } static inline unsigned int bdev_max_active_zones(struct block_device *bdev) { - return bdev->bd_disk->queue->max_active_zones; + return bdev->bd_disk->max_active_zones; } #else /* CONFIG_BLK_DEV_ZONED */ -static inline unsigned int blk_queue_nr_zones(struct request_queue *q) +static inline unsigned int disk_nr_zones(struct gendisk *disk) { return 0; } -static inline bool blk_queue_zone_is_seq(struct request_queue *q, - sector_t sector) +static inline bool disk_zone_is_seq(struct gendisk *disk, sector_t sector) { return false; } -static inline unsigned int blk_queue_zone_no(struct request_queue *q, - sector_t sector) +static inline unsigned int disk_zone_no(struct gendisk *disk, sector_t sector) { 
return 0; } @@ -732,6 +726,7 @@ static inline unsigned int bdev_max_open_zones(struct block_device *bdev) { return 0; } + static inline unsigned int bdev_max_active_zones(struct block_device *bdev) { return 0; @@ -900,14 +895,12 @@ const char *blk_zone_cond_str(enum blk_zone_cond zone_cond); static inline unsigned int bio_zone_no(struct bio *bio) { - return blk_queue_zone_no(bdev_get_queue(bio->bi_bdev), - bio->bi_iter.bi_sector); + return disk_zone_no(bio->bi_bdev->bd_disk, bio->bi_iter.bi_sector); } static inline unsigned int bio_zone_is_seq(struct bio *bio) { - return blk_queue_zone_is_seq(bdev_get_queue(bio->bi_bdev), - bio->bi_iter.bi_sector); + return disk_zone_is_seq(bio->bi_bdev->bd_disk, bio->bi_iter.bi_sector); } /*