From patchwork Wed Jan 31 13:03:47 2024
From: Christoph Hellwig
To: Jens Axboe
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Paolo Bonzini,
    Stefan Hajnoczi, "Martin K. Petersen", Damien Le Moal, Keith Busch,
    Sagi Grimberg, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, virtualization@lists.linux.dev,
    Hannes Reinecke
Subject: [PATCH 01/14] block: move max_{open,active}_zones to struct queue_limits
Date: Wed, 31 Jan 2024 14:03:47 +0100
Message-Id: <20240131130400.625836-2-hch@lst.de>
In-Reply-To: <20240131130400.625836-1-hch@lst.de>
References: <20240131130400.625836-1-hch@lst.de>

The maximum number of open and active zones is a limit on the queue
and should be placed there so that we can include it in the upcoming
queue limits batch update API.

Signed-off-by: Christoph Hellwig
Reviewed-by: Damien Le Moal
Reviewed-by: Hannes Reinecke
Reviewed-by: Chaitanya Kulkarni
---
 include/linux/blkdev.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 99e4f5e722132c..4a2e82c7971c86 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -189,8 +189,6 @@ struct gendisk {
 	 * blk_mq_unfreeze_queue().
 	 */
 	unsigned int		nr_zones;
-	unsigned int		max_open_zones;
-	unsigned int		max_active_zones;
 	unsigned long		*conv_zones_bitmap;
 	unsigned long		*seq_zones_wlock;
 #endif /* CONFIG_BLK_DEV_ZONED */
@@ -307,6 +305,8 @@ struct queue_limits {
 	unsigned char		discard_misaligned;
 	unsigned char		raid_partial_stripes_expensive;
 	bool			zoned;
+	unsigned int		max_open_zones;
+	unsigned int		max_active_zones;
 
 	/*
	 * Drivers that set dma_alignment to less than 511 must be prepared to
@@ -639,23 +639,23 @@ static inline bool disk_zone_is_seq(struct gendisk *disk, sector_t sector)
 static inline void disk_set_max_open_zones(struct gendisk *disk,
 		unsigned int max_open_zones)
 {
-	disk->max_open_zones = max_open_zones;
+	disk->queue->limits.max_open_zones = max_open_zones;
 }
 
 static inline void disk_set_max_active_zones(struct gendisk *disk,
 		unsigned int max_active_zones)
 {
-	disk->max_active_zones = max_active_zones;
+	disk->queue->limits.max_active_zones = max_active_zones;
 }
 
 static inline unsigned int bdev_max_open_zones(struct block_device *bdev)
 {
-	return bdev->bd_disk->max_open_zones;
+	return bdev->bd_disk->queue->limits.max_open_zones;
 }
 
 static inline unsigned int bdev_max_active_zones(struct block_device *bdev)
 {
-	return bdev->bd_disk->max_active_zones;
+	return bdev->bd_disk->queue->limits.max_active_zones;
 }
 
 #else /* CONFIG_BLK_DEV_ZONED */
From patchwork Wed Jan 31 13:03:48 2024
From: Christoph Hellwig
To: Jens Axboe
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Paolo Bonzini,
    Stefan Hajnoczi, "Martin K. Petersen", Damien Le Moal, Keith Busch,
    Sagi Grimberg, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, virtualization@lists.linux.dev,
    Hannes Reinecke
Subject: [PATCH 02/14] block: refactor disk_update_readahead
Date: Wed, 31 Jan 2024 14:03:48 +0100
Message-Id: <20240131130400.625836-3-hch@lst.de>
In-Reply-To: <20240131130400.625836-1-hch@lst.de>
References: <20240131130400.625836-1-hch@lst.de>

Factor out a blk_apply_bdi_limits helper that can be used with an
explicit queue_limits argument, which will be useful later.

Signed-off-by: Christoph Hellwig
Reviewed-by: Damien Le Moal
Reviewed-by: Hannes Reinecke
Reviewed-by: Chaitanya Kulkarni
---
 block/blk-settings.c | 21 ++++++++++++---------
 1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 06ea91e51b8b2e..f16d3fec6658e5 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -85,6 +85,17 @@ void blk_set_stacking_limits(struct queue_limits *lim)
 }
 EXPORT_SYMBOL(blk_set_stacking_limits);
 
+static void blk_apply_bdi_limits(struct backing_dev_info *bdi,
+		struct queue_limits *lim)
+{
+	/*
+	 * For read-ahead of large files to be effective, we need to read ahead
+	 * at least twice the optimal I/O size.
+	 */
+	bdi->ra_pages = max(lim->io_opt * 2 / PAGE_SIZE, VM_READAHEAD_PAGES);
+	bdi->io_pages = lim->max_sectors >> PAGE_SECTORS_SHIFT;
+}
+
 /**
  * blk_queue_bounce_limit - set bounce buffer limit for queue
  * @q: the request queue for the device
@@ -393,15 +404,7 @@ EXPORT_SYMBOL(blk_queue_alignment_offset);
 
 void disk_update_readahead(struct gendisk *disk)
 {
-	struct request_queue *q = disk->queue;
-
-	/*
-	 * For read-ahead of large files to be effective, we need to read ahead
-	 * at least twice the optimal I/O size.
-	 */
-	disk->bdi->ra_pages =
-		max(queue_io_opt(q) * 2 / PAGE_SIZE, VM_READAHEAD_PAGES);
-	disk->bdi->io_pages = queue_max_sectors(q) >> (PAGE_SHIFT - 9);
+	blk_apply_bdi_limits(disk->bdi, &disk->queue->limits);
 }
 EXPORT_SYMBOL_GPL(disk_update_readahead);
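To make the ra_pages formula concrete, here is a small standalone sketch of
the same arithmetic; VM_READAHEAD_PAGES is 128 KiB worth of pages in the
kernel, and the 512 KiB io_opt is an assumed example value:

	#include <stdio.h>

	#define PAGE_SIZE		4096UL
	#define VM_READAHEAD_PAGES	(128UL * 1024 / PAGE_SIZE)	/* 32 pages */

	int main(void)
	{
		unsigned long io_opt = 512UL * 1024;	/* assumed optimal I/O size */
		unsigned long ra_pages = io_opt * 2 / PAGE_SIZE;

		if (ra_pages < VM_READAHEAD_PAGES)
			ra_pages = VM_READAHEAD_PAGES;

		/* 512 KiB io_opt -> max(256, 32) = 256 pages = 1 MiB of read-ahead */
		printf("ra_pages = %lu (%lu KiB)\n",
		       ra_pages, ra_pages * PAGE_SIZE / 1024);
		return 0;
	}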
From patchwork Wed Jan 31 13:03:49 2024
From: Christoph Hellwig
To: Jens Axboe
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Paolo Bonzini,
    Stefan Hajnoczi, "Martin K. Petersen", Damien Le Moal, Keith Busch,
    Sagi Grimberg, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, virtualization@lists.linux.dev,
    John Garry, Hannes Reinecke
Subject: [PATCH 03/14] block: add an API to atomically update queue limits
Date: Wed, 31 Jan 2024 14:03:49 +0100
Message-Id: <20240131130400.625836-4-hch@lst.de>
In-Reply-To: <20240131130400.625836-1-hch@lst.de>
References: <20240131130400.625836-1-hch@lst.de>

Add a new queue_limits_{start,commit}_update pair of functions that
allows taking an atomic snapshot of the queue limits, updating it, and
committing it if it passes validity checking.

Signed-off-by: Christoph Hellwig
Reviewed-by: John Garry
Reviewed-by: Hannes Reinecke
Reviewed-by: Chaitanya Kulkarni
---
 block/blk-core.c       |   1 +
 block/blk-settings.c   | 182 +++++++++++++++++++++++++++++++++++++++++
 block/blk.h            |   1 +
 include/linux/blkdev.h |  23 ++++++
 4 files changed, 207 insertions(+)

diff --git a/block/blk-core.c b/block/blk-core.c
index 11342af420d0c4..09f4a44a4aa3cc 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -424,6 +424,7 @@ struct request_queue *blk_alloc_queue(int node_id)
 	mutex_init(&q->debugfs_mutex);
 	mutex_init(&q->sysfs_lock);
 	mutex_init(&q->sysfs_dir_lock);
+	mutex_init(&q->limits_lock);
 	mutex_init(&q->rq_qos_mutex);
 	spin_lock_init(&q->queue_lock);
 
diff --git a/block/blk-settings.c b/block/blk-settings.c
index f16d3fec6658e5..0706367f6f014f 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -96,6 +96,188 @@ static void blk_apply_bdi_limits(struct backing_dev_info *bdi,
 	bdi->io_pages = lim->max_sectors >> PAGE_SECTORS_SHIFT;
 }
 
+static int blk_validate_zoned_limits(struct queue_limits *lim)
+{
+	if (!lim->zoned) {
+		if (WARN_ON_ONCE(lim->max_open_zones) ||
+		    WARN_ON_ONCE(lim->max_active_zones) ||
+		    WARN_ON_ONCE(lim->zone_write_granularity) ||
+		    WARN_ON_ONCE(lim->max_zone_append_sectors))
+			return -EINVAL;
+		return 0;
+	}
+
+	if (WARN_ON_ONCE(!IS_ENABLED(CONFIG_BLK_DEV_ZONED)))
+		return -EINVAL;
+
+	if (lim->zone_write_granularity < lim->logical_block_size)
+		lim->zone_write_granularity = lim->logical_block_size;
+
+	if (lim->max_zone_append_sectors) {
+		/*
+		 * The Zone Append size is limited by the maximum I/O size
+		 * and the zone size given that it can't span zones.
+		 */
+		lim->max_zone_append_sectors =
+			min3(lim->max_hw_sectors,
+			     lim->max_zone_append_sectors,
+			     lim->chunk_sectors);
+	}
+
+	return 0;
+}
+
+/*
+ * Check that the limits in lim are valid, initialize defaults for unset
+ * values, and cap values based on others where needed.
+ */
+int blk_validate_limits(struct queue_limits *lim)
+{
+	unsigned int max_hw_sectors;
+
+	/*
+	 * Unless otherwise specified, default to 512 byte logical blocks and a
+	 * physical block size equal to the logical block size.
+	 */
+	if (!lim->logical_block_size)
+		lim->logical_block_size = SECTOR_SIZE;
+	if (lim->physical_block_size < lim->logical_block_size)
+		lim->physical_block_size = lim->logical_block_size;
+
+	/*
+	 * The minimum I/O size defaults to the physical block size unless
+	 * explicitly overridden.
+	 */
+	if (lim->io_min < lim->physical_block_size)
+		lim->io_min = lim->physical_block_size;
+
+	/*
+	 * max_hw_sectors has a somewhat weird default for historical reasons,
+	 * but drivers really should set their own instead of relying on this
+	 * value.
+	 *
+	 * The block layer relies on the fact that every driver can
+	 * handle at least a page worth of data per I/O, and needs the value
+	 * aligned to the logical block size.
+	 */
+	if (!lim->max_hw_sectors)
+		lim->max_hw_sectors = BLK_SAFE_MAX_SECTORS;
+	if (WARN_ON_ONCE(lim->max_hw_sectors < PAGE_SECTORS))
+		return -EINVAL;
+	lim->max_hw_sectors = round_down(lim->max_hw_sectors,
+			lim->logical_block_size >> SECTOR_SHIFT);
+
+	/*
+	 * The actual max_sectors value is a complex beast and also takes the
+	 * max_dev_sectors value (set by SCSI ULPs) and a user configurable
+	 * value into account.  The ->max_sectors value is always calculated
+	 * from these, so directly setting it won't have any effect.
+	 */
+	max_hw_sectors = min_not_zero(lim->max_hw_sectors,
+			lim->max_dev_sectors);
+	if (lim->max_user_sectors) {
+		if (lim->max_user_sectors > max_hw_sectors ||
+		    lim->max_user_sectors < PAGE_SIZE / SECTOR_SIZE)
+			return -EINVAL;
+		lim->max_sectors = min(max_hw_sectors, lim->max_user_sectors);
+	} else {
+		lim->max_sectors = min(max_hw_sectors, BLK_DEF_MAX_SECTORS_CAP);
+	}
+	lim->max_sectors = round_down(lim->max_sectors,
+			lim->logical_block_size >> SECTOR_SHIFT);
+
+	/*
+	 * Random default for the maximum number of segments.  Drivers should
+	 * not rely on this and should set their own.
+	 */
+	if (!lim->max_segments)
+		lim->max_segments = BLK_MAX_SEGMENTS;
+
+	lim->max_discard_sectors = lim->max_hw_discard_sectors;
+	if (!lim->max_discard_segments)
+		lim->max_discard_segments = 1;
+
+	if (lim->discard_granularity < lim->physical_block_size)
+		lim->discard_granularity = lim->physical_block_size;
+
+	/*
+	 * By default there is no limit on the segment boundary alignment,
+	 * but if there is one it can't be smaller than the page size as
+	 * that would break all the normal I/O patterns.
+	 */
+	if (!lim->seg_boundary_mask)
+		lim->seg_boundary_mask = BLK_SEG_BOUNDARY_MASK;
+	if (WARN_ON_ONCE(lim->seg_boundary_mask < PAGE_SIZE - 1))
+		return -EINVAL;
+
+	/*
+	 * The maximum segment size has an odd historic 64k default that
+	 * drivers probably should override.  Just like the I/O size we
+	 * require drivers to at least handle a full page per segment.
+	 */
+	if (!lim->max_segment_size)
+		lim->max_segment_size = BLK_MAX_SEGMENT_SIZE;
+	if (WARN_ON_ONCE(lim->max_segment_size < PAGE_SIZE))
+		return -EINVAL;
+
+	/*
+	 * Devices that require a virtual boundary do not support scatter/gather
+	 * I/O natively, but instead require a descriptor list entry for each
+	 * page (which might not be identical to the Linux PAGE_SIZE).  Because
+	 * of that they are not limited by our notion of "segment size".
+	 */
+	if (lim->virt_boundary_mask) {
+		if (WARN_ON_ONCE(lim->max_segment_size &&
+				 lim->max_segment_size != UINT_MAX))
+			return -EINVAL;
+		lim->max_segment_size = UINT_MAX;
+	}
+
+	/*
+	 * We require drivers to at least do logical block aligned I/O, but
+	 * historically could not check for that due to the separate calls
+	 * to set the limits.  Once the transition is finished the check
+	 * below should be narrowed down to check the logical block size.
+	 */
+	if (!lim->dma_alignment)
+		lim->dma_alignment = SECTOR_SIZE - 1;
+	if (WARN_ON_ONCE(lim->dma_alignment > PAGE_SIZE))
+		return -EINVAL;
+
+	if (lim->alignment_offset) {
+		lim->alignment_offset &= (lim->physical_block_size - 1);
+		lim->misaligned = 0;
+	}
+
+	return blk_validate_zoned_limits(lim);
+}
+
+/**
+ * queue_limits_commit_update - commit an atomic update of queue limits
+ * @q:		queue to update
+ * @lim:	limits to apply
+ *
+ * Apply the limits in @lim that were obtained from queue_limits_start_update()
+ * and updated by the caller to @q.
+ *
+ * Returns 0 if successful, else a negative error code.
+ */
+int queue_limits_commit_update(struct request_queue *q,
+		struct queue_limits *lim)
+	__releases(q->limits_lock)
+{
+	int error = blk_validate_limits(lim);
+
+	if (!error) {
+		q->limits = *lim;
+		if (q->disk)
+			blk_apply_bdi_limits(q->disk->bdi, lim);
+	}
+	mutex_unlock(&q->limits_lock);
+	return error;
+}
+EXPORT_SYMBOL_GPL(queue_limits_commit_update);
+
 /**
  * blk_queue_bounce_limit - set bounce buffer limit for queue
  * @q: the request queue for the device
diff --git a/block/blk.h b/block/blk.h
index 1ef920f72e0f87..58b5dbac2a487d 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -447,6 +447,7 @@ static inline void bio_release_page(struct bio *bio, struct page *page)
 		unpin_user_page(page);
 }
 
+int blk_validate_limits(struct queue_limits *lim);
 struct request_queue *blk_alloc_queue(int node_id);
 
 int disk_scan_partitions(struct gendisk *disk, blk_mode_t mode);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 4a2e82c7971c86..5b5d3b238de1e7 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -473,6 +473,7 @@ struct request_queue {
 
 	struct mutex		sysfs_lock;
 	struct mutex		sysfs_dir_lock;
+	struct mutex		limits_lock;
 
 	/*
	 * for reusing dead hctx instance in case of updating
@@ -861,6 +862,28 @@ static inline unsigned int blk_chunk_sectors_left(sector_t offset,
 	return chunk_sectors - (offset & (chunk_sectors - 1));
 }
 
+/**
+ * queue_limits_start_update - start an atomic update of queue limits
+ * @q:		queue to update
+ *
+ * This function starts an atomic update of the queue limits.  It takes a lock
+ * to prevent other updates and returns a snapshot of the current limits that
+ * the caller can modify.  The caller must call queue_limits_commit_update()
+ * to finish the update.
+ *
+ * Context: process context.  The caller must have frozen the queue or ensured
+ * that there is no outstanding I/O by other means.
+ */
+static inline struct queue_limits
+queue_limits_start_update(struct request_queue *q)
+	__acquires(q->limits_lock)
+{
+	mutex_lock(&q->limits_lock);
+	return q->limits;
+}
+int queue_limits_commit_update(struct request_queue *q,
+		struct queue_limits *lim);
+
 /*
  * Access functions for manipulating queue properties
  */
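The intended calling convention, which the sysfs conversions later in this
series follow, looks roughly like this (a sketch only; the wrapper name and
the particular limit being changed are illustrative, not from the patch):

	static int example_set_discard_limit(struct request_queue *q,
			unsigned int max_discard_sectors)
	{
		struct queue_limits lim;
		int err;

		blk_mq_freeze_queue(q);
		lim = queue_limits_start_update(q);	/* takes q->limits_lock */
		lim.max_hw_discard_sectors = max_discard_sectors;
		/* validates the snapshot, applies it and drops q->limits_lock */
		err = queue_limits_commit_update(q, &lim);
		blk_mq_unfreeze_queue(q);
		return err;
	}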
Petersen" , Damien Le Moal , Keith Busch , Sagi Grimberg , linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, virtualization@lists.linux.dev, John Garry , Hannes Reinecke Subject: [PATCH 04/14] block: use queue_limits_commit_update in queue_max_sectors_store Date: Wed, 31 Jan 2024 14:03:50 +0100 Message-Id: <20240131130400.625836-5-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240131130400.625836-1-hch@lst.de> References: <20240131130400.625836-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Convert queue_max_sectors_store to use queue_limits_commit_update to check and update the max_sectors limit and freeze the queue before doing so to ensure we don't have requests in flight while changing the limits. Note that this removes the previously held queue_lock that doesn't protect against any other reader or writer. Signed-off-by: Christoph Hellwig Reviewed-by: John Garry Reviewed-by: Damien Le Moal Reviewed-by: Hannes Reinecke Reviewed-by: Chaitanya Kulkarni --- block/blk-sysfs.c | 37 ++++++++++++------------------------- 1 file changed, 12 insertions(+), 25 deletions(-) diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c index 6b2429cad81af1..26607f9825cb05 100644 --- a/block/blk-sysfs.c +++ b/block/blk-sysfs.c @@ -226,35 +226,22 @@ static ssize_t queue_zone_append_max_show(struct request_queue *q, char *page) static ssize_t queue_max_sectors_store(struct request_queue *q, const char *page, size_t count) { - unsigned long var; - unsigned int max_sectors_kb, - max_hw_sectors_kb = queue_max_hw_sectors(q) >> 1, - page_kb = 1 << (PAGE_SHIFT - 10); - ssize_t ret = queue_var_store(&var, page, count); + unsigned long max_sectors_kb; + struct queue_limits lim; + ssize_t ret; + int err; + ret = queue_var_store(&max_sectors_kb, page, count); if (ret < 0) return ret; - max_sectors_kb = (unsigned int)var; - max_hw_sectors_kb = min_not_zero(max_hw_sectors_kb, - q->limits.max_dev_sectors >> 1); - if (max_sectors_kb == 0) { - q->limits.max_user_sectors = 0; - max_sectors_kb = min(max_hw_sectors_kb, - BLK_DEF_MAX_SECTORS_CAP >> 1); - } else { - if (max_sectors_kb > max_hw_sectors_kb || - max_sectors_kb < page_kb) - return -EINVAL; - q->limits.max_user_sectors = max_sectors_kb << 1; - } - - spin_lock_irq(&q->queue_lock); - q->limits.max_sectors = max_sectors_kb << 1; - if (q->disk) - q->disk->bdi->io_pages = max_sectors_kb >> (PAGE_SHIFT - 10); - spin_unlock_irq(&q->queue_lock); - + blk_mq_freeze_queue(q); + lim = queue_limits_start_update(q); + lim.max_user_sectors = max_sectors_kb << 1; + err = queue_limits_commit_update(q, &lim); + blk_mq_unfreeze_queue(q); + if (err) + return err; return ret; } From patchwork Wed Jan 31 13:03:51 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13539421 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8855979DD0; Wed, 31 Jan 2024 13:04:28 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.137.202.133 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706706270; cv=none; 
From patchwork Wed Jan 31 13:03:51 2024
From: Christoph Hellwig
To: Jens Axboe
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Paolo Bonzini,
    Stefan Hajnoczi, "Martin K. Petersen", Damien Le Moal, Keith Busch,
    Sagi Grimberg, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, virtualization@lists.linux.dev,
    Hannes Reinecke
Subject: [PATCH 05/14] block: add a max_user_discard_sectors queue limit
Date: Wed, 31 Jan 2024 14:03:51 +0100
Message-Id: <20240131130400.625836-6-hch@lst.de>
In-Reply-To: <20240131130400.625836-1-hch@lst.de>
References: <20240131130400.625836-1-hch@lst.de>

Add a new max_user_discard_sectors limit that mirrors max_user_sectors
and stores the value that the user manually set.  This allows updates
of max_hw_discard_sectors to proceed without having to worry about the
user limit.

Signed-off-by: Christoph Hellwig
Reviewed-by: Hannes Reinecke
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Keith Busch
---
 block/blk-settings.c   | 12 +++++++++---
 block/blk-sysfs.c      | 18 +++++++++---------
 include/linux/blkdev.h |  1 +
 3 files changed, 19 insertions(+), 12 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 0706367f6f014f..50d4a88b80161d 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -47,6 +47,7 @@ void blk_set_default_limits(struct queue_limits *lim)
 	lim->max_zone_append_sectors = 0;
 	lim->max_discard_sectors = 0;
 	lim->max_hw_discard_sectors = 0;
+	lim->max_user_discard_sectors = UINT_MAX;
 	lim->max_secure_erase_sectors = 0;
 	lim->discard_granularity = 512;
 	lim->discard_alignment = 0;
@@ -193,7 +194,9 @@ int blk_validate_limits(struct queue_limits *lim)
 	if (!lim->max_segments)
 		lim->max_segments = BLK_MAX_SEGMENTS;
 
-	lim->max_discard_sectors = lim->max_hw_discard_sectors;
+	lim->max_discard_sectors =
+		min(lim->max_hw_discard_sectors, lim->max_user_discard_sectors);
+
 	if (!lim->max_discard_segments)
 		lim->max_discard_segments = 1;
 
@@ -370,8 +373,11 @@ EXPORT_SYMBOL(blk_queue_chunk_sectors);
 void blk_queue_max_discard_sectors(struct request_queue *q,
 		unsigned int max_discard_sectors)
 {
-	q->limits.max_hw_discard_sectors = max_discard_sectors;
-	q->limits.max_discard_sectors = max_discard_sectors;
+	struct queue_limits *lim = &q->limits;
+
+	lim->max_hw_discard_sectors = max_discard_sectors;
+	lim->max_discard_sectors =
+		min(max_discard_sectors, lim->max_user_discard_sectors);
 }
 EXPORT_SYMBOL(blk_queue_max_discard_sectors);
 
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 26607f9825cb05..54e10604ddb1dd 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -174,23 +174,23 @@ static ssize_t queue_discard_max_show(struct request_queue *q, char *page)
 static ssize_t queue_discard_max_store(struct request_queue *q,
 				       const char *page, size_t count)
 {
-	unsigned long max_discard;
-	ssize_t ret = queue_var_store(&max_discard, page, count);
+	unsigned long max_discard_bytes;
+	ssize_t ret;
 
+	ret = queue_var_store(&max_discard_bytes, page, count);
 	if (ret < 0)
 		return ret;
 
-	if (max_discard & (q->limits.discard_granularity - 1))
+	if (max_discard_bytes & (q->limits.discard_granularity - 1))
 		return -EINVAL;
 
-	max_discard >>= 9;
-	if (max_discard > UINT_MAX)
+	if ((max_discard_bytes >> SECTOR_SHIFT) > UINT_MAX)
 		return -EINVAL;
 
-	if (max_discard > q->limits.max_hw_discard_sectors)
-		max_discard = q->limits.max_hw_discard_sectors;
-
-	q->limits.max_discard_sectors = max_discard;
+	q->limits.max_user_discard_sectors = max_discard_bytes >> SECTOR_SHIFT;
+	q->limits.max_discard_sectors =
+		min_not_zero(q->limits.max_hw_discard_sectors,
+			     q->limits.max_user_discard_sectors);
 	return ret;
 }
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 5b5d3b238de1e7..700ec5055b668d 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -290,6 +290,7 @@ struct queue_limits {
 	unsigned int		io_opt;
 	unsigned int		max_discard_sectors;
 	unsigned int		max_hw_discard_sectors;
+	unsigned int		max_user_discard_sectors;
 	unsigned int		max_secure_erase_sectors;
 	unsigned int		max_write_zeroes_sectors;
 	unsigned int		max_zone_append_sectors;
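The effective discard limit is now always min(hardware limit, user limit),
with the user limit defaulting to UINT_MAX ("unset").  A small standalone
sketch of the resulting behavior, with assumed sector counts:

	#include <stdio.h>

	/* min(hw, user), as blk_validate_limits now computes it */
	static unsigned int effective_discard(unsigned int hw, unsigned int user)
	{
		return hw < user ? hw : user;
	}

	int main(void)
	{
		unsigned int user = ~0U;	/* default: no user limit */

		printf("%u\n", effective_discard(8192, user));	/* 8192 */
		user = 2048;	/* user writes 1 MiB to discard_max_bytes */
		printf("%u\n", effective_discard(8192, user));	/* 2048 */
		/* a later hardware-limit update no longer clobbers the user cap */
		printf("%u\n", effective_discard(16384, user));	/* 2048 */
		return 0;
	}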
From patchwork Wed Jan 31 13:03:52 2024
From: Christoph Hellwig
To: Jens Axboe
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Paolo Bonzini,
    Stefan Hajnoczi, "Martin K. Petersen", Damien Le Moal, Keith Busch,
    Sagi Grimberg, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, virtualization@lists.linux.dev,
    Hannes Reinecke
Subject: [PATCH 06/14] block: use queue_limits_commit_update in queue_discard_max_store
Date: Wed, 31 Jan 2024 14:03:52 +0100
Message-Id: <20240131130400.625836-7-hch@lst.de>
In-Reply-To: <20240131130400.625836-1-hch@lst.de>
References: <20240131130400.625836-1-hch@lst.de>

Convert queue_discard_max_store to use queue_limits_commit_update to
check and update the max_discard_sectors limit, and freeze the queue
before doing so to ensure we don't have requests in flight while
changing the limits.

Signed-off-by: Christoph Hellwig
Reviewed-by: Damien Le Moal
Reviewed-by: Hannes Reinecke
Reviewed-by: Chaitanya Kulkarni
---
 block/blk-sysfs.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 54e10604ddb1dd..8c8f69d8ba48ee 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -175,7 +175,9 @@ static ssize_t queue_discard_max_store(struct request_queue *q,
 					const char *page, size_t count)
 {
 	unsigned long max_discard_bytes;
+	struct queue_limits lim;
 	ssize_t ret;
+	int err;
 
 	ret = queue_var_store(&max_discard_bytes, page, count);
 	if (ret < 0)
@@ -187,10 +189,14 @@ static ssize_t queue_discard_max_store(struct request_queue *q,
 	if ((max_discard_bytes >> SECTOR_SHIFT) > UINT_MAX)
 		return -EINVAL;
 
-	q->limits.max_user_discard_sectors = max_discard_bytes >> SECTOR_SHIFT;
-	q->limits.max_discard_sectors =
-		min_not_zero(q->limits.max_hw_discard_sectors,
-			     q->limits.max_user_discard_sectors);
+	blk_mq_freeze_queue(q);
+	lim = queue_limits_start_update(q);
+	lim.max_user_discard_sectors = max_discard_bytes >> SECTOR_SHIFT;
+	err = queue_limits_commit_update(q, &lim);
+	blk_mq_unfreeze_queue(q);
+
+	if (err)
+		return err;
 	return ret;
 }
From patchwork Wed Jan 31 13:03:53 2024
From: Christoph Hellwig
To: Jens Axboe
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Paolo Bonzini,
    Stefan Hajnoczi, "Martin K. Petersen", Damien Le Moal, Keith Busch,
    Sagi Grimberg, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, virtualization@lists.linux.dev,
    John Garry, Hannes Reinecke
Subject: [PATCH 07/14] block: pass a queue_limits argument to blk_alloc_queue
Date: Wed, 31 Jan 2024 14:03:53 +0100
Message-Id: <20240131130400.625836-8-hch@lst.de>
In-Reply-To: <20240131130400.625836-1-hch@lst.de>
References: <20240131130400.625836-1-hch@lst.de>

Pass a queue_limits to blk_alloc_queue and apply it if non-NULL.  This
will allow allocating queues with valid queue limits instead of setting
the values one at a time later.

Signed-off-by: Christoph Hellwig
Reviewed-by: John Garry
Reviewed-by: Damien Le Moal
Reviewed-by: Hannes Reinecke
Reviewed-by: Chaitanya Kulkarni
---
 block/blk-core.c | 30 ++++++++++++++++++++++--------
 block/blk-mq.c   |  6 +++---
 block/blk.h      |  2 +-
 block/genhd.c    |  4 ++--
 4 files changed, 28 insertions(+), 14 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 09f4a44a4aa3cc..26d3be06bcc0fd 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -393,24 +393,38 @@ static void blk_timeout_work(struct work_struct *work)
 {
 }
 
-struct request_queue *blk_alloc_queue(int node_id)
+struct request_queue *blk_alloc_queue(struct queue_limits *lim, int node_id)
 {
 	struct request_queue *q;
+	int error;
 
 	q = kmem_cache_alloc_node(blk_requestq_cachep, GFP_KERNEL | __GFP_ZERO,
 				  node_id);
 	if (!q)
-		return NULL;
+		return ERR_PTR(-ENOMEM);
 
 	q->last_merge = NULL;
 
+	if (lim) {
+		error = blk_validate_limits(lim);
+		if (error)
+			goto fail_q;
+		q->limits = *lim;
+	} else {
+		blk_set_default_limits(&q->limits);
+	}
+
 	q->id = ida_alloc(&blk_queue_ida, GFP_KERNEL);
-	if (q->id < 0)
+	if (q->id < 0) {
+		error = q->id;
 		goto fail_q;
+	}
 
 	q->stats = blk_alloc_queue_stats();
-	if (!q->stats)
+	if (!q->stats) {
+		error = -ENOMEM;
 		goto fail_id;
+	}
 
 	q->node = node_id;
 
@@ -435,12 +449,12 @@ struct request_queue *blk_alloc_queue(int node_id)
 	 * Init percpu_ref in atomic mode so that it's faster to shutdown.
 	 * See blk_register_queue() for details.
 	 */
-	if (percpu_ref_init(&q->q_usage_counter,
+	error = percpu_ref_init(&q->q_usage_counter,
 				blk_queue_usage_counter_release,
-				PERCPU_REF_INIT_ATOMIC, GFP_KERNEL))
+				PERCPU_REF_INIT_ATOMIC, GFP_KERNEL);
+	if (error)
 		goto fail_stats;
 
-	blk_set_default_limits(&q->limits);
 	q->nr_requests = BLKDEV_DEFAULT_RQ;
 
 	return q;
@@ -451,7 +465,7 @@ struct request_queue *blk_alloc_queue(int node_id)
 	ida_free(&blk_queue_ida, q->id);
 fail_q:
 	kmem_cache_free(blk_requestq_cachep, q);
-	return NULL;
+	return ERR_PTR(error);
 }
 
 /**
diff --git a/block/blk-mq.c b/block/blk-mq.c
index aa87fcfda1ecfc..2ddbefdeae93e4 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -4092,9 +4092,9 @@ static struct request_queue *blk_mq_init_queue_data(struct blk_mq_tag_set *set,
 	struct request_queue *q;
 	int ret;
 
-	q = blk_alloc_queue(set->numa_node);
-	if (!q)
-		return ERR_PTR(-ENOMEM);
+	q = blk_alloc_queue(NULL, set->numa_node);
+	if (IS_ERR(q))
+		return q;
 	q->queuedata = queuedata;
 	ret = blk_mq_init_allocated_queue(set, q);
 	if (ret) {
diff --git a/block/blk.h b/block/blk.h
index 58b5dbac2a487d..100c7a02854bfd 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -448,7 +448,7 @@ static inline void bio_release_page(struct bio *bio, struct page *page)
 }
 
 int blk_validate_limits(struct queue_limits *lim);
-struct request_queue *blk_alloc_queue(int node_id);
+struct request_queue *blk_alloc_queue(struct queue_limits *lim, int node_id);
 
 int disk_scan_partitions(struct gendisk *disk, blk_mode_t mode);
 
diff --git a/block/genhd.c b/block/genhd.c
index d74fb5b4ae6818..defcd35b421bdd 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -1396,8 +1396,8 @@ struct gendisk *__blk_alloc_disk(int node, struct lock_class_key *lkclass)
 	struct request_queue *q;
 	struct gendisk *disk;
 
-	q = blk_alloc_queue(node);
-	if (!q)
+	q = blk_alloc_queue(NULL, node);
+	if (IS_ERR(q))
 		return NULL;
 
 	disk = __alloc_disk_node(q, node, lkclass);
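Because blk_validate_limits() can now reject bad limits with -EINVAL in
addition to allocation failing with -ENOMEM, a bare NULL return would no
longer tell the caller what went wrong; hence the switch to the ERR_PTR
convention.  The caller-side pattern, mirroring the blk_mq_init_queue_data
hunk above (sketch only):

	struct request_queue *q;

	q = blk_alloc_queue(&lim, node_id);
	if (IS_ERR(q))
		return PTR_ERR(q);	/* -ENOMEM or -EINVAL */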
From patchwork Wed Jan 31 13:03:54 2024
From: Christoph Hellwig
To: Jens Axboe
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Paolo Bonzini,
    Stefan Hajnoczi, "Martin K. Petersen", Damien Le Moal, Keith Busch,
    Sagi Grimberg, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, virtualization@lists.linux.dev,
    John Garry, Hannes Reinecke
Subject: [PATCH 08/14] block: pass a queue_limits argument to blk_mq_init_queue
Date: Wed, 31 Jan 2024 14:03:54 +0100
Message-Id: <20240131130400.625836-9-hch@lst.de>
In-Reply-To: <20240131130400.625836-1-hch@lst.de>
References: <20240131130400.625836-1-hch@lst.de>

Pass a queue_limits to blk_mq_init_queue and apply it if non-NULL.  This
will allow allocating queues with valid queue limits instead of setting
the values one at a time later.

Also rename the function to blk_mq_alloc_queue, as that is a much better
name for a function that allocates a queue, and always pass the queuedata
argument instead of having a separate version for the extra argument.

Signed-off-by: Christoph Hellwig
Reviewed-by: John Garry
Reviewed-by: Damien Le Moal
Reviewed-by: Hannes Reinecke
Reviewed-by: Chaitanya Kulkarni
---
 block/blk-mq.c            | 19 +++++++------------
 block/bsg-lib.c           |  2 +-
 drivers/nvme/host/apple.c |  2 +-
 drivers/nvme/host/core.c  |  6 +++---
 drivers/scsi/scsi_scan.c  |  2 +-
 drivers/ufs/core/ufshcd.c |  2 +-
 include/linux/blk-mq.h    |  3 ++-
 7 files changed, 16 insertions(+), 20 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 2ddbefdeae93e4..87fbf4a6d1dbed 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -4086,13 +4086,13 @@ void blk_mq_release(struct request_queue *q)
 	blk_mq_sysfs_deinit(q);
 }
 
-static struct request_queue *blk_mq_init_queue_data(struct blk_mq_tag_set *set,
-		void *queuedata)
+struct request_queue *blk_mq_alloc_queue(struct blk_mq_tag_set *set,
+		struct queue_limits *lim, void *queuedata)
 {
 	struct request_queue *q;
 	int ret;
 
-	q = blk_alloc_queue(NULL, set->numa_node);
+	q = blk_alloc_queue(lim, set->numa_node);
 	if (IS_ERR(q))
 		return q;
 	q->queuedata = queuedata;
@@ -4103,20 +4103,15 @@ static struct request_queue *blk_mq_init_queue_data(struct blk_mq_tag_set *set,
 	}
 	return q;
 }
-
-struct request_queue *blk_mq_init_queue(struct blk_mq_tag_set *set)
-{
-	return blk_mq_init_queue_data(set, NULL);
-}
-EXPORT_SYMBOL(blk_mq_init_queue);
+EXPORT_SYMBOL(blk_mq_alloc_queue);
 
 /**
  * blk_mq_destroy_queue - shutdown a request queue
  * @q: request queue to shutdown
  *
- * This shuts down a request queue allocated by blk_mq_init_queue(). All future
+ * This shuts down a request queue allocated by blk_mq_alloc_queue(). All future
  * requests will be failed with -ENODEV. The caller is responsible for dropping
- * the reference from blk_mq_init_queue() by calling blk_put_queue().
+ * the reference from blk_mq_alloc_queue() by calling blk_put_queue().
  *
  * Context: can sleep
  */
@@ -4143,7 +4138,7 @@ struct gendisk *__blk_mq_alloc_disk(struct blk_mq_tag_set *set, void *queuedata,
 	struct request_queue *q;
 	struct gendisk *disk;
 
-	q = blk_mq_init_queue_data(set, queuedata);
+	q = blk_mq_alloc_queue(set, NULL, queuedata);
 	if (IS_ERR(q))
 		return ERR_CAST(q);
 
diff --git a/block/bsg-lib.c b/block/bsg-lib.c
index b3acdbdb6e7ea8..bcc7dee6abced6 100644
--- a/block/bsg-lib.c
+++ b/block/bsg-lib.c
@@ -383,7 +383,7 @@ struct request_queue *bsg_setup_queue(struct device *dev, const char *name,
 	if (blk_mq_alloc_tag_set(set))
 		goto out_tag_set;
 
-	q = blk_mq_init_queue(set);
+	q = blk_mq_alloc_queue(set, NULL, NULL);
 	if (IS_ERR(q)) {
 		ret = PTR_ERR(q);
 		goto out_queue;
diff --git a/drivers/nvme/host/apple.c b/drivers/nvme/host/apple.c
index 596bb11eeba5a9..02d01614c729dd 100644
--- a/drivers/nvme/host/apple.c
+++ b/drivers/nvme/host/apple.c
@@ -1515,7 +1515,7 @@ static int apple_nvme_probe(struct platform_device *pdev)
 		goto put_dev;
 	}
 
-	anv->ctrl.admin_q = blk_mq_init_queue(&anv->admin_tagset);
+	anv->ctrl.admin_q = blk_mq_alloc_queue(&anv->admin_tagset, NULL, NULL);
 	if (IS_ERR(anv->ctrl.admin_q)) {
 		ret = -ENOMEM;
 		goto put_dev;
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 85ab0fcf9e8864..d40e6f82bdd753 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4313,14 +4313,14 @@ int nvme_alloc_admin_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
 	if (ret)
 		return ret;
 
-	ctrl->admin_q = blk_mq_init_queue(set);
+	ctrl->admin_q = blk_mq_alloc_queue(set, NULL, NULL);
 	if (IS_ERR(ctrl->admin_q)) {
 		ret = PTR_ERR(ctrl->admin_q);
 		goto out_free_tagset;
 	}
 
 	if (ctrl->ops->flags & NVME_F_FABRICS) {
-		ctrl->fabrics_q = blk_mq_init_queue(set);
+		ctrl->fabrics_q = blk_mq_alloc_queue(set, NULL, NULL);
 		if (IS_ERR(ctrl->fabrics_q)) {
 			ret = PTR_ERR(ctrl->fabrics_q);
 			goto out_cleanup_admin_q;
@@ -4384,7 +4384,7 @@ int nvme_alloc_io_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
 		return ret;
 
 	if (ctrl->ops->flags & NVME_F_FABRICS) {
-		ctrl->connect_q = blk_mq_init_queue(set);
+		ctrl->connect_q = blk_mq_alloc_queue(set, NULL, NULL);
 		if (IS_ERR(ctrl->connect_q)) {
 			ret = PTR_ERR(ctrl->connect_q);
 			goto out_free_tag_set;
diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
index 44680f65ea1455..9969f4e2f1c3d9 100644
--- a/drivers/scsi/scsi_scan.c
+++ b/drivers/scsi/scsi_scan.c
@@ -332,7 +332,7 @@ static struct scsi_device *scsi_alloc_sdev(struct scsi_target *starget,
 
 	sdev->sg_reserved_size = INT_MAX;
 
-	q = blk_mq_init_queue(&sdev->host->tag_set);
+	q = blk_mq_alloc_queue(&sdev->host->tag_set, NULL, NULL);
 	if (IS_ERR(q)) {
 		/* release fn is set up in scsi_sysfs_device_initialise, so
 		 * have to free and put manually here */
diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index 029d017fc1b66b..c502a86db16b30 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -10592,7 +10592,7 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
 	err = blk_mq_alloc_tag_set(&hba->tmf_tag_set);
 	if (err < 0)
 		goto out_remove_scsi_host;
-	hba->tmf_queue = blk_mq_init_queue(&hba->tmf_tag_set);
+	hba->tmf_queue = blk_mq_alloc_queue(&hba->tmf_tag_set, NULL, NULL);
 	if (IS_ERR(hba->tmf_queue)) {
 		err = PTR_ERR(hba->tmf_queue);
 		goto free_tmf_tag_set;
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 7a8150a5f05133..7d42c359e2ab28 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -692,7 +692,8 @@ struct gendisk *__blk_mq_alloc_disk(struct blk_mq_tag_set *set, void *queuedata,
 })
 struct gendisk *blk_mq_alloc_disk_for_queue(struct request_queue *q,
 		struct lock_class_key *lkclass);
-struct request_queue *blk_mq_init_queue(struct blk_mq_tag_set *);
+struct request_queue *blk_mq_alloc_queue(struct blk_mq_tag_set *set,
+		struct queue_limits *lim, void *queuedata);
 int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
 		struct request_queue *q);
 void blk_mq_destroy_queue(struct request_queue *);
Petersen" , Damien Le Moal , Keith Busch , Sagi Grimberg , linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, virtualization@lists.linux.dev, John Garry , Hannes Reinecke Subject: [PATCH 09/14] block: pass a queue_limits argument to blk_mq_alloc_disk Date: Wed, 31 Jan 2024 14:03:55 +0100 Message-Id: <20240131130400.625836-10-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240131130400.625836-1-hch@lst.de> References: <20240131130400.625836-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Pass a queue_limits to blk_mq_alloc_disk and apply it if non-NULL. This will allow allocating queues with valid queue limits instead of setting the values one at a time later. Signed-off-by: Christoph Hellwig Reviewed-by: John Garry Reviewed-by: Damien Le Moal Reviewed-by: Hannes Reinecke Reviewed-by: Chaitanya Kulkarni --- arch/um/drivers/ubd_kern.c | 2 +- block/blk-mq.c | 5 +++-- drivers/block/amiflop.c | 2 +- drivers/block/aoe/aoeblk.c | 2 +- drivers/block/ataflop.c | 2 +- drivers/block/floppy.c | 2 +- drivers/block/loop.c | 2 +- drivers/block/mtip32xx/mtip32xx.c | 2 +- drivers/block/nbd.c | 2 +- drivers/block/null_blk/main.c | 2 +- drivers/block/ps3disk.c | 2 +- drivers/block/rbd.c | 2 +- drivers/block/rnbd/rnbd-clt.c | 2 +- drivers/block/sunvdc.c | 2 +- drivers/block/swim.c | 2 +- drivers/block/swim3.c | 2 +- drivers/block/ublk_drv.c | 2 +- drivers/block/virtio_blk.c | 2 +- drivers/block/xen-blkfront.c | 2 +- drivers/block/z2ram.c | 2 +- drivers/cdrom/gdrom.c | 2 +- drivers/memstick/core/ms_block.c | 2 +- drivers/memstick/core/mspro_block.c | 2 +- drivers/mmc/core/queue.c | 2 +- drivers/mtd/mtd_blkdevs.c | 2 +- drivers/mtd/ubi/block.c | 2 +- drivers/nvme/host/core.c | 2 +- drivers/s390/block/dasd_genhd.c | 2 +- drivers/s390/block/scm_blk.c | 2 +- include/linux/blk-mq.h | 7 ++++--- 30 files changed, 35 insertions(+), 33 deletions(-) diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c index 92ee2697ff3984..25f1b18ce7d4e9 100644 --- a/arch/um/drivers/ubd_kern.c +++ b/arch/um/drivers/ubd_kern.c @@ -906,7 +906,7 @@ static int ubd_add(int n, char **error_out) if (err) goto out; - disk = blk_mq_alloc_disk(&ubd_dev->tag_set, ubd_dev); + disk = blk_mq_alloc_disk(&ubd_dev->tag_set, NULL, ubd_dev); if (IS_ERR(disk)) { err = PTR_ERR(disk); goto out_cleanup_tags; diff --git a/block/blk-mq.c b/block/blk-mq.c index 87fbf4a6d1dbed..b103187e51cf77 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -4132,13 +4132,14 @@ void blk_mq_destroy_queue(struct request_queue *q) } EXPORT_SYMBOL(blk_mq_destroy_queue); -struct gendisk *__blk_mq_alloc_disk(struct blk_mq_tag_set *set, void *queuedata, +struct gendisk *__blk_mq_alloc_disk(struct blk_mq_tag_set *set, + struct queue_limits *lim, void *queuedata, struct lock_class_key *lkclass) { struct request_queue *q; struct gendisk *disk; - q = blk_mq_alloc_queue(set, NULL, queuedata); + q = blk_mq_alloc_queue(set, lim, queuedata); if (IS_ERR(q)) return ERR_CAST(q); diff --git a/drivers/block/amiflop.c b/drivers/block/amiflop.c index 2b98114a9fe092..a25414228e4741 100644 --- a/drivers/block/amiflop.c +++ b/drivers/block/amiflop.c @@ -1779,7 +1779,7 @@ static int fd_alloc_disk(int drive, int system) struct gendisk *disk; int err; - disk = blk_mq_alloc_disk(&unit[drive].tag_set, NULL); + disk = blk_mq_alloc_disk(&unit[drive].tag_set, NULL, NULL); if 
(IS_ERR(disk)) return PTR_ERR(disk); diff --git a/drivers/block/aoe/aoeblk.c b/drivers/block/aoe/aoeblk.c index b1b47d88f5db44..2ff6e2da8cc41c 100644 --- a/drivers/block/aoe/aoeblk.c +++ b/drivers/block/aoe/aoeblk.c @@ -371,7 +371,7 @@ aoeblk_gdalloc(void *vp) goto err_mempool; } - gd = blk_mq_alloc_disk(set, d); + gd = blk_mq_alloc_disk(set, NULL, d); if (IS_ERR(gd)) { pr_err("aoe: cannot allocate block queue for %ld.%d\n", d->aoemajor, d->aoeminor); diff --git a/drivers/block/ataflop.c b/drivers/block/ataflop.c index 50949207798d2a..cacc4ba942a814 100644 --- a/drivers/block/ataflop.c +++ b/drivers/block/ataflop.c @@ -1994,7 +1994,7 @@ static int ataflop_alloc_disk(unsigned int drive, unsigned int type) { struct gendisk *disk; - disk = blk_mq_alloc_disk(&unit[drive].tag_set, NULL); + disk = blk_mq_alloc_disk(&unit[drive].tag_set, NULL, NULL); if (IS_ERR(disk)) return PTR_ERR(disk); diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c index d0e41d52d6a9b5..6f765d221b3814 100644 --- a/drivers/block/floppy.c +++ b/drivers/block/floppy.c @@ -4515,7 +4515,7 @@ static int floppy_alloc_disk(unsigned int drive, unsigned int type) { struct gendisk *disk; - disk = blk_mq_alloc_disk(&tag_sets[drive], NULL); + disk = blk_mq_alloc_disk(&tag_sets[drive], NULL, NULL); if (IS_ERR(disk)) return PTR_ERR(disk); diff --git a/drivers/block/loop.c b/drivers/block/loop.c index f8145499da38c8..3f855cc79c29f5 100644 --- a/drivers/block/loop.c +++ b/drivers/block/loop.c @@ -2025,7 +2025,7 @@ static int loop_add(int i) if (err) goto out_free_idr; - disk = lo->lo_disk = blk_mq_alloc_disk(&lo->tag_set, lo); + disk = lo->lo_disk = blk_mq_alloc_disk(&lo->tag_set, NULL, lo); if (IS_ERR(disk)) { err = PTR_ERR(disk); goto out_cleanup_tags; diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c index b200950e8fb5f9..ac08dea73552f4 100644 --- a/drivers/block/mtip32xx/mtip32xx.c +++ b/drivers/block/mtip32xx/mtip32xx.c @@ -3431,7 +3431,7 @@ static int mtip_block_initialize(struct driver_data *dd) goto block_queue_alloc_tag_error; } - dd->disk = blk_mq_alloc_disk(&dd->tags, dd); + dd->disk = blk_mq_alloc_disk(&dd->tags, NULL, dd); if (IS_ERR(dd->disk)) { dev_err(&dd->pdev->dev, "Unable to allocate request queue\n"); diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c index 33a8f37bb6a1f5..30ae3cc12e7787 100644 --- a/drivers/block/nbd.c +++ b/drivers/block/nbd.c @@ -1823,7 +1823,7 @@ static struct nbd_device *nbd_dev_add(int index, unsigned int refs) if (err < 0) goto out_free_tags; - disk = blk_mq_alloc_disk(&nbd->tag_set, NULL); + disk = blk_mq_alloc_disk(&nbd->tag_set, NULL, NULL); if (IS_ERR(disk)) { err = PTR_ERR(disk); goto out_free_idr; diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c index 36755f263e8ec0..499e0d03bffc0f 100644 --- a/drivers/block/null_blk/main.c +++ b/drivers/block/null_blk/main.c @@ -2136,7 +2136,7 @@ static int null_add_dev(struct nullb_device *dev) goto out_cleanup_queues; nullb->tag_set->timeout = 5 * HZ; - nullb->disk = blk_mq_alloc_disk(nullb->tag_set, nullb); + nullb->disk = blk_mq_alloc_disk(nullb->tag_set, NULL, nullb); if (IS_ERR(nullb->disk)) { rv = PTR_ERR(nullb->disk); goto out_cleanup_tags; diff --git a/drivers/block/ps3disk.c b/drivers/block/ps3disk.c index 36d7b36c60c76b..dfd3860df4f880 100644 --- a/drivers/block/ps3disk.c +++ b/drivers/block/ps3disk.c @@ -431,7 +431,7 @@ static int ps3disk_probe(struct ps3_system_bus_device *_dev) if (error) goto fail_teardown; - gendisk = blk_mq_alloc_disk(&priv->tag_set, dev); + gendisk 
= blk_mq_alloc_disk(&priv->tag_set, NULL, dev); if (IS_ERR(gendisk)) { dev_err(&dev->sbd.core, "%s:%u: blk_mq_alloc_disk failed\n", __func__, __LINE__); diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c index 12b5d53ec85645..947cb41c133997 100644 --- a/drivers/block/rbd.c +++ b/drivers/block/rbd.c @@ -4966,7 +4966,7 @@ static int rbd_init_disk(struct rbd_device *rbd_dev) if (err) return err; - disk = blk_mq_alloc_disk(&rbd_dev->tag_set, rbd_dev); + disk = blk_mq_alloc_disk(&rbd_dev->tag_set, NULL, rbd_dev); if (IS_ERR(disk)) { err = PTR_ERR(disk); goto out_tag_set; diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c index 4044c369d22a5f..d51be4f2df61a3 100644 --- a/drivers/block/rnbd/rnbd-clt.c +++ b/drivers/block/rnbd/rnbd-clt.c @@ -1408,7 +1408,7 @@ static int rnbd_client_setup_device(struct rnbd_clt_dev *dev, dev->size = le64_to_cpu(rsp->nsectors) * le16_to_cpu(rsp->logical_block_size); - dev->gd = blk_mq_alloc_disk(&dev->sess->tag_set, dev); + dev->gd = blk_mq_alloc_disk(&dev->sess->tag_set, NULL, dev); if (IS_ERR(dev->gd)) return PTR_ERR(dev->gd); dev->queue = dev->gd->queue; diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c index 7bf4b48e2282e7..a1f74dd1eae5d5 100644 --- a/drivers/block/sunvdc.c +++ b/drivers/block/sunvdc.c @@ -824,7 +824,7 @@ static int probe_disk(struct vdc_port *port) if (err) return err; - g = blk_mq_alloc_disk(&port->tag_set, port); + g = blk_mq_alloc_disk(&port->tag_set, NULL, port); if (IS_ERR(g)) { printk(KERN_ERR PFX "%s: Could not allocate gendisk.\n", port->vio.name); diff --git a/drivers/block/swim.c b/drivers/block/swim.c index f85b6af414b431..16bdf62067d8b1 100644 --- a/drivers/block/swim.c +++ b/drivers/block/swim.c @@ -820,7 +820,7 @@ static int swim_floppy_init(struct swim_priv *swd) goto exit_put_disks; swd->unit[drive].disk = - blk_mq_alloc_disk(&swd->unit[drive].tag_set, + blk_mq_alloc_disk(&swd->unit[drive].tag_set, NULL, &swd->unit[drive]); if (IS_ERR(swd->unit[drive].disk)) { blk_mq_free_tag_set(&swd->unit[drive].tag_set); diff --git a/drivers/block/swim3.c b/drivers/block/swim3.c index c2bc85826358e9..a04756ac778ee8 100644 --- a/drivers/block/swim3.c +++ b/drivers/block/swim3.c @@ -1210,7 +1210,7 @@ static int swim3_attach(struct macio_dev *mdev, if (rc) goto out_unregister; - disk = blk_mq_alloc_disk(&fs->tag_set, fs); + disk = blk_mq_alloc_disk(&fs->tag_set, NULL, fs); if (IS_ERR(disk)) { rc = PTR_ERR(disk); goto out_free_tag_set; diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c index 1dfb2e77898ba6..c5b6552707984b 100644 --- a/drivers/block/ublk_drv.c +++ b/drivers/block/ublk_drv.c @@ -2222,7 +2222,7 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub, struct io_uring_cmd *cmd) goto out_unlock; } - disk = blk_mq_alloc_disk(&ub->tag_set, NULL); + disk = blk_mq_alloc_disk(&ub->tag_set, NULL, NULL); if (IS_ERR(disk)) { ret = PTR_ERR(disk); goto out_unlock; diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c index 5bf98fd6a651a5..a23fce4eca4408 100644 --- a/drivers/block/virtio_blk.c +++ b/drivers/block/virtio_blk.c @@ -1330,7 +1330,7 @@ static int virtblk_probe(struct virtio_device *vdev) if (err) goto out_free_vq; - vblk->disk = blk_mq_alloc_disk(&vblk->tag_set, vblk); + vblk->disk = blk_mq_alloc_disk(&vblk->tag_set, NULL, vblk); if (IS_ERR(vblk->disk)) { err = PTR_ERR(vblk->disk); goto out_free_tags; diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c index 434fab30677743..4cc2884e748463 100644 --- a/drivers/block/xen-blkfront.c +++ 
b/drivers/block/xen-blkfront.c @@ -1136,7 +1136,7 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity, if (err) goto out_release_minors; - gd = blk_mq_alloc_disk(&info->tag_set, info); + gd = blk_mq_alloc_disk(&info->tag_set, NULL, info); if (IS_ERR(gd)) { err = PTR_ERR(gd); goto out_free_tag_set; diff --git a/drivers/block/z2ram.c b/drivers/block/z2ram.c index 11493167b0a848..7c5f4e4d9b5037 100644 --- a/drivers/block/z2ram.c +++ b/drivers/block/z2ram.c @@ -318,7 +318,7 @@ static int z2ram_register_disk(int minor) struct gendisk *disk; int err; - disk = blk_mq_alloc_disk(&tag_set, NULL); + disk = blk_mq_alloc_disk(&tag_set, NULL, NULL); if (IS_ERR(disk)) return PTR_ERR(disk); diff --git a/drivers/cdrom/gdrom.c b/drivers/cdrom/gdrom.c index d668b174ace92f..1d044779f5e42a 100644 --- a/drivers/cdrom/gdrom.c +++ b/drivers/cdrom/gdrom.c @@ -778,7 +778,7 @@ static int probe_gdrom(struct platform_device *devptr) if (err) goto probe_fail_free_cd_info; - gd.disk = blk_mq_alloc_disk(&gd.tag_set, NULL); + gd.disk = blk_mq_alloc_disk(&gd.tag_set, NULL, NULL); if (IS_ERR(gd.disk)) { err = PTR_ERR(gd.disk); goto probe_fail_free_tag_set; diff --git a/drivers/memstick/core/ms_block.c b/drivers/memstick/core/ms_block.c index 04115cd92433bf..d3277c901d16bb 100644 --- a/drivers/memstick/core/ms_block.c +++ b/drivers/memstick/core/ms_block.c @@ -2093,7 +2093,7 @@ static int msb_init_disk(struct memstick_dev *card) if (rc) goto out_release_id; - msb->disk = blk_mq_alloc_disk(&msb->tag_set, card); + msb->disk = blk_mq_alloc_disk(&msb->tag_set, NULL, card); if (IS_ERR(msb->disk)) { rc = PTR_ERR(msb->disk); goto out_free_tag_set; diff --git a/drivers/memstick/core/mspro_block.c b/drivers/memstick/core/mspro_block.c index 5a69ed33999b4c..db0e2a42ca3c32 100644 --- a/drivers/memstick/core/mspro_block.c +++ b/drivers/memstick/core/mspro_block.c @@ -1138,7 +1138,7 @@ static int mspro_block_init_disk(struct memstick_dev *card) if (rc) goto out_release_id; - msb->disk = blk_mq_alloc_disk(&msb->tag_set, card); + msb->disk = blk_mq_alloc_disk(&msb->tag_set, NULL, card); if (IS_ERR(msb->disk)) { rc = PTR_ERR(msb->disk); goto out_free_tag_set; diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c index a0a2412f62a730..67ad186d132a69 100644 --- a/drivers/mmc/core/queue.c +++ b/drivers/mmc/core/queue.c @@ -447,7 +447,7 @@ struct gendisk *mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card) return ERR_PTR(ret); - disk = blk_mq_alloc_disk(&mq->tag_set, mq); + disk = blk_mq_alloc_disk(&mq->tag_set, NULL, mq); if (IS_ERR(disk)) { blk_mq_free_tag_set(&mq->tag_set); return disk; diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c index f0526dcc216276..b8878a2457afa7 100644 --- a/drivers/mtd/mtd_blkdevs.c +++ b/drivers/mtd/mtd_blkdevs.c @@ -333,7 +333,7 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new) goto out_kfree_tag_set; /* Create gendisk */ - gd = blk_mq_alloc_disk(new->tag_set, new); + gd = blk_mq_alloc_disk(new->tag_set, NULL, new); if (IS_ERR(gd)) { ret = PTR_ERR(gd); goto out_free_tag_set; diff --git a/drivers/mtd/ubi/block.c b/drivers/mtd/ubi/block.c index 654bd7372cd8c0..9be87c231a2eba 100644 --- a/drivers/mtd/ubi/block.c +++ b/drivers/mtd/ubi/block.c @@ -393,7 +393,7 @@ int ubiblock_create(struct ubi_volume_info *vi) /* Initialize the gendisk of this ubiblock device */ - gd = blk_mq_alloc_disk(&dev->tag_set, dev); + gd = blk_mq_alloc_disk(&dev->tag_set, NULL, dev); if (IS_ERR(gd)) { ret = PTR_ERR(gd); goto out_free_tags; diff --git a/drivers/nvme/host/core.c 
b/drivers/nvme/host/core.c index d40e6f82bdd753..a51e998c5bf399 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -3643,7 +3643,7 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info) if (!ns) return; - disk = blk_mq_alloc_disk(ctrl->tagset, ns); + disk = blk_mq_alloc_disk(ctrl->tagset, NULL, ns); if (IS_ERR(disk)) goto out_free_ns; disk->fops = &nvme_bdev_ops; diff --git a/drivers/s390/block/dasd_genhd.c b/drivers/s390/block/dasd_genhd.c index 55e3abe94cde2f..4d5bf29215d5fc 100644 --- a/drivers/s390/block/dasd_genhd.c +++ b/drivers/s390/block/dasd_genhd.c @@ -58,7 +58,7 @@ int dasd_gendisk_alloc(struct dasd_block *block) if (rc) return rc; - gdp = blk_mq_alloc_disk(&block->tag_set, block); + gdp = blk_mq_alloc_disk(&block->tag_set, NULL, block); if (IS_ERR(gdp)) { blk_mq_free_tag_set(&block->tag_set); return PTR_ERR(gdp); diff --git a/drivers/s390/block/scm_blk.c b/drivers/s390/block/scm_blk.c index ade95e91b3c8db..d05b2e2799a47a 100644 --- a/drivers/s390/block/scm_blk.c +++ b/drivers/s390/block/scm_blk.c @@ -462,7 +462,7 @@ int scm_blk_dev_setup(struct scm_blk_dev *bdev, struct scm_device *scmdev) if (ret) goto out; - bdev->gendisk = blk_mq_alloc_disk(&bdev->tag_set, scmdev); + bdev->gendisk = blk_mq_alloc_disk(&bdev->tag_set, NULL, scmdev); if (IS_ERR(bdev->gendisk)) { ret = PTR_ERR(bdev->gendisk); goto out_tag; diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h index 7d42c359e2ab28..390d35fa003295 100644 --- a/include/linux/blk-mq.h +++ b/include/linux/blk-mq.h @@ -682,13 +682,14 @@ enum { #define BLK_MQ_NO_HCTX_IDX (-1U) -struct gendisk *__blk_mq_alloc_disk(struct blk_mq_tag_set *set, void *queuedata, +struct gendisk *__blk_mq_alloc_disk(struct blk_mq_tag_set *set, + struct queue_limits *lim, void *queuedata, struct lock_class_key *lkclass); -#define blk_mq_alloc_disk(set, queuedata) \ +#define blk_mq_alloc_disk(set, lim, queuedata) \ ({ \ static struct lock_class_key __key; \ \ - __blk_mq_alloc_disk(set, queuedata, &__key); \ + __blk_mq_alloc_disk(set, lim, queuedata, &__key); \ }) struct gendisk *blk_mq_alloc_disk_for_queue(struct request_queue *q, struct lock_class_key *lkclass); From patchwork Wed Jan 31 13:03:56 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13539426 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 879997AE7F; Wed, 31 Jan 2024 13:04:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.137.202.133 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706706288; cv=none; b=C4pDr0kRbLRu+nO7ZmF2s73beMrkE/lzPSIJgKB0cmfcnpYys+Iw+1HDluSeOYMvPo9GoeyC1+SmTyWmrLIc1/bWM7Fu+28/vtZQ1pEeYAR9PqITJXr2cyPJp6PDCvjcrexVMe10eSyohmiOuvi7jtdvcRdep6ixFBY76Q8K1Cw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706706288; c=relaxed/simple; bh=TB6rGSBAQKatDQUek3oybEdzIOkK5ch6yQah9+F9MMk=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=XPgrP5vA6rUsejLE2EhzIFPfLV1/DZ68nZqhXjZqEzp55EzfVp3qYIC0jsV1InxtwlNFEtOmlIFnRxwEofTiA7BTQzwXety4EltRY/sjxb9OX15Q4GT5PcfsWvZ9FS0BVZrsPPfDGeFAQWXt22LVYsd6r2DWGZq+xv02P4z1h1k= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de; spf=none 
smtp.mailfrom=bombadil.srs.infradead.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b=G1dhQkQH; arc=none smtp.client-ip=198.137.202.133 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=bombadil.srs.infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="G1dhQkQH" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=Idgx13GhR9HP21n1+f1lUbN/o0tSVo9sjEoHsdggiM8=; b=G1dhQkQHQfjLxkp/9Dn92ANPVT EHSWQOMn7vTgu5+OYFyF4KYfu1o8KE5DtIpIdDUvHNOJpxCyAlMAPGOkqKY6VceRHbmxkY7D/lY2V vOLFV4hHkdn3ztjDUs++4z2G/ZHL1BygzssiIsBuZyfKqzuwjKppsogvgi3DZNwsYpwFSJXZwMGMB v8kvQcKIcVleoi3ln/WiAIgwPZ/1VmVkj2nbyc7hYXIIojFxVQRfDYSiIUlcoa9zUK2F/MXC/ZFJc XaFe9JYg87FXsrkImNoypIWGL2lgpRTUOlT6FUO54csjbTuaJXq4N/jU+TleSN25tRzxhVSMwXwrN 9f0ML73g==; Received: from 2a02-8389-2341-5b80-39d3-4735-9a3c-88d8.cable.dynamic.v6.surfer.at ([2a02:8389:2341:5b80:39d3:4735:9a3c:88d8] helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux)) id 1rVAGf-00000003VQy-0i5J; Wed, 31 Jan 2024 13:04:42 +0000 From: Christoph Hellwig To: Jens Axboe Cc: "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , Paolo Bonzini , Stefan Hajnoczi , "Martin K. Petersen" , Damien Le Moal , Keith Busch , Sagi Grimberg , linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, virtualization@lists.linux.dev, Chaitanya Kulkarni , Hannes Reinecke Subject: [PATCH 10/14] virtio_blk: split virtblk_probe Date: Wed, 31 Jan 2024 14:03:56 +0100 Message-Id: <20240131130400.625836-11-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240131130400.625836-1-hch@lst.de> References: <20240131130400.625836-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Split out a virtblk_read_limits helper that just reads the various queue limits to separate it from the higher level probing logic. 
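The shape of the split, reduced to its essentials: limit probing becomes a leaf function with no allocation or registration side effects, and the probe path just calls it at the right point. A rough sketch only, with hypothetical foo_* names standing in for the virtio-specific code (foo_config_read and FOO_CFG_SEG_MAX are placeholder interfaces, not real ones):

	/* Read device configuration and apply it to the queue; no allocation here. */
	static int foo_read_limits(struct foo_device *dev)
	{
		struct request_queue *q = dev->disk->queue;
		u32 max_segments;

		max_segments = foo_config_read(dev, FOO_CFG_SEG_MAX);	/* placeholder accessor */
		if (!max_segments)
			return -ENODEV;	/* reject nonsensical device config */

		blk_queue_max_segments(q, max_segments);
		return 0;
	}

	static int foo_probe(struct foo_device *dev)
	{
		int err;

		/* ... tag set, disk and queue allocation as before ... */

		err = foo_read_limits(dev);
		if (err)
			goto out_cleanup_disk;

		/* ... register the disk ... */
		return 0;

	out_cleanup_disk:
		put_disk(dev->disk);
		return err;
	}

At this stage the helper still writes to the live queue; the following patch is what converts it to fill in a caller-provided struct queue_limits instead.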
Signed-off-by: Christoph Hellwig Reviewed-by: Stefan Hajnoczi Reviewed-by: Chaitanya Kulkarni Reviewed-by: Damien Le Moal Reviewed-by: Hannes Reinecke --- drivers/block/virtio_blk.c | 193 +++++++++++++++++++------------------ 1 file changed, 101 insertions(+), 92 deletions(-) diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c index a23fce4eca4408..dd46ccd9f84c7d 100644 --- a/drivers/block/virtio_blk.c +++ b/drivers/block/virtio_blk.c @@ -1248,31 +1248,17 @@ static const struct blk_mq_ops virtio_mq_ops = { static unsigned int virtblk_queue_depth; module_param_named(queue_depth, virtblk_queue_depth, uint, 0444); -static int virtblk_probe(struct virtio_device *vdev) +static int virtblk_read_limits(struct virtio_blk *vblk) { - struct virtio_blk *vblk; - struct request_queue *q; - int err, index; - + struct request_queue *q = vblk->disk->queue; + struct virtio_device *vdev = vblk->vdev; u32 v, blk_size, max_size, sg_elems, opt_io_size; u32 max_discard_segs = 0; u32 discard_granularity = 0; u16 min_io_size; u8 physical_block_exp, alignment_offset; - unsigned int queue_depth; size_t max_dma_size; - - if (!vdev->config->get) { - dev_err(&vdev->dev, "%s failure: config access disabled\n", - __func__); - return -EINVAL; - } - - err = ida_alloc_range(&vd_index_ida, 0, - minor_to_index(1 << MINORBITS) - 1, GFP_KERNEL); - if (err < 0) - goto out; - index = err; + int err; /* We need to know how many segments before we allocate. */ err = virtio_cread_feature(vdev, VIRTIO_BLK_F_SEG_MAX, @@ -1286,73 +1272,6 @@ static int virtblk_probe(struct virtio_device *vdev) /* Prevent integer overflows and honor max vq size */ sg_elems = min_t(u32, sg_elems, VIRTIO_BLK_MAX_SG_ELEMS - 2); - vdev->priv = vblk = kmalloc(sizeof(*vblk), GFP_KERNEL); - if (!vblk) { - err = -ENOMEM; - goto out_free_index; - } - - mutex_init(&vblk->vdev_mutex); - - vblk->vdev = vdev; - - INIT_WORK(&vblk->config_work, virtblk_config_changed_work); - - err = init_vq(vblk); - if (err) - goto out_free_vblk; - - /* Default queue sizing is to fill the ring. */ - if (!virtblk_queue_depth) { - queue_depth = vblk->vqs[0].vq->num_free; - /* ... 
but without indirect descs, we use 2 descs per req */ - if (!virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC)) - queue_depth /= 2; - } else { - queue_depth = virtblk_queue_depth; - } - - memset(&vblk->tag_set, 0, sizeof(vblk->tag_set)); - vblk->tag_set.ops = &virtio_mq_ops; - vblk->tag_set.queue_depth = queue_depth; - vblk->tag_set.numa_node = NUMA_NO_NODE; - vblk->tag_set.flags = BLK_MQ_F_SHOULD_MERGE; - vblk->tag_set.cmd_size = - sizeof(struct virtblk_req) + - sizeof(struct scatterlist) * VIRTIO_BLK_INLINE_SG_CNT; - vblk->tag_set.driver_data = vblk; - vblk->tag_set.nr_hw_queues = vblk->num_vqs; - vblk->tag_set.nr_maps = 1; - if (vblk->io_queues[HCTX_TYPE_POLL]) - vblk->tag_set.nr_maps = 3; - - err = blk_mq_alloc_tag_set(&vblk->tag_set); - if (err) - goto out_free_vq; - - vblk->disk = blk_mq_alloc_disk(&vblk->tag_set, NULL, vblk); - if (IS_ERR(vblk->disk)) { - err = PTR_ERR(vblk->disk); - goto out_free_tags; - } - q = vblk->disk->queue; - - virtblk_name_format("vd", index, vblk->disk->disk_name, DISK_NAME_LEN); - - vblk->disk->major = major; - vblk->disk->first_minor = index_to_minor(index); - vblk->disk->minors = 1 << PART_BITS; - vblk->disk->private_data = vblk; - vblk->disk->fops = &virtblk_fops; - vblk->index = index; - - /* configure queue flush support */ - virtblk_update_cache_mode(vdev); - - /* If disk is read-only in the host, the guest should obey */ - if (virtio_has_feature(vdev, VIRTIO_BLK_F_RO)) - set_disk_ro(vblk->disk, 1); - /* We can handle whatever the host told us to handle. */ blk_queue_max_segments(q, sg_elems); @@ -1381,7 +1300,7 @@ static int virtblk_probe(struct virtio_device *vdev) dev_err(&vdev->dev, "virtio_blk: invalid block size: 0x%x\n", blk_size); - goto out_cleanup_disk; + return err; } blk_queue_logical_block_size(q, blk_size); @@ -1455,8 +1374,7 @@ static int virtblk_probe(struct virtio_device *vdev) if (!v) { dev_err(&vdev->dev, "virtio_blk: secure_erase_sector_alignment can't be 0\n"); - err = -EINVAL; - goto out_cleanup_disk; + return -EINVAL; } discard_granularity = min_not_zero(discard_granularity, v); @@ -1470,8 +1388,7 @@ static int virtblk_probe(struct virtio_device *vdev) if (!v) { dev_err(&vdev->dev, "virtio_blk: max_secure_erase_sectors can't be 0\n"); - err = -EINVAL; - goto out_cleanup_disk; + return -EINVAL; } blk_queue_max_secure_erase_sectors(q, v); @@ -1485,8 +1402,7 @@ static int virtblk_probe(struct virtio_device *vdev) if (!v) { dev_err(&vdev->dev, "virtio_blk: max_secure_erase_seg can't be 0\n"); - err = -EINVAL; - goto out_cleanup_disk; + return -EINVAL; } max_discard_segs = min_not_zero(max_discard_segs, v); @@ -1511,6 +1427,99 @@ static int virtblk_probe(struct virtio_device *vdev) q->limits.discard_granularity = blk_size; } + return 0; +} + +static int virtblk_probe(struct virtio_device *vdev) +{ + struct virtio_blk *vblk; + struct request_queue *q; + int err, index; + unsigned int queue_depth; + + if (!vdev->config->get) { + dev_err(&vdev->dev, "%s failure: config access disabled\n", + __func__); + return -EINVAL; + } + + err = ida_alloc_range(&vd_index_ida, 0, + minor_to_index(1 << MINORBITS) - 1, GFP_KERNEL); + if (err < 0) + goto out; + index = err; + + vdev->priv = vblk = kmalloc(sizeof(*vblk), GFP_KERNEL); + if (!vblk) { + err = -ENOMEM; + goto out_free_index; + } + + mutex_init(&vblk->vdev_mutex); + + vblk->vdev = vdev; + + INIT_WORK(&vblk->config_work, virtblk_config_changed_work); + + err = init_vq(vblk); + if (err) + goto out_free_vblk; + + /* Default queue sizing is to fill the ring. 
*/ + if (!virtblk_queue_depth) { + queue_depth = vblk->vqs[0].vq->num_free; + /* ... but without indirect descs, we use 2 descs per req */ + if (!virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC)) + queue_depth /= 2; + } else { + queue_depth = virtblk_queue_depth; + } + + memset(&vblk->tag_set, 0, sizeof(vblk->tag_set)); + vblk->tag_set.ops = &virtio_mq_ops; + vblk->tag_set.queue_depth = queue_depth; + vblk->tag_set.numa_node = NUMA_NO_NODE; + vblk->tag_set.flags = BLK_MQ_F_SHOULD_MERGE; + vblk->tag_set.cmd_size = + sizeof(struct virtblk_req) + + sizeof(struct scatterlist) * VIRTIO_BLK_INLINE_SG_CNT; + vblk->tag_set.driver_data = vblk; + vblk->tag_set.nr_hw_queues = vblk->num_vqs; + vblk->tag_set.nr_maps = 1; + if (vblk->io_queues[HCTX_TYPE_POLL]) + vblk->tag_set.nr_maps = 3; + + err = blk_mq_alloc_tag_set(&vblk->tag_set); + if (err) + goto out_free_vq; + + vblk->disk = blk_mq_alloc_disk(&vblk->tag_set, NULL, vblk); + if (IS_ERR(vblk->disk)) { + err = PTR_ERR(vblk->disk); + goto out_free_tags; + } + q = vblk->disk->queue; + + virtblk_name_format("vd", index, vblk->disk->disk_name, DISK_NAME_LEN); + + vblk->disk->major = major; + vblk->disk->first_minor = index_to_minor(index); + vblk->disk->minors = 1 << PART_BITS; + vblk->disk->private_data = vblk; + vblk->disk->fops = &virtblk_fops; + vblk->index = index; + + /* configure queue flush support */ + virtblk_update_cache_mode(vdev); + + /* If disk is read-only in the host, the guest should obey */ + if (virtio_has_feature(vdev, VIRTIO_BLK_F_RO)) + set_disk_ro(vblk->disk, 1); + + err = virtblk_read_limits(vblk); + if (err) + goto out_cleanup_disk; + virtblk_update_capacity(vblk, false); virtio_device_ready(vdev); From patchwork Wed Jan 31 13:03:57 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13539427 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1AA4779DD0; Wed, 31 Jan 2024 13:04:52 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.137.202.133 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706706294; cv=none; b=cNFOwAcWZs03X/0LCfuYQzZJqrNlkLJFd6zPEAneEjJzEJV8zmgXeS1OA1PaABsT6NIvfL8VGTsdMD38P5E7O2DMJbaDx0MSP0Onu13H4/g35UMd+7uPe/lFF3q2NzsuomILUbr5HghjQKRVoXELfdBWuL/yHm07GUeqe72ZsiA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706706294; c=relaxed/simple; bh=27j2EdjxJX6ZmPdc8ggOL4Jg1Fqdb12WqGHs2211T54=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=lHljR21s91EQLsIjqYNZ0P7duhrBVurZlj/TCnTao8MsGI7Pv7n5RMc1goX9oxLkoc9rxq6mEijCoi93tuOGiJBF2nNUIHcg1e2Z8W1bDkwCODOhZK1lkHXf+h4XYPewP8yJc7JFLW5Hu9IFGmKl8jLHnTRCbDQO8wJgT1v5onA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de; spf=none smtp.mailfrom=bombadil.srs.infradead.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b=DkYJNcpP; arc=none smtp.client-ip=198.137.202.133 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=bombadil.srs.infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org 
header.i=@infradead.org header.b="DkYJNcpP" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=5LXlex7D4Ilf/x8xCjyoSbZw/Izp75TGUeBfRvxcl9A=; b=DkYJNcpPPEDlaibCZbLWUqeIjN 9Sfqg+S44kqzk71Muk9wDvNJWfkk0DcBwP/5+Ml4jQEIXEcXmWmYBeTTwM+xK1YYArCkcSgk+DAlM Ul+eVq0B6katGVeYxutxAbePLOALaSE0ZnBlPIIIWi+t3T91GqBMoo9YQS9uvhD2zSnsqCNCRGrVA TttsVweemgNLHAEAu+809i4d7oQlZHEXIpF2T5wSF70rYfeJHZteCRLqX0PGOJqWkR+3O0S7Ot9ap Ggtt0+ZwUTfqPPIAGUxMSphb5WW3d/2qoFf1Mhi2f0lCBOx6PkAZjqPaLZcnann8t3jxhnl3slktK UPkbqYAg==; Received: from 2a02-8389-2341-5b80-39d3-4735-9a3c-88d8.cable.dynamic.v6.surfer.at ([2a02:8389:2341:5b80:39d3:4735:9a3c:88d8] helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux)) id 1rVAGj-00000003VUc-1eEZ; Wed, 31 Jan 2024 13:04:46 +0000 From: Christoph Hellwig To: Jens Axboe Cc: "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , Paolo Bonzini , Stefan Hajnoczi , "Martin K. Petersen" , Damien Le Moal , Keith Busch , Sagi Grimberg , linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, virtualization@lists.linux.dev, Hannes Reinecke Subject: [PATCH 11/14] virtio_blk: pass queue_limits to blk_mq_alloc_disk Date: Wed, 31 Jan 2024 14:03:57 +0100 Message-Id: <20240131130400.625836-12-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240131130400.625836-1-hch@lst.de> References: <20240131130400.625836-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Call virtblk_read_limits and most of virtblk_probe_zoned_device before allocating the gendisk (and thus the request_queue), and make them read into a queue_limits structure instead. Pass this initialized queue_limits to blk_mq_alloc_disk to set the queue up with the right parameters from the start, leaving only a few final touches for zoned devices to be done just before adding the disk.
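The resulting calling convention — limits gathered first, then handed to the allocator — looks roughly like the sketch below. The foo_* names are hypothetical; struct queue_limits and the lim argument of blk_mq_alloc_disk are the interfaces added earlier in this series:

	static int foo_probe(struct foo_device *dev)
	{
		struct queue_limits lim = { };	/* zero-initialized; unset fields get block-layer defaults */
		struct gendisk *disk;
		int err;

		/* Fill the stack copy from device config before any allocation. */
		err = foo_read_limits(dev, &lim);	/* hypothetical: sets lim.max_segments etc. */
		if (err)
			return err;

		/* The request_queue is created with valid limits; no fixup window. */
		disk = blk_mq_alloc_disk(&dev->tag_set, &lim, dev);
		if (IS_ERR(disk))
			return PTR_ERR(disk);

		/* ... set up disk->fops and friends, then add the disk ... */
		return 0;
	}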
Signed-off-by: Christoph Hellwig Reviewed-by: Stefan Hajnoczi Reviewed-by: Damien Le Moal Reviewed-by: Hannes Reinecke Reviewed-by: Chaitanya Kulkarni --- drivers/block/virtio_blk.c | 130 ++++++++++++++++++------------------- 1 file changed, 64 insertions(+), 66 deletions(-) diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c index dd46ccd9f84c7d..d8b55874cd5950 100644 --- a/drivers/block/virtio_blk.c +++ b/drivers/block/virtio_blk.c @@ -720,16 +720,15 @@ static int virtblk_report_zones(struct gendisk *disk, sector_t sector, return ret; } -static int virtblk_probe_zoned_device(struct virtio_device *vdev, - struct virtio_blk *vblk, - struct request_queue *q) +static int virtblk_read_zoned_limits(struct virtio_blk *vblk, + struct queue_limits *lim) { + struct virtio_device *vdev = vblk->vdev; u32 v, wg; dev_dbg(&vdev->dev, "probing host-managed zoned device\n"); - disk_set_zoned(vblk->disk); - blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q); + lim->zoned = true; virtio_cread(vdev, struct virtio_blk_config, zoned.max_open_zones, &v); @@ -747,8 +746,8 @@ static int virtblk_probe_zoned_device(struct virtio_device *vdev, dev_warn(&vdev->dev, "zero write granularity reported\n"); return -ENODEV; } - blk_queue_physical_block_size(q, wg); - blk_queue_io_min(q, wg); + lim->physical_block_size = wg; + lim->io_min = wg; dev_dbg(&vdev->dev, "write granularity = %u\n", wg); @@ -764,13 +763,13 @@ static int virtblk_probe_zoned_device(struct virtio_device *vdev, vblk->zone_sectors); return -ENODEV; } - blk_queue_chunk_sectors(q, vblk->zone_sectors); + lim->chunk_sectors = vblk->zone_sectors; dev_dbg(&vdev->dev, "zone sectors = %u\n", vblk->zone_sectors); if (virtio_has_feature(vdev, VIRTIO_BLK_F_DISCARD)) { dev_warn(&vblk->vdev->dev, "ignoring negotiated F_DISCARD for zoned device\n"); - blk_queue_max_discard_sectors(q, 0); + lim->max_hw_discard_sectors = 0; } virtio_cread(vdev, struct virtio_blk_config, @@ -785,25 +784,21 @@ static int virtblk_probe_zoned_device(struct virtio_device *vdev, wg, v); return -ENODEV; } - blk_queue_max_zone_append_sectors(q, v); + lim->max_zone_append_sectors = v; dev_dbg(&vdev->dev, "max append sectors = %u\n", v); - return blk_revalidate_disk_zones(vblk->disk, NULL); + return 0; } - #else - /* - * Zoned block device support is not configured in this kernel. - * Host-managed zoned devices can't be supported, but others are - * good to go as regular block devices. + * Zoned block device support is not configured in this kernel, host-managed + * zoned devices can't be supported. 
*/ #define virtblk_report_zones NULL - -static inline int virtblk_probe_zoned_device(struct virtio_device *vdev, - struct virtio_blk *vblk, struct request_queue *q) +static inline int virtblk_read_zoned_limits(struct virtio_blk *vblk, + struct queue_limits *lim) { - dev_err(&vdev->dev, + dev_err(&vblk->vdev->dev, "virtio_blk: zoned devices are not supported"); return -EOPNOTSUPP; } @@ -1248,9 +1243,9 @@ static const struct blk_mq_ops virtio_mq_ops = { static unsigned int virtblk_queue_depth; module_param_named(queue_depth, virtblk_queue_depth, uint, 0444); -static int virtblk_read_limits(struct virtio_blk *vblk) +static int virtblk_read_limits(struct virtio_blk *vblk, + struct queue_limits *lim) { - struct request_queue *q = vblk->disk->queue; struct virtio_device *vdev = vblk->vdev; u32 v, blk_size, max_size, sg_elems, opt_io_size; u32 max_discard_segs = 0; @@ -1273,10 +1268,10 @@ static int virtblk_read_limits(struct virtio_blk *vblk) sg_elems = min_t(u32, sg_elems, VIRTIO_BLK_MAX_SG_ELEMS - 2); /* We can handle whatever the host told us to handle. */ - blk_queue_max_segments(q, sg_elems); + lim->max_segments = sg_elems; /* No real sector limit. */ - blk_queue_max_hw_sectors(q, UINT_MAX); + lim->max_hw_sectors = UINT_MAX; max_dma_size = virtio_max_dma_size(vdev); max_size = max_dma_size > U32_MAX ? U32_MAX : max_dma_size; @@ -1288,7 +1283,7 @@ static int virtblk_read_limits(struct virtio_blk *vblk) if (!err) max_size = min(max_size, v); - blk_queue_max_segment_size(q, max_size); + lim->max_segment_size = max_size; /* Host can optionally specify the block size of the device */ err = virtio_cread_feature(vdev, VIRTIO_BLK_F_BLK_SIZE, @@ -1303,35 +1298,34 @@ static int virtblk_read_limits(struct virtio_blk *vblk) return err; } - blk_queue_logical_block_size(q, blk_size); + lim->logical_block_size = blk_size; } else - blk_size = queue_logical_block_size(q); + blk_size = lim->logical_block_size; /* Use topology information if available */ err = virtio_cread_feature(vdev, VIRTIO_BLK_F_TOPOLOGY, struct virtio_blk_config, physical_block_exp, &physical_block_exp); if (!err && physical_block_exp) - blk_queue_physical_block_size(q, - blk_size * (1 << physical_block_exp)); + lim->physical_block_size = blk_size * (1 << physical_block_exp); err = virtio_cread_feature(vdev, VIRTIO_BLK_F_TOPOLOGY, struct virtio_blk_config, alignment_offset, &alignment_offset); if (!err && alignment_offset) - blk_queue_alignment_offset(q, blk_size * alignment_offset); + lim->alignment_offset = blk_size * alignment_offset; err = virtio_cread_feature(vdev, VIRTIO_BLK_F_TOPOLOGY, struct virtio_blk_config, min_io_size, &min_io_size); if (!err && min_io_size) - blk_queue_io_min(q, blk_size * min_io_size); + lim->io_min = blk_size * min_io_size; err = virtio_cread_feature(vdev, VIRTIO_BLK_F_TOPOLOGY, struct virtio_blk_config, opt_io_size, &opt_io_size); if (!err && opt_io_size) - blk_queue_io_opt(q, blk_size * opt_io_size); + lim->io_opt = blk_size * opt_io_size; if (virtio_has_feature(vdev, VIRTIO_BLK_F_DISCARD)) { virtio_cread(vdev, struct virtio_blk_config, @@ -1339,7 +1333,7 @@ static int virtblk_read_limits(struct virtio_blk *vblk) virtio_cread(vdev, struct virtio_blk_config, max_discard_sectors, &v); - blk_queue_max_discard_sectors(q, v ? v : UINT_MAX); + lim->max_hw_discard_sectors = v ? 
v : UINT_MAX; virtio_cread(vdev, struct virtio_blk_config, max_discard_seg, &max_discard_segs); @@ -1348,7 +1342,7 @@ static int virtblk_read_limits(struct virtio_blk *vblk) if (virtio_has_feature(vdev, VIRTIO_BLK_F_WRITE_ZEROES)) { virtio_cread(vdev, struct virtio_blk_config, max_write_zeroes_sectors, &v); - blk_queue_max_write_zeroes_sectors(q, v ? v : UINT_MAX); + lim->max_write_zeroes_sectors = v ? v : UINT_MAX; } /* The discard and secure erase limits are combined since the Linux @@ -1391,7 +1385,7 @@ static int virtblk_read_limits(struct virtio_blk *vblk) return -EINVAL; } - blk_queue_max_secure_erase_sectors(q, v); + lim->max_secure_erase_sectors = v; virtio_cread(vdev, struct virtio_blk_config, max_secure_erase_seg, &v); @@ -1418,13 +1412,34 @@ static int virtblk_read_limits(struct virtio_blk *vblk) if (!max_discard_segs) max_discard_segs = sg_elems; - blk_queue_max_discard_segments(q, - min(max_discard_segs, MAX_DISCARD_SEGMENTS)); + lim->max_discard_segments = + min(max_discard_segs, MAX_DISCARD_SEGMENTS); if (discard_granularity) - q->limits.discard_granularity = discard_granularity << SECTOR_SHIFT; + lim->discard_granularity = + discard_granularity << SECTOR_SHIFT; else - q->limits.discard_granularity = blk_size; + lim->discard_granularity = blk_size; + } + + if (virtio_has_feature(vdev, VIRTIO_BLK_F_ZONED)) { + u8 model; + + virtio_cread(vdev, struct virtio_blk_config, zoned.model, &model); + switch (model) { + case VIRTIO_BLK_Z_NONE: + case VIRTIO_BLK_Z_HA: + /* treat host-aware devices as non-zoned */ + return 0; + case VIRTIO_BLK_Z_HM: + err = virtblk_read_zoned_limits(vblk, lim); + if (err) + return err; + break; + default: + dev_err(&vdev->dev, "unsupported zone model %d\n", model); + return -EINVAL; + } } return 0; @@ -1433,7 +1448,7 @@ static int virtblk_read_limits(struct virtio_blk *vblk) static int virtblk_probe(struct virtio_device *vdev) { struct virtio_blk *vblk; - struct request_queue *q; + struct queue_limits lim = { }; int err, index; unsigned int queue_depth; @@ -1493,12 +1508,15 @@ static int virtblk_probe(struct virtio_device *vdev) if (err) goto out_free_vq; - vblk->disk = blk_mq_alloc_disk(&vblk->tag_set, NULL, vblk); + err = virtblk_read_limits(vblk, &lim); + if (err) + goto out_free_tags; + + vblk->disk = blk_mq_alloc_disk(&vblk->tag_set, &lim, vblk); if (IS_ERR(vblk->disk)) { err = PTR_ERR(vblk->disk); goto out_free_tags; } - q = vblk->disk->queue; virtblk_name_format("vd", index, vblk->disk->disk_name, DISK_NAME_LEN); @@ -1516,10 +1534,6 @@ static int virtblk_probe(struct virtio_device *vdev) if (virtio_has_feature(vdev, VIRTIO_BLK_F_RO)) set_disk_ro(vblk->disk, 1); - err = virtblk_read_limits(vblk); - if (err) - goto out_cleanup_disk; - virtblk_update_capacity(vblk, false); virtio_device_ready(vdev); @@ -1527,27 +1541,11 @@ static int virtblk_probe(struct virtio_device *vdev) * All steps that follow use the VQs therefore they need to be * placed after the virtio_device_ready() call above. 
*/ - if (virtio_has_feature(vdev, VIRTIO_BLK_F_ZONED)) { - u8 model; - - virtio_cread(vdev, struct virtio_blk_config, zoned.model, - &model); - switch (model) { - case VIRTIO_BLK_Z_NONE: - case VIRTIO_BLK_Z_HA: - /* Present the host-aware device as non-zoned */ - break; - case VIRTIO_BLK_Z_HM: - err = virtblk_probe_zoned_device(vdev, vblk, q); - if (err) - goto out_cleanup_disk; - break; - default: - dev_err(&vdev->dev, "unsupported zone model %d\n", - model); - err = -EINVAL; + if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) && lim.zoned) { + blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, vblk->disk->queue); + err = blk_revalidate_disk_zones(vblk->disk, NULL); + if (err) goto out_cleanup_disk; - } } err = device_add_disk(&vdev->dev, vblk->disk, virtblk_attr_groups); From patchwork Wed Jan 31 13:03:58 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13539428 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6EDAD79DD0; Wed, 31 Jan 2024 13:04:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.137.202.133 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706706297; cv=none; b=IOjyEnRH4BwythALn9Y7yE4FgrwAMBEajgaqeyge1NOWsFNquYDDF8kia1qYEfHR7RIQKU6rswdH+0FzfxzoyxHN3fZRvvsSD+lb0zoFUh3zsak6lMobc4IxgfhCFS23RBfhZLcfNYpPw4itpxuZa+7wFgzNlNkzxQTxV1Zk8cY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706706297; c=relaxed/simple; bh=kKkmUAIzWynZl6gg5EVHCqhu9UkAK9DuWAVUhCwdWQI=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=uupFQWMQ42Cr5cN8krwuAEbhpzXD1Pi+JNcXylYg8qJ/pd3IddyT4G4M6xYUj4RzXU4K/+10lG1ZsuXzlWymAa9kClcmPsg+a0bNqgKG2+LOOmOz2Rw7lOZPhu+Cr9klM5554xvFFkBevdW+D5jBLdQvw9iirPXtKbp4GwrI0nU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de; spf=none smtp.mailfrom=bombadil.srs.infradead.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b=bDHt2eGZ; arc=none smtp.client-ip=198.137.202.133 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=bombadil.srs.infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="bDHt2eGZ" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=9brywd7uZhFzDL2uuR9GnUHObVr/2hXCOV/mhgORu+s=; b=bDHt2eGZT2jqkP3udjavLe8Ij6 vv4fFaiVifMW16Uy68/ggdNgdQ0bDFs4BCDpOj7z3q5Xu4+T5P0tdJvx/NyCoJXkR3CCEkAO5IFj3 ewaBYkOzYLurHQed6QBM4WDx3S3Z/jOnZNNy2+HPfiqybsUSWAwfhMQAjiWLlVOgnBfE9Zo5YIDkM LHgvY6482HhbbtBrcfbw19uIBqBiIRTlbmoFYjd0CTslwaX+pAWUbcDTG2WmKCrq6a+9gZjMfbzXI SvptZ/aZvmxVlXN2tj0Yep3uC2F7SX+tL6/4/7d2YDg/st+17ih6DnY2+FjowGNNV51tI0ASOJifh Oxo2sAhQ==; Received: from 2a02-8389-2341-5b80-39d3-4735-9a3c-88d8.cable.dynamic.v6.surfer.at ([2a02:8389:2341:5b80:39d3:4735:9a3c:88d8] helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux)) id 
1rVAGn-00000003VYY-0KkE; Wed, 31 Jan 2024 13:04:49 +0000 From: Christoph Hellwig To: Jens Axboe Cc: "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , Paolo Bonzini , Stefan Hajnoczi , "Martin K. Petersen" , Damien Le Moal , Keith Busch , Sagi Grimberg , linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, virtualization@lists.linux.dev, Chaitanya Kulkarni , Hannes Reinecke Subject: [PATCH 12/14] loop: cleanup loop_config_discard Date: Wed, 31 Jan 2024 14:03:58 +0100 Message-Id: <20240131130400.625836-13-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240131130400.625836-1-hch@lst.de> References: <20240131130400.625836-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Initialize the local variables for the discard max sectors and granularity to zero as a sensible default, and then merge the calls assigning them to the queue limits. Signed-off-by: Christoph Hellwig Reviewed-by: Chaitanya Kulkarni Reviewed-by: Damien Le Moal Reviewed-by: Hannes Reinecke --- drivers/block/loop.c | 27 ++++++++------------------- 1 file changed, 8 insertions(+), 19 deletions(-) diff --git a/drivers/block/loop.c b/drivers/block/loop.c index 3f855cc79c29f5..7abeb586942677 100644 --- a/drivers/block/loop.c +++ b/drivers/block/loop.c @@ -755,7 +755,8 @@ static void loop_config_discard(struct loop_device *lo) struct file *file = lo->lo_backing_file; struct inode *inode = file->f_mapping->host; struct request_queue *q = lo->lo_queue; - u32 granularity, max_discard_sectors; + u32 granularity = 0, max_discard_sectors = 0; + struct kstatfs sbuf; /* * If the backing device is a block device, mirror its zeroing @@ -775,29 +776,17 @@ static void loop_config_discard(struct loop_device *lo) * We use punch hole to reclaim the free space used by the * image a.k.a. discard. 
*/ - } else if (!file->f_op->fallocate) { - max_discard_sectors = 0; - granularity = 0; - - } else { - struct kstatfs sbuf; - + } else if (file->f_op->fallocate && !vfs_statfs(&file->f_path, &sbuf)) { max_discard_sectors = UINT_MAX >> 9; - if (!vfs_statfs(&file->f_path, &sbuf)) - granularity = sbuf.f_bsize; - else - max_discard_sectors = 0; + granularity = sbuf.f_bsize; } - if (max_discard_sectors) { + blk_queue_max_discard_sectors(q, max_discard_sectors); + blk_queue_max_write_zeroes_sectors(q, max_discard_sectors); + if (max_discard_sectors) q->limits.discard_granularity = granularity; - blk_queue_max_discard_sectors(q, max_discard_sectors); - blk_queue_max_write_zeroes_sectors(q, max_discard_sectors); - } else { + else q->limits.discard_granularity = 0; - blk_queue_max_discard_sectors(q, 0); - blk_queue_max_write_zeroes_sectors(q, 0); - } } struct loop_worker { From patchwork Wed Jan 31 13:03:59 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13539429 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A04E07BAF1; Wed, 31 Jan 2024 13:04:59 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.137.202.133 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706706301; cv=none; b=sCCP6WtY5It8I/jwbcr7b4rvr5XXO9sXac5DRXz0D1utnGxUTQ0q/DAqQsmnJtgN448WVqjTCApNx3xlDcsVBenKaeuz5BgbKfwFZ9+ptz2FJ/ZkGmvggIO/vgLBAekblqyDnIBtKk673gtwql5Nl0o3g1COr0qUOoyNV22Gt6s= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706706301; c=relaxed/simple; bh=pacC1cn/AH9PVWOkaZAi+JoJacdLTHjt978zbzbqJe4=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=ZALQR51O00ylAl6Ggyp+Vj3HVsWd3oUj9Y5+sYE5ea6wzVFLrcKZ0nPq3injjLztjX1vXRaaLqULzajAXQz38utpWPfF13P7KAn67gvIWubADHBxchGcaYBQUr5lwkeK1W9EzF74eZDAGA/oBjMybLWVCS1oNV3j0LKMbSEs+CM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de; spf=none smtp.mailfrom=bombadil.srs.infradead.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b=tnf44HR2; arc=none smtp.client-ip=198.137.202.133 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=bombadil.srs.infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="tnf44HR2" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=L873Lxa+31+KXE3rwv/O5G9PEYjkpCTeABnTliJ/uBU=; b=tnf44HR2GmFU6ZSbOhhFCqbyQ7 7rQapB9L7J/K3qKBrneA43P8S5Xd6RWgxwjjiDHJvWjM+ksq2J3hbiYy4yh302Hlv5iw4bLP1i0u8 zIT6kJEXaYnMT+rEIcHPstcaJ/HLYJBFVya2pBC9kD9BFHCvKwEZ5nEbR24pgAGKW38vYXcXJZsOs OOWOxB4IAMkPAKQJN5/gL0/mHv27+Obapr8r/5uT1Wx1WpbXSwM7aiFHom9gUZynQrSxtVYQA2G53 lFlwmVqt3jy+zJ+IHDmQ3b119dvcuDl7MKPDAnimw4C6SF2GtlDN10W6Kw+Ilnosb887TgS8BaulC YTyTjWNQ==; Received: from 2a02-8389-2341-5b80-39d3-4735-9a3c-88d8.cable.dynamic.v6.surfer.at ([2a02:8389:2341:5b80:39d3:4735:9a3c:88d8] 
helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux)) id 1rVAGq-00000003VcL-3QK3; Wed, 31 Jan 2024 13:04:53 +0000 From: Christoph Hellwig To: Jens Axboe Cc: "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , Paolo Bonzini , Stefan Hajnoczi , "Martin K. Petersen" , Damien Le Moal , Keith Busch , Sagi Grimberg , linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, virtualization@lists.linux.dev, Hannes Reinecke Subject: [PATCH 13/14] loop: pass queue_limits to blk_mq_alloc_disk Date: Wed, 31 Jan 2024 14:03:59 +0100 Message-Id: <20240131130400.625836-14-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240131130400.625836-1-hch@lst.de> References: <20240131130400.625836-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Pass the max_hw_sectors limit that loop sets at initialization time directly to blk_mq_alloc_disk instead of updating it right after the allocation. Signed-off-by: Christoph Hellwig Reviewed-by: Damien Le Moal Reviewed-by: Hannes Reinecke Reviewed-by: Chaitanya Kulkarni --- drivers/block/loop.c | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/drivers/block/loop.c b/drivers/block/loop.c index 7abeb586942677..26c8ea79086798 100644 --- a/drivers/block/loop.c +++ b/drivers/block/loop.c @@ -1971,6 +1971,12 @@ static const struct blk_mq_ops loop_mq_ops = { static int loop_add(int i) { + struct queue_limits lim = { + /* + * Random number picked from the historic block max_sectors cap. + */ + .max_hw_sectors = 2560u, + }; struct loop_device *lo; struct gendisk *disk; int err; @@ -2014,16 +2020,13 @@ static int loop_add(int i) if (err) goto out_free_idr; - disk = lo->lo_disk = blk_mq_alloc_disk(&lo->tag_set, NULL, lo); + disk = lo->lo_disk = blk_mq_alloc_disk(&lo->tag_set, &lim, lo); if (IS_ERR(disk)) { err = PTR_ERR(disk); goto out_cleanup_tags; } lo->lo_queue = lo->lo_disk->queue; - /* random number picked from the history block max_sectors cap */ - blk_queue_max_hw_sectors(lo->lo_queue, 2560u); - /* * By default, we do buffer IO, so it doesn't make sense to enable * merge because the I/O submitted to backing file is handled page by From patchwork Wed Jan 31 13:04:00 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13539430 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 46BA47B3CB; Wed, 31 Jan 2024 13:05:03 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.137.202.133 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706706304; cv=none; b=ITxdF9ARqMoIi3AA1KdzgO1wPsnkq+jXCP9n6lmHtWurVRZLHEB2OERcHnHuynxs5CrwU0XUyv0dmVrC5QWbEKX+PDvBCcNeSqGjSZB1vJ92XPPYkvmx7GeDpxiRcoRn8PmJeE931AX72E5IlE4cu9n+3dYkFVKn+SbqW6A8jRw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706706304; c=relaxed/simple; bh=90MAXVsU8DQ5/M8oaifdXchTuoBp/LMtV7PkBl3c3Do=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version;
b=Rs9SMhCoYexZB2TfeylObSTDXIni48L4oevGZtce7NZWj7fN6iu16gIsUhVqcoyaOe3/NVUy94pNUR5CKhjiRjOn/s82kkOu95RTWetRQAsuxYk86kY1IV/osHXPcwBjA9GVL84gneCeWHyTPjqe35slPcipoJ+GtOSOY0Cqz3g= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de; spf=none smtp.mailfrom=bombadil.srs.infradead.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b=rMiY+rFa; arc=none smtp.client-ip=198.137.202.133 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=bombadil.srs.infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="rMiY+rFa" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=4QAQ97DCZjYDip+hpd7eamuJUudqymA8R+j6qvEpqqY=; b=rMiY+rFaUbr/mKljzGWjOtTtkg LCXGqq74mkL6lbwH3D6Ds26m1j+SIvTCsgEOs51iu9xt2jk3eMsxxHZu2vHxie26ByMFo5SX3hMPH 5nUs41PnCJxY3Z3FHlb6FLvE7UDZBzmbwo/1tvMvBh9QhifjD+X1j/JAvC7JPNPDafc4CB/2t1KWt WY0iyReQgBpFymaAvqZkCHB2QDix8ZycqYzfB3Zl6wQvn7vjUuR3i5M+tBeQnALt3oxUVWR6R62ZC WYpujIvaXL9nvb/WnavlCTpuXuuN/8qxgtrrDH3tE4AoBBe6x2KsQrp0utf+q1BegshwcIlcaFVFA BqF0tCCw==; Received: from 2a02-8389-2341-5b80-39d3-4735-9a3c-88d8.cable.dynamic.v6.surfer.at ([2a02:8389:2341:5b80:39d3:4735:9a3c:88d8] helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux)) id 1rVAGu-00000003VgV-2CEI; Wed, 31 Jan 2024 13:04:57 +0000 From: Christoph Hellwig To: Jens Axboe Cc: "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , Paolo Bonzini , Stefan Hajnoczi , "Martin K. Petersen" , Damien Le Moal , Keith Busch , Sagi Grimberg , linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, virtualization@lists.linux.dev, Hannes Reinecke Subject: [PATCH 14/14] loop: use the atomic queue limits update API Date: Wed, 31 Jan 2024 14:04:00 +0100 Message-Id: <20240131130400.625836-15-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240131130400.625836-1-hch@lst.de> References: <20240131130400.625836-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Pass the default limits to blk_mq_alloc_disk and then use the queue_limits_{start,commit}_update API to change the limits in an atomic way on existing loop gendisks. 
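The start/commit pair is the part of the series that generalizes beyond loop: a driver takes a snapshot of the current limits, edits the copy, and publishes the whole set in one validated step. A minimal sketch of the calling convention, modeled on the loop_reconfigure_limits() helper in the diff below (foo_* naming is hypothetical):

	static int foo_set_block_size(struct foo_device *dev, unsigned int bsize)
	{
		struct queue_limits lim;

		/* Begin an update and get a snapshot of the current limits. */
		lim = queue_limits_start_update(dev->queue);
		lim.logical_block_size = bsize;
		lim.physical_block_size = bsize;
		lim.io_min = bsize;
		/* Validate and publish all of the changes atomically. */
		return queue_limits_commit_update(dev->queue, &lim);
	}

If validation fails, queue_limits_commit_update returns an error and the previous limits stay in effect. Note that the loop conversion keeps the existing blk_mq_freeze_queue()/blk_mq_unfreeze_queue() bracketing around the update at call sites that need the queue quiesced.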
Signed-off-by: Christoph Hellwig Reviewed-by: Damien Le Moal Reviewed-by: Hannes Reinecke Reviewed-by: Chaitanya Kulkarni --- drivers/block/loop.c | 41 +++++++++++++++++++++++++---------------- 1 file changed, 25 insertions(+), 16 deletions(-) diff --git a/drivers/block/loop.c b/drivers/block/loop.c index 26c8ea79086798..28a95fd366fea5 100644 --- a/drivers/block/loop.c +++ b/drivers/block/loop.c @@ -750,11 +750,11 @@ static void loop_sysfs_exit(struct loop_device *lo) &loop_attribute_group); } -static void loop_config_discard(struct loop_device *lo) +static void loop_config_discard(struct loop_device *lo, + struct queue_limits *lim) { struct file *file = lo->lo_backing_file; struct inode *inode = file->f_mapping->host; - struct request_queue *q = lo->lo_queue; u32 granularity = 0, max_discard_sectors = 0; struct kstatfs sbuf; @@ -781,12 +781,12 @@ static void loop_config_discard(struct loop_device *lo) granularity = sbuf.f_bsize; } - blk_queue_max_discard_sectors(q, max_discard_sectors); - blk_queue_max_write_zeroes_sectors(q, max_discard_sectors); + lim->max_hw_discard_sectors = max_discard_sectors; + lim->max_write_zeroes_sectors = max_discard_sectors; if (max_discard_sectors) - q->limits.discard_granularity = granularity; + lim->discard_granularity = granularity; else - q->limits.discard_granularity = 0; + lim->discard_granularity = 0; } struct loop_worker { @@ -975,6 +975,20 @@ loop_set_status_from_info(struct loop_device *lo, return 0; } +static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize, + bool update_discard_settings) +{ + struct queue_limits lim; + + lim = queue_limits_start_update(lo->lo_queue); + lim.logical_block_size = bsize; + lim.physical_block_size = bsize; + lim.io_min = bsize; + if (update_discard_settings) + loop_config_discard(lo, &lim); + return queue_limits_commit_update(lo->lo_queue, &lim); +} + static int loop_configure(struct loop_device *lo, blk_mode_t mode, struct block_device *bdev, const struct loop_config *config) @@ -1072,11 +1086,10 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode, else bsize = 512; - blk_queue_logical_block_size(lo->lo_queue, bsize); - blk_queue_physical_block_size(lo->lo_queue, bsize); - blk_queue_io_min(lo->lo_queue, bsize); + error = loop_reconfigure_limits(lo, bsize, true); + if (WARN_ON_ONCE(error)) + goto out_unlock; - loop_config_discard(lo); loop_update_rotational(lo); loop_update_dio(lo); loop_sysfs_init(lo); @@ -1143,9 +1156,7 @@ static void __loop_clr_fd(struct loop_device *lo, bool release) lo->lo_offset = 0; lo->lo_sizelimit = 0; memset(lo->lo_file_name, 0, LO_NAME_SIZE); - blk_queue_logical_block_size(lo->lo_queue, 512); - blk_queue_physical_block_size(lo->lo_queue, 512); - blk_queue_io_min(lo->lo_queue, 512); + loop_reconfigure_limits(lo, 512, false); invalidate_disk(lo->lo_disk); loop_sysfs_exit(lo); /* let user-space know about this change */ @@ -1477,9 +1488,7 @@ static int loop_set_block_size(struct loop_device *lo, unsigned long arg) invalidate_bdev(lo->lo_device); blk_mq_freeze_queue(lo->lo_queue); - blk_queue_logical_block_size(lo->lo_queue, arg); - blk_queue_physical_block_size(lo->lo_queue, arg); - blk_queue_io_min(lo->lo_queue, arg); + err = loop_reconfigure_limits(lo, arg, false); loop_update_dio(lo); blk_mq_unfreeze_queue(lo->lo_queue);