From patchwork Mon Feb 12 06:45:55 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13552714
From: Christoph Hellwig
To: Jens Axboe
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Paolo Bonzini,
    Stefan Hajnoczi, "Martin K. Petersen", Damien Le Moal, Keith Busch,
    Sagi Grimberg, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, virtualization@lists.linux.dev,
    Chaitanya Kulkarni, Ming Lei, Hannes Reinecke
Subject: [PATCH 01/15] block: move max_{open,active}_zones to struct queue_limits
Date: Mon, 12 Feb 2024 07:45:55 +0100
Message-Id: <20240212064609.1327143-2-hch@lst.de>
In-Reply-To: <20240212064609.1327143-1-hch@lst.de>
References: <20240212064609.1327143-1-hch@lst.de>

The maximum number of open and active zones is a limit on the queue and
should be placed there so that we can include it in the upcoming queue
limits batch update API.

Signed-off-by: Christoph Hellwig
Reviewed-by: Keith Busch
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Ming Lei
Reviewed-by: Damien Le Moal
Reviewed-by: Martin K. Petersen
Reviewed-by: Hannes Reinecke
---
 include/linux/blkdev.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index d7cac3de65b31b..de9251922f7583 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -189,8 +189,6 @@ struct gendisk {
 	 * blk_mq_unfreeze_queue().
 	 */
 	unsigned int		nr_zones;
-	unsigned int		max_open_zones;
-	unsigned int		max_active_zones;
 	unsigned long		*conv_zones_bitmap;
 	unsigned long		*seq_zones_wlock;
 #endif /* CONFIG_BLK_DEV_ZONED */
@@ -307,6 +305,8 @@ struct queue_limits {
 	unsigned char		discard_misaligned;
 	unsigned char		raid_partial_stripes_expensive;
 	bool			zoned;
+	unsigned int		max_open_zones;
+	unsigned int		max_active_zones;
 
 	/*
 	 * Drivers that set dma_alignment to less than 511 must be prepared to
@@ -639,23 +639,23 @@ static inline bool disk_zone_is_seq(struct gendisk *disk, sector_t sector)
 static inline void disk_set_max_open_zones(struct gendisk *disk,
 		unsigned int max_open_zones)
 {
-	disk->max_open_zones = max_open_zones;
+	disk->queue->limits.max_open_zones = max_open_zones;
 }
 
 static inline void disk_set_max_active_zones(struct gendisk *disk,
 		unsigned int max_active_zones)
 {
-	disk->max_active_zones = max_active_zones;
+	disk->queue->limits.max_active_zones = max_active_zones;
 }
 
 static inline unsigned int bdev_max_open_zones(struct block_device *bdev)
 {
-	return bdev->bd_disk->max_open_zones;
+	return bdev->bd_disk->queue->limits.max_open_zones;
 }
 
 static inline unsigned int bdev_max_active_zones(struct block_device *bdev)
 {
-	return bdev->bd_disk->max_active_zones;
+	return bdev->bd_disk->queue->limits.max_active_zones;
 }
 
 #else /* CONFIG_BLK_DEV_ZONED */
From patchwork Mon Feb 12 06:45:56 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13552713
From: Christoph Hellwig
To: Jens Axboe
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Paolo Bonzini,
    Stefan Hajnoczi, "Martin K. Petersen", Damien Le Moal, Keith Busch,
    Sagi Grimberg, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, virtualization@lists.linux.dev,
    Chaitanya Kulkarni, Ming Lei, Hannes Reinecke
Subject: [PATCH 02/15] block: refactor disk_update_readahead
Date: Mon, 12 Feb 2024 07:45:56 +0100
Message-Id: <20240212064609.1327143-3-hch@lst.de>
In-Reply-To: <20240212064609.1327143-1-hch@lst.de>
References: <20240212064609.1327143-1-hch@lst.de>

Factor out a blk_apply_bdi_limits helper that can be used with an
explicit queue_limits argument, which will be useful later.

Signed-off-by: Christoph Hellwig
Reviewed-by: Keith Busch
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Ming Lei
Reviewed-by: Damien Le Moal
Reviewed-by: Martin K. Petersen
Reviewed-by: Hannes Reinecke
---
 block/blk-settings.c | 21 ++++++++++++---------
 1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 06ea91e51b8b2e..f16d3fec6658e5 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -85,6 +85,17 @@ void blk_set_stacking_limits(struct queue_limits *lim)
 }
 EXPORT_SYMBOL(blk_set_stacking_limits);
 
+static void blk_apply_bdi_limits(struct backing_dev_info *bdi,
+		struct queue_limits *lim)
+{
+	/*
+	 * For read-ahead of large files to be effective, we need to read ahead
+	 * at least twice the optimal I/O size.
+	 */
+	bdi->ra_pages = max(lim->io_opt * 2 / PAGE_SIZE, VM_READAHEAD_PAGES);
+	bdi->io_pages = lim->max_sectors >> PAGE_SECTORS_SHIFT;
+}
+
 /**
  * blk_queue_bounce_limit - set bounce buffer limit for queue
  * @q: the request queue for the device
@@ -393,15 +404,7 @@ EXPORT_SYMBOL(blk_queue_alignment_offset);
 
 void disk_update_readahead(struct gendisk *disk)
 {
-	struct request_queue *q = disk->queue;
-
-	/*
-	 * For read-ahead of large files to be effective, we need to read ahead
-	 * at least twice the optimal I/O size.
-	 */
-	disk->bdi->ra_pages =
-		max(queue_io_opt(q) * 2 / PAGE_SIZE, VM_READAHEAD_PAGES);
-	disk->bdi->io_pages = queue_max_sectors(q) >> (PAGE_SHIFT - 9);
+	blk_apply_bdi_limits(disk->bdi, &disk->queue->limits);
 }
 EXPORT_SYMBOL_GPL(disk_update_readahead);
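To make the read-ahead arithmetic concrete, a worked example, assuming
4 KiB pages (so VM_READAHEAD_PAGES = SZ_128K / PAGE_SIZE = 32) and
PAGE_SECTORS_SHIFT = 3:

	/* io_opt = 1 MiB: ra_pages = max(2 * 1048576 / 4096, 32) = 512 pages (2 MiB) */
	/* io_opt = 0 (unset): ra_pages = max(0, 32) = 32 pages (128 KiB default)     */
	/* max_sectors = 2560 (1280 KiB): io_pages = 2560 >> 3 = 320 pages            */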
From patchwork Mon Feb 12 06:45:57 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13552715
From: Christoph Hellwig
To: Jens Axboe
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Paolo Bonzini,
    Stefan Hajnoczi, "Martin K. Petersen", Damien Le Moal, Keith Busch,
    Sagi Grimberg, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, virtualization@lists.linux.dev
Subject: [PATCH 03/15] block: decouple blk_set_stacking_limits from blk_set_default_limits
Date: Mon, 12 Feb 2024 07:45:57 +0100
Message-Id: <20240212064609.1327143-4-hch@lst.de>
In-Reply-To: <20240212064609.1327143-1-hch@lst.de>
References: <20240212064609.1327143-1-hch@lst.de>

blk_set_stacking_limits uses very little from blk_set_default_limits.
Open code these initializations in preparation for rewriting
blk_set_default_limits.

Signed-off-by: Christoph Hellwig
Reviewed-by: Damien Le Moal
Reviewed-by: Hannes Reinecke
---
 block/blk-settings.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index f16d3fec6658e5..1cae2db41490d2 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -65,13 +65,16 @@ void blk_set_default_limits(struct queue_limits *lim)
  * blk_set_stacking_limits - set default limits for stacking devices
  * @lim:  the queue_limits structure to reset
  *
- * Description:
- *   Returns a queue_limit struct to its default state. Should be used
- *   by stacking drivers like DM that have no internal limits.
+ * Prepare queue limits for applying limits from underlying devices using
+ * blk_stack_limits().
  */
 void blk_set_stacking_limits(struct queue_limits *lim)
 {
-	blk_set_default_limits(lim);
+	memset(lim, 0, sizeof(*lim));
+	lim->logical_block_size = lim->physical_block_size = lim->io_min = 512;
+	lim->discard_granularity = 512;
+	lim->dma_alignment = 511;
+	lim->seg_boundary_mask = BLK_SEG_BOUNDARY_MASK;
 
 	/* Inherit limits from component devices */
 	lim->max_segments = USHRT_MAX;
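A sketch of the intended caller pattern, for context (illustrative, not
part of the patch): a stacking driver such as DM starts from these
permissive defaults and then folds in the limits of each component device.

	struct queue_limits lim;

	blk_set_stacking_limits(&lim);		/* start from "no limit" values */
	/* narrow the limits with each underlying device */
	if (blk_stack_limits(&lim, &bdev_get_queue(bdev)->limits,
			     get_start_sect(bdev)))
		pr_warn("component device is misaligned\n");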
Petersen" , Damien Le Moal , Keith Busch , Sagi Grimberg , linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, virtualization@lists.linux.dev Subject: [PATCH 04/15] block: add an API to atomically update queue limits Date: Mon, 12 Feb 2024 07:45:58 +0100 Message-Id: <20240212064609.1327143-5-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240212064609.1327143-1-hch@lst.de> References: <20240212064609.1327143-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Add a new queue_limits_{start,commit}_update pair of functions that allows taking an atomic snapshot of queue limits, update it, and commit it if it passes validity checking. Also use the low-level validation helper to implement blk_set_default_limits instead of duplicating the initialization. Signed-off-by: Christoph Hellwig Reviewed-by: Damien Le Moal --- block/blk-core.c | 1 + block/blk-settings.c | 228 ++++++++++++++++++++++++++++++++++------- block/blk.h | 4 +- include/linux/blkdev.h | 23 +++++ 4 files changed, 218 insertions(+), 38 deletions(-) diff --git a/block/blk-core.c b/block/blk-core.c index 2b11d8325fde68..cb56724a8dfb25 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -425,6 +425,7 @@ struct request_queue *blk_alloc_queue(int node_id) mutex_init(&q->debugfs_mutex); mutex_init(&q->sysfs_lock); mutex_init(&q->sysfs_dir_lock); + mutex_init(&q->limits_lock); mutex_init(&q->rq_qos_mutex); spin_lock_init(&q->queue_lock); diff --git a/block/blk-settings.c b/block/blk-settings.c index 1cae2db41490d2..27b9b4a2a85395 100644 --- a/block/blk-settings.c +++ b/block/blk-settings.c @@ -25,42 +25,6 @@ void blk_queue_rq_timeout(struct request_queue *q, unsigned int timeout) } EXPORT_SYMBOL_GPL(blk_queue_rq_timeout); -/** - * blk_set_default_limits - reset limits to default values - * @lim: the queue_limits structure to reset - * - * Description: - * Returns a queue_limit struct to its default state. 
- */ -void blk_set_default_limits(struct queue_limits *lim) -{ - lim->max_segments = BLK_MAX_SEGMENTS; - lim->max_discard_segments = 1; - lim->max_integrity_segments = 0; - lim->seg_boundary_mask = BLK_SEG_BOUNDARY_MASK; - lim->virt_boundary_mask = 0; - lim->max_segment_size = BLK_MAX_SEGMENT_SIZE; - lim->max_sectors = lim->max_hw_sectors = BLK_SAFE_MAX_SECTORS; - lim->max_user_sectors = lim->max_dev_sectors = 0; - lim->chunk_sectors = 0; - lim->max_write_zeroes_sectors = 0; - lim->max_zone_append_sectors = 0; - lim->max_discard_sectors = 0; - lim->max_hw_discard_sectors = 0; - lim->max_secure_erase_sectors = 0; - lim->discard_granularity = 512; - lim->discard_alignment = 0; - lim->discard_misaligned = 0; - lim->logical_block_size = lim->physical_block_size = lim->io_min = 512; - lim->bounce = BLK_BOUNCE_NONE; - lim->alignment_offset = 0; - lim->io_opt = 0; - lim->misaligned = 0; - lim->zoned = false; - lim->zone_write_granularity = 0; - lim->dma_alignment = 511; -} - /** * blk_set_stacking_limits - set default limits for stacking devices * @lim: the queue_limits structure to reset @@ -99,6 +63,198 @@ static void blk_apply_bdi_limits(struct backing_dev_info *bdi, bdi->io_pages = lim->max_sectors >> PAGE_SECTORS_SHIFT; } +static int blk_validate_zoned_limits(struct queue_limits *lim) +{ + if (!lim->zoned) { + if (WARN_ON_ONCE(lim->max_open_zones) || + WARN_ON_ONCE(lim->max_active_zones) || + WARN_ON_ONCE(lim->zone_write_granularity) || + WARN_ON_ONCE(lim->max_zone_append_sectors)) + return -EINVAL; + return 0; + } + + if (WARN_ON_ONCE(!IS_ENABLED(CONFIG_BLK_DEV_ZONED))) + return -EINVAL; + + if (lim->zone_write_granularity < lim->logical_block_size) + lim->zone_write_granularity = lim->logical_block_size; + + if (lim->max_zone_append_sectors) { + /* + * The Zone Append size is limited by the maximum I/O size + * and the zone size given that it can't span zones. + */ + lim->max_zone_append_sectors = + min3(lim->max_hw_sectors, + lim->max_zone_append_sectors, + lim->chunk_sectors); + } + + return 0; +} + +/* + * Check that the limits in lim are valid, initialize defaults for unset + * values, and cap values based on others where needed. + */ +static int blk_validate_limits(struct queue_limits *lim) +{ + unsigned int max_hw_sectors; + + /* + * Unless otherwise specified, default to 512 byte logical blocks and a + * physical block size equal to the logical block size. + */ + if (!lim->logical_block_size) + lim->logical_block_size = SECTOR_SIZE; + if (lim->physical_block_size < lim->logical_block_size) + lim->physical_block_size = lim->logical_block_size; + + /* + * The minimum I/O size defaults to the physical block size unless + * explicitly overridden. + */ + if (lim->io_min < lim->physical_block_size) + lim->io_min = lim->physical_block_size; + + /* + * max_hw_sectors has a somewhat weird default for historical reason, + * but driver really should set their own instead of relying on this + * value. + * + * The block layer relies on the fact that every driver can + * handle at lest a page worth of data per I/O, and needs the value + * aligned to the logical block size. + */ + if (!lim->max_hw_sectors) + lim->max_hw_sectors = BLK_SAFE_MAX_SECTORS; + if (WARN_ON_ONCE(lim->max_hw_sectors < PAGE_SECTORS)) + return -EINVAL; + lim->max_hw_sectors = round_down(lim->max_hw_sectors, + lim->logical_block_size >> SECTOR_SHIFT); + + /* + * The actual max_sectors value is a complex beast and also takes the + * max_dev_sectors value (set by SCSI ULPs) and a user configurable + * value into account. 
The ->max_sectors value is always calculated + * from these, so directly setting it won't have any effect. + */ + max_hw_sectors = min_not_zero(lim->max_hw_sectors, + lim->max_dev_sectors); + if (lim->max_user_sectors) { + if (lim->max_user_sectors > max_hw_sectors || + lim->max_user_sectors < PAGE_SIZE / SECTOR_SIZE) + return -EINVAL; + lim->max_sectors = min(max_hw_sectors, lim->max_user_sectors); + } else { + lim->max_sectors = min(max_hw_sectors, BLK_DEF_MAX_SECTORS_CAP); + } + lim->max_sectors = round_down(lim->max_sectors, + lim->logical_block_size >> SECTOR_SHIFT); + + /* + * Random default for the maximum number of sectors. Driver should not + * rely on this and set their own. + */ + if (!lim->max_segments) + lim->max_segments = BLK_MAX_SEGMENTS; + + lim->max_discard_sectors = lim->max_hw_discard_sectors; + if (!lim->max_discard_segments) + lim->max_discard_segments = 1; + + if (lim->discard_granularity < lim->physical_block_size) + lim->discard_granularity = lim->physical_block_size; + + /* + * By default there is no limit on the segment boundary alignment, + * but if there is one it can't be smaller than the page size as + * that would break all the normal I/O patterns. + */ + if (!lim->seg_boundary_mask) + lim->seg_boundary_mask = BLK_SEG_BOUNDARY_MASK; + if (WARN_ON_ONCE(lim->seg_boundary_mask < PAGE_SIZE - 1)) + return -EINVAL; + + /* + * The maximum segment size has an odd historic 64k default that + * drivers probably should override. Just like the I/O size we + * require drivers to at least handle a full page per segment. + */ + if (!lim->max_segment_size) + lim->max_segment_size = BLK_MAX_SEGMENT_SIZE; + if (WARN_ON_ONCE(lim->max_segment_size < PAGE_SIZE)) + return -EINVAL; + + /* + * Devices that require a virtual boundary do not support scatter/gather + * I/O natively, but instead require a descriptor list entry for each + * page (which might not be identical to the Linux PAGE_SIZE). Because + * of that they are not limited by our notion of "segment size". + */ + if (lim->virt_boundary_mask) { + if (WARN_ON_ONCE(lim->max_segment_size && + lim->max_segment_size != UINT_MAX)) + return -EINVAL; + lim->max_segment_size = UINT_MAX; + } + + /* + * We require drivers to at least do logical block aligned I/O, but + * historically could not check for that due to the separate calls + * to set the limits. Once the transition is finished the check + * below should be narrowed down to check the logical block size. + */ + if (!lim->dma_alignment) + lim->dma_alignment = SECTOR_SIZE - 1; + if (WARN_ON_ONCE(lim->dma_alignment > PAGE_SIZE)) + return -EINVAL; + + if (lim->alignment_offset) { + lim->alignment_offset &= (lim->physical_block_size - 1); + lim->misaligned = 0; + } + + return blk_validate_zoned_limits(lim); +} + +/* + * Set the default limits for a newly allocated queue. @lim contains the + * initial limits set by the driver, which could be no limit in which case + * all fields are cleared to zero. + */ +int blk_set_default_limits(struct queue_limits *lim) +{ + return blk_validate_limits(lim); +} + +/** + * queue_limits_commit_update - commit an atomic update of queue limits + * @q: queue to update + * @lim: limits to apply + * + * Apply the limits in @lim that were obtained from queue_limits_start_update() + * and updated by the caller to @q. + * + * Returns 0 if successful, else a negative error code. 
+ */ +int queue_limits_commit_update(struct request_queue *q, + struct queue_limits *lim) + __releases(q->limits_lock) +{ + int error = blk_validate_limits(lim); + + if (!error) { + q->limits = *lim; + if (q->disk) + blk_apply_bdi_limits(q->disk->bdi, lim); + } + mutex_unlock(&q->limits_lock); + return error; +} +EXPORT_SYMBOL_GPL(queue_limits_commit_update); + /** * blk_queue_bounce_limit - set bounce buffer limit for queue * @q: the request queue for the device diff --git a/block/blk.h b/block/blk.h index 913c93838a01bf..7c30e2ac8ebcd3 100644 --- a/block/blk.h +++ b/block/blk.h @@ -330,7 +330,7 @@ void blk_rq_set_mixed_merge(struct request *rq); bool blk_rq_merge_ok(struct request *rq, struct bio *bio); enum elv_merge blk_try_merge(struct request *rq, struct bio *bio); -void blk_set_default_limits(struct queue_limits *lim); +int blk_set_default_limits(struct queue_limits *lim); int blk_dev_init(void); /* @@ -448,7 +448,7 @@ static inline void bio_release_page(struct bio *bio, struct page *page) unpin_user_page(page); } -struct request_queue *blk_alloc_queue(int node_id); +struct request_queue *blk_alloc_queue(struct queue_limits *lim, int node_id); int disk_scan_partitions(struct gendisk *disk, blk_mode_t mode); diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index de9251922f7583..97c01efed68253 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -473,6 +473,7 @@ struct request_queue { struct mutex sysfs_lock; struct mutex sysfs_dir_lock; + struct mutex limits_lock; /* * for reusing dead hctx instance in case of updating @@ -861,6 +862,28 @@ static inline unsigned int blk_chunk_sectors_left(sector_t offset, return chunk_sectors - (offset & (chunk_sectors - 1)); } +/** + * queue_limits_start_update - start an atomic update of queue limits + * @q: queue to update + * + * This functions starts an atomic update of the queue limits. It takes a lock + * to prevent other updates and returns a snapshot of the current limits that + * the caller can modify. The caller must call queue_limits_commit_update() + * to finish the update. + * + * Context: process context. The caller must have frozen the queue or ensured + * that there is outstanding I/O by other means. 
+ */ +static inline struct queue_limits +queue_limits_start_update(struct request_queue *q) + __acquires(q->limits_lock) +{ + mutex_lock(&q->limits_lock); + return q->limits; +} +int queue_limits_commit_update(struct request_queue *q, + struct queue_limits *lim); + /* * Access functions for manipulating queue properties */ From patchwork Mon Feb 12 06:45:59 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13552717 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 540BCF514; Mon, 12 Feb 2024 06:46:36 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.137.202.133 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707720397; cv=none; b=Gjj/irqoHIFEBDnrjV7opow3g7loj+4MFFsea1uIYf1FSC3mACmunUpyY3sQ6fGBoztSmiPGKqvPewxzTVUJQMS1i9/vY24L22hbnugf2M4PYPOF6H+LSpqbfTWXr3GGi6T5zeCH5QUWARPcDGrHIP5fKKfzuAeQu1soi+EcIzY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707720397; c=relaxed/simple; bh=Kz/wGhjwA898M4xM8AiUp7jHwoiOpA40vXAi1bOb9ew=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=YMiaYE93l00qQDeqtXkFPFvsS38zrGQ+pZIjihHJJ+9hakj2pjbx9FWFaGsUIxbfHXgg6JbOe13cuvnP35fdehm1l3fPpFrqnwFXW5UeWi18cN826HH8tqFWDEUA3QoBVIAMb49KutGDE0PttHmjt7OmwFfTXD4iHDOeWGBz+iU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de; spf=none smtp.mailfrom=bombadil.srs.infradead.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b=nYb4RSNO; arc=none smtp.client-ip=198.137.202.133 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=bombadil.srs.infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="nYb4RSNO" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=dbUUuuh+TiTwmyxcs080LBUsDQdjlLe3kybykALbaaw=; b=nYb4RSNOxEqtl+/HtAWdQ3l/Sq Zs/JO3czt0A1Q18cQiX9WuvU1aanozlFYYnGAP0xEko7f+Bm5f2yPjb/tAnfAEFINSA3wKQMfy4uW PrkVgRbHE9tGF0QBSjjYFd/9LVxLO7z2ekKYB+rOcHf4f5dbmaM4uVgnic0LrhWTIMvh7xuc9Fzk8 9lQJgFvEf07ISVNLuGB7DzeCrB4LO7u1O7uUKFLKOYqwTbDwrvP/jVLZGJ+OPwQYcqwQvK0zmgzLb 01JGHc7SS2A1CtdnkPbCz+kL7BYYrN7IB2+e76LjcHjapcA2NvSu64lS0b81zzunutctEUvDHcMWD r5/FQQTA==; Received: from [2001:4bb8:190:6eab:75e9:7295:a6e3:c35d] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux)) id 1rZQ5H-00000004PcW-3zZ3; Mon, 12 Feb 2024 06:46:32 +0000 From: Christoph Hellwig To: Jens Axboe Cc: "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , Paolo Bonzini , Stefan Hajnoczi , "Martin K. 
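The expected usage pattern, pieced together from the sysfs conversions
later in this series (patches 05 and 07), looks like this sketch; the
field updated and the wrapper function are illustrative only:

	/* minimal sketch: update one limit atomically */
	static int example_set_user_sectors(struct request_queue *q,
			unsigned int max_sectors_kb)
	{
		struct queue_limits lim;
		int err;

		blk_mq_freeze_queue(q);			/* no requests in flight */
		lim = queue_limits_start_update(q);	/* takes q->limits_lock */
		lim.max_user_sectors = max_sectors_kb << 1;
		err = queue_limits_commit_update(q, &lim); /* validate, apply, unlock */
		blk_mq_unfreeze_queue(q);
		return err;
	}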
Petersen" , Damien Le Moal , Keith Busch , Sagi Grimberg , linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, virtualization@lists.linux.dev, John Garry , Chaitanya Kulkarni , Ming Lei , Hannes Reinecke Subject: [PATCH 05/15] block: use queue_limits_commit_update in queue_max_sectors_store Date: Mon, 12 Feb 2024 07:45:59 +0100 Message-Id: <20240212064609.1327143-6-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240212064609.1327143-1-hch@lst.de> References: <20240212064609.1327143-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Convert queue_max_sectors_store to use queue_limits_commit_update to check and update the max_sectors limit and freeze the queue before doing so to ensure we don't have requests in flight while changing the limits. Note that this removes the previously held queue_lock that doesn't protect against any other reader or writer. Signed-off-by: Christoph Hellwig Reviewed-by: Keith Busch Reviewed-by: John Garry Reviewed-by: Chaitanya Kulkarni Reviewed-by: Ming Lei Reviewed-by: Damien Le Moal Reviewed-by: Martin K. Petersen Reviewed-by: Hannes Reinecke --- block/blk-sysfs.c | 37 ++++++++++++------------------------- 1 file changed, 12 insertions(+), 25 deletions(-) diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c index 6b2429cad81af1..26607f9825cb05 100644 --- a/block/blk-sysfs.c +++ b/block/blk-sysfs.c @@ -226,35 +226,22 @@ static ssize_t queue_zone_append_max_show(struct request_queue *q, char *page) static ssize_t queue_max_sectors_store(struct request_queue *q, const char *page, size_t count) { - unsigned long var; - unsigned int max_sectors_kb, - max_hw_sectors_kb = queue_max_hw_sectors(q) >> 1, - page_kb = 1 << (PAGE_SHIFT - 10); - ssize_t ret = queue_var_store(&var, page, count); + unsigned long max_sectors_kb; + struct queue_limits lim; + ssize_t ret; + int err; + ret = queue_var_store(&max_sectors_kb, page, count); if (ret < 0) return ret; - max_sectors_kb = (unsigned int)var; - max_hw_sectors_kb = min_not_zero(max_hw_sectors_kb, - q->limits.max_dev_sectors >> 1); - if (max_sectors_kb == 0) { - q->limits.max_user_sectors = 0; - max_sectors_kb = min(max_hw_sectors_kb, - BLK_DEF_MAX_SECTORS_CAP >> 1); - } else { - if (max_sectors_kb > max_hw_sectors_kb || - max_sectors_kb < page_kb) - return -EINVAL; - q->limits.max_user_sectors = max_sectors_kb << 1; - } - - spin_lock_irq(&q->queue_lock); - q->limits.max_sectors = max_sectors_kb << 1; - if (q->disk) - q->disk->bdi->io_pages = max_sectors_kb >> (PAGE_SHIFT - 10); - spin_unlock_irq(&q->queue_lock); - + blk_mq_freeze_queue(q); + lim = queue_limits_start_update(q); + lim.max_user_sectors = max_sectors_kb << 1; + err = queue_limits_commit_update(q, &lim); + blk_mq_unfreeze_queue(q); + if (err) + return err; return ret; } From patchwork Mon Feb 12 06:46:00 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13552718 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0286CF9E4; Mon, 12 Feb 2024 06:46:39 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.137.202.133 ARC-Seal: i=1; 
From patchwork Mon Feb 12 06:46:00 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13552718
From: Christoph Hellwig
To: Jens Axboe
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Paolo Bonzini,
    Stefan Hajnoczi, "Martin K. Petersen", Damien Le Moal, Keith Busch,
    Sagi Grimberg, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, virtualization@lists.linux.dev,
    Chaitanya Kulkarni, Ming Lei, Hannes Reinecke
Subject: [PATCH 06/15] block: add a max_user_discard_sectors queue limit
Date: Mon, 12 Feb 2024 07:46:00 +0100
Message-Id: <20240212064609.1327143-7-hch@lst.de>
In-Reply-To: <20240212064609.1327143-1-hch@lst.de>
References: <20240212064609.1327143-1-hch@lst.de>

Add a new max_user_discard_sectors limit that mirrors max_user_sectors
and stores the value that the user manually set.  This now allows
updates of the max_hw_discard_sectors to not worry about the user
limit.

Signed-off-by: Christoph Hellwig
Reviewed-by: Keith Busch
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Ming Lei
Reviewed-by: Martin K. Petersen
Reviewed-by: Hannes Reinecke
---
 block/blk-settings.c   | 18 +++++++++++++++---
 block/blk-sysfs.c      | 17 ++++++++---------
 include/linux/blkdev.h |  1 +
 3 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 27b9b4a2a85395..7139c13fe73484 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -37,6 +37,7 @@ void blk_set_stacking_limits(struct queue_limits *lim)
 	memset(lim, 0, sizeof(*lim));
 	lim->logical_block_size = lim->physical_block_size = lim->io_min = 512;
 	lim->discard_granularity = 512;
+	lim->max_user_discard_sectors = UINT_MAX;
 	lim->dma_alignment = 511;
 	lim->seg_boundary_mask = BLK_SEG_BOUNDARY_MASK;
 
@@ -160,7 +161,9 @@ static int blk_validate_limits(struct queue_limits *lim)
 	if (!lim->max_segments)
 		lim->max_segments = BLK_MAX_SEGMENTS;
 
-	lim->max_discard_sectors = lim->max_hw_discard_sectors;
+	lim->max_discard_sectors =
+		min(lim->max_hw_discard_sectors, lim->max_user_discard_sectors);
+
 	if (!lim->max_discard_segments)
 		lim->max_discard_segments = 1;
 
@@ -226,6 +229,12 @@ static int blk_validate_limits(struct queue_limits *lim)
  */
 int blk_set_default_limits(struct queue_limits *lim)
 {
+	/*
+	 * Most defaults are set by capping the bounds in blk_validate_limits,
+	 * but max_user_discard_sectors is special and needs an explicit
+	 * initialization to the max value here.
+	 */
+	lim->max_user_discard_sectors = UINT_MAX;
 	return blk_validate_limits(lim);
 }
 
@@ -347,8 +356,11 @@ EXPORT_SYMBOL(blk_queue_chunk_sectors);
 void blk_queue_max_discard_sectors(struct request_queue *q,
 		unsigned int max_discard_sectors)
 {
-	q->limits.max_hw_discard_sectors = max_discard_sectors;
-	q->limits.max_discard_sectors = max_discard_sectors;
+	struct queue_limits *lim = &q->limits;
+
+	lim->max_hw_discard_sectors = max_discard_sectors;
+	lim->max_discard_sectors =
+		min(max_discard_sectors, lim->max_user_discard_sectors);
 }
 EXPORT_SYMBOL(blk_queue_max_discard_sectors);
 
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 26607f9825cb05..a1ec27f0ba4150 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -174,23 +174,22 @@ static ssize_t queue_discard_max_show(struct request_queue *q, char *page)
 static ssize_t queue_discard_max_store(struct request_queue *q,
 				       const char *page, size_t count)
 {
-	unsigned long max_discard;
-	ssize_t ret = queue_var_store(&max_discard, page, count);
+	unsigned long max_discard_bytes;
+	ssize_t ret;
 
+	ret = queue_var_store(&max_discard_bytes, page, count);
 	if (ret < 0)
 		return ret;
 
-	if (max_discard & (q->limits.discard_granularity - 1))
+	if (max_discard_bytes & (q->limits.discard_granularity - 1))
 		return -EINVAL;
 
-	max_discard >>= 9;
-	if (max_discard > UINT_MAX)
+	if ((max_discard_bytes >> SECTOR_SHIFT) > UINT_MAX)
 		return -EINVAL;
 
-	if (max_discard > q->limits.max_hw_discard_sectors)
-		max_discard = q->limits.max_hw_discard_sectors;
-
-	q->limits.max_discard_sectors = max_discard;
+	q->limits.max_user_discard_sectors = max_discard_bytes >> SECTOR_SHIFT;
+	q->limits.max_discard_sectors = min(q->limits.max_hw_discard_sectors,
+					    q->limits.max_user_discard_sectors);
 	return ret;
 }
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 97c01efed68253..64cca723619083 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -290,6 +290,7 @@ struct queue_limits {
 	unsigned int		io_opt;
 	unsigned int		max_discard_sectors;
 	unsigned int		max_hw_discard_sectors;
+	unsigned int		max_user_discard_sectors;
 	unsigned int		max_secure_erase_sectors;
 	unsigned int		max_write_zeroes_sectors;
 	unsigned int		max_zone_append_sectors;
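A short walkthrough of the resulting behavior (values invented for
illustration): the effective limit is always min(hardware, user), so a
driver raising its hardware limit no longer clobbers a tighter user
setting.

	lim.max_user_discard_sectors = 2048;	/* user wrote 1 MiB to discard_max_bytes */
	lim.max_hw_discard_sectors = 8192;	/* driver later reports 4 MiB */
	/* after blk_validate_limits(): lim.max_discard_sectors == 2048 (1 MiB) */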
Petersen" , Damien Le Moal , Keith Busch , Sagi Grimberg , linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, virtualization@lists.linux.dev, Chaitanya Kulkarni , Ming Lei , Hannes Reinecke Subject: [PATCH 07/15] block: use queue_limits_commit_update in queue_discard_max_store Date: Mon, 12 Feb 2024 07:46:01 +0100 Message-Id: <20240212064609.1327143-8-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240212064609.1327143-1-hch@lst.de> References: <20240212064609.1327143-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Convert queue_discard_max_store to use queue_limits_commit_update to check and update the max_discard_sectors limit and freeze the queue before doing so to ensure we don't have requests in flight while changing the limits. Signed-off-by: Christoph Hellwig Reviewed-by: Keith Busch Reviewed-by: Chaitanya Kulkarni Reviewed-by: Ming Lei Reviewed-by: Damien Le Moal Reviewed-by: Martin K. Petersen Reviewed-by: Hannes Reinecke --- block/blk-sysfs.c | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-) diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c index a1ec27f0ba4150..8c8f69d8ba48ee 100644 --- a/block/blk-sysfs.c +++ b/block/blk-sysfs.c @@ -175,7 +175,9 @@ static ssize_t queue_discard_max_store(struct request_queue *q, const char *page, size_t count) { unsigned long max_discard_bytes; + struct queue_limits lim; ssize_t ret; + int err; ret = queue_var_store(&max_discard_bytes, page, count); if (ret < 0) @@ -187,9 +189,14 @@ static ssize_t queue_discard_max_store(struct request_queue *q, if ((max_discard_bytes >> SECTOR_SHIFT) > UINT_MAX) return -EINVAL; - q->limits.max_user_discard_sectors = max_discard_bytes >> SECTOR_SHIFT; - q->limits.max_discard_sectors = min(q->limits.max_hw_discard_sectors, - q->limits.max_user_discard_sectors); + blk_mq_freeze_queue(q); + lim = queue_limits_start_update(q); + lim.max_user_discard_sectors = max_discard_bytes >> SECTOR_SHIFT; + err = queue_limits_commit_update(q, &lim); + blk_mq_unfreeze_queue(q); + + if (err) + return err; return ret; } From patchwork Mon Feb 12 06:46:02 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13552720 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B4461101CA; Mon, 12 Feb 2024 06:46:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.137.202.133 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707720408; cv=none; b=VvNvHq0WnEDAMvIrLcteeiExdDHeUnogQMscQ8OgvruIB+oKfEJOU/qXhw/2oPZga/EaaMTmqz5FEK8isPzwt3ltsgziahd3sH3hATaXPY8wjIbI9JdH+QOzI+LyumT3/JdT9QjPl42MkEPvt9fy4a78tmcv3Bm9I04yovdxB68= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707720408; c=relaxed/simple; bh=dy9F0T7Jymcv2sybwXcvojiBudIBi/oVQYbgKIRMgDM=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=QgxfIOqa51ztMKmUA+xcjRRsUX4KCUlpFAJ3XoKiyksfFbOmkRYxJvo+oRyoIigiNWkbWYvuSQmGsZ0f5fd60rRQlLVV+Mdq/+anqPV6IfXjpsDF+VlYI7WA+jYZ6v7n+d9Hz5rQKv4WN87gvHo7VWkreuuI81OVIlS6Spuvuw0= ARC-Authentication-Results: i=1; 
From patchwork Mon Feb 12 06:46:02 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13552720
From: Christoph Hellwig
To: Jens Axboe
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Paolo Bonzini,
    Stefan Hajnoczi, "Martin K. Petersen", Damien Le Moal, Keith Busch,
    Sagi Grimberg, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, virtualization@lists.linux.dev
Subject: [PATCH 08/15] block: pass a queue_limits argument to blk_alloc_queue
Date: Mon, 12 Feb 2024 07:46:02 +0100
Message-Id: <20240212064609.1327143-9-hch@lst.de>
In-Reply-To: <20240212064609.1327143-1-hch@lst.de>
References: <20240212064609.1327143-1-hch@lst.de>

Pass a queue_limits to blk_alloc_queue and apply it after validating and
capping the values using blk_validate_limits.  This will allow allocating
queues with valid queue limits instead of setting the values one at a
time later.

Signed-off-by: Christoph Hellwig
Reviewed-by: Damien Le Moal
---
 block/blk-core.c | 26 ++++++++++++++++++--------
 block/blk-mq.c   |  7 ++++---
 block/genhd.c    |  5 +++--
 3 files changed, 25 insertions(+), 13 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index cb56724a8dfb25..a16b5abdbbf56f 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -394,24 +394,34 @@ static void blk_timeout_work(struct work_struct *work)
 {
 }
 
-struct request_queue *blk_alloc_queue(int node_id)
+struct request_queue *blk_alloc_queue(struct queue_limits *lim, int node_id)
 {
 	struct request_queue *q;
+	int error;
 
 	q = kmem_cache_alloc_node(blk_requestq_cachep, GFP_KERNEL | __GFP_ZERO,
 				  node_id);
 	if (!q)
-		return NULL;
+		return ERR_PTR(-ENOMEM);
 
 	q->last_merge = NULL;
 
 	q->id = ida_alloc(&blk_queue_ida, GFP_KERNEL);
-	if (q->id < 0)
+	if (q->id < 0) {
+		error = q->id;
 		goto fail_q;
+	}
 
 	q->stats = blk_alloc_queue_stats();
-	if (!q->stats)
+	if (!q->stats) {
+		error = -ENOMEM;
 		goto fail_id;
+	}
+
+	error = blk_set_default_limits(lim);
+	if (error)
+		goto fail_stats;
+	q->limits = *lim;
 
 	q->node = node_id;
 
@@ -436,12 +446,12 @@ struct request_queue *blk_alloc_queue(int node_id)
 	 * Init percpu_ref in atomic mode so that it's faster to shutdown.
 	 * See blk_register_queue() for details.
 	 */
-	if (percpu_ref_init(&q->q_usage_counter,
+	error = percpu_ref_init(&q->q_usage_counter,
 				blk_queue_usage_counter_release,
-				PERCPU_REF_INIT_ATOMIC, GFP_KERNEL))
+				PERCPU_REF_INIT_ATOMIC, GFP_KERNEL);
+	if (error)
 		goto fail_stats;
 
-	blk_set_default_limits(&q->limits);
 	q->nr_requests = BLKDEV_DEFAULT_RQ;
 
 	return q;
@@ -452,7 +462,7 @@ struct request_queue *blk_alloc_queue(int node_id)
 	ida_free(&blk_queue_ida, q->id);
 fail_q:
 	kmem_cache_free(blk_requestq_cachep, q);
-	return NULL;
+	return ERR_PTR(error);
 }
 
 /**
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 6d2f7b5caa01d8..9dd8055cc5246d 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -4086,12 +4086,13 @@ void blk_mq_release(struct request_queue *q)
 static struct request_queue *blk_mq_init_queue_data(struct blk_mq_tag_set *set,
 		void *queuedata)
 {
+	struct queue_limits lim = { };
 	struct request_queue *q;
 	int ret;
 
-	q = blk_alloc_queue(set->numa_node);
-	if (!q)
-		return ERR_PTR(-ENOMEM);
+	q = blk_alloc_queue(&lim, set->numa_node);
+	if (IS_ERR(q))
+		return q;
 	q->queuedata = queuedata;
 	ret = blk_mq_init_allocated_queue(set, q);
 	if (ret) {
diff --git a/block/genhd.c b/block/genhd.c
index d74fb5b4ae6818..7a8fd57c51f73c 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -1393,11 +1393,12 @@ struct gendisk *__alloc_disk_node(struct request_queue *q, int node_id,
 
 struct gendisk *__blk_alloc_disk(int node, struct lock_class_key *lkclass)
 {
+	struct queue_limits lim = { };
 	struct request_queue *q;
 	struct gendisk *disk;
 
-	q = blk_alloc_queue(node);
-	if (!q)
+	q = blk_alloc_queue(&lim, node);
+	if (IS_ERR(q))
 		return NULL;
 
 	disk = __alloc_disk_node(q, node, lkclass);
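A minimal sketch of the new allocation flow (the limit values below are
made up; callers with nothing to impose pass an empty structure, which
blk_set_default_limits()/blk_validate_limits() fills in, as
blk_mq_init_queue_data does above):

	struct queue_limits lim = {
		.logical_block_size	= 4096,		/* example value */
		.max_hw_sectors		= 2048,		/* example value, 1 MiB */
	};
	struct request_queue *q;

	q = blk_alloc_queue(&lim, NUMA_NO_NODE);
	if (IS_ERR(q))
		return PTR_ERR(q);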
s=arc-20240116; t=1707720411; cv=none; b=TufqxdSqRJHSmMcSObntjV6z8n5fC3N5+5i3Hi4EsToHXYna8A2acWmEIYMQe+0e9luUQ+9PK6jtWA03+yQfekKbn1o3lGypDUASfcnfK69BtDzh47DuVlQOOgL91GJae0oxBD0b/fnfySNnFViDWVV311oGYmYimjlauGrHE4I= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707720411; c=relaxed/simple; bh=XJzcfOzc3JDPs/ZCFT7NFSAeC4ZCdM0r+9NUSHwxn/w=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=WgOnqzzvvEehxpgjQK3xMsoG2G0LFHdvorYsIBZ0VOKkM8/qXaeFFnW+cKBg8xdDBqyj9LlHEdP8Ts5dLK7MhaaUCeryGF5ueePjMy+rEi1SjXwwluAWx/9sphldC1IiqPLbdDa/giiYPeVBouhHDfAqePiNBgrXelv5oiTmzRk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de; spf=none smtp.mailfrom=bombadil.srs.infradead.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b=12ybg/qQ; arc=none smtp.client-ip=198.137.202.133 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=bombadil.srs.infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="12ybg/qQ" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=1k6xxB4x/bX1C4/kyVky5c4x3xsH6h1Ntas75pPgGMo=; b=12ybg/qQxCTF0SvNgeO4hIv00m H3fTJHBzVIwgXStYu7jCc3Res8wVYJ5SALz/DYBrvYlMw76oaHirWfKJzwTAK/+jhJzvCW8VreOoy 3DjBy8WH1Hw1IppXusxLdQP+mDpb/3EA7PbzhAKRi9O7kP+49nKU/HkVd/Zrhd/hsJDBJPPII1auz w9iiTNx2qTv+zcesZ0loYuZqMbBYSPptj9rMx1mJvjQtUMdh9futDCgf1nyhfYbMgOsJmuF+z1r0X RAg6d/0akyb5xU0N0gqllf7VaDsQmY18WUu9ACTZhxPRB+MI6vCj63wzWoHv3PhwMPYzuhhtnvela gqJyDhMg==; Received: from [2001:4bb8:190:6eab:75e9:7295:a6e3:c35d] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux)) id 1rZQ5T-00000004Pm5-2oW4; Mon, 12 Feb 2024 06:46:44 +0000 From: Christoph Hellwig To: Jens Axboe Cc: "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , Paolo Bonzini , Stefan Hajnoczi , "Martin K. Petersen" , Damien Le Moal , Keith Busch , Sagi Grimberg , linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, virtualization@lists.linux.dev, John Garry , Chaitanya Kulkarni , Ming Lei , Hannes Reinecke Subject: [PATCH 09/15] block: pass a queue_limits argument to blk_mq_init_queue Date: Mon, 12 Feb 2024 07:46:03 +0100 Message-Id: <20240212064609.1327143-10-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240212064609.1327143-1-hch@lst.de> References: <20240212064609.1327143-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Pass a queue_limits to blk_mq_init_queue and apply it if non-NULL. This will allow allocating queues with valid queue limits instead of setting the values one at a time later. Also rename the function to blk_mq_alloc_queue as that is a much better name for a function that allocates a queue and always pass the queuedata argument instead of having a separate version for the extra argument. 
Signed-off-by: Christoph Hellwig
Reviewed-by: Keith Busch
Reviewed-by: John Garry
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Ming Lei
Reviewed-by: Damien Le Moal
Reviewed-by: Martin K. Petersen
Reviewed-by: Hannes Reinecke
---
 block/blk-mq.c            | 21 ++++++++-------------
 block/bsg-lib.c           |  2 +-
 drivers/nvme/host/apple.c |  2 +-
 drivers/nvme/host/core.c  |  6 +++---
 drivers/scsi/scsi_scan.c  |  2 +-
 drivers/ufs/core/ufshcd.c |  2 +-
 include/linux/blk-mq.h    |  3 ++-
 7 files changed, 17 insertions(+), 21 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 9dd8055cc5246d..f6499bbd89be90 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -4083,14 +4083,14 @@ void blk_mq_release(struct request_queue *q)
 	blk_mq_sysfs_deinit(q);
 }
 
-static struct request_queue *blk_mq_init_queue_data(struct blk_mq_tag_set *set,
-		void *queuedata)
+struct request_queue *blk_mq_alloc_queue(struct blk_mq_tag_set *set,
+		struct queue_limits *lim, void *queuedata)
 {
-	struct queue_limits lim = { };
+	struct queue_limits default_lim = { };
 	struct request_queue *q;
 	int ret;
 
-	q = blk_alloc_queue(&lim, set->numa_node);
+	q = blk_alloc_queue(lim ? lim : &default_lim, set->numa_node);
 	if (IS_ERR(q))
 		return q;
 	q->queuedata = queuedata;
@@ -4101,20 +4101,15 @@ static struct request_queue *blk_mq_init_queue_data(struct blk_mq_tag_set *set,
 	}
 	return q;
 }
-
-struct request_queue *blk_mq_init_queue(struct blk_mq_tag_set *set)
-{
-	return blk_mq_init_queue_data(set, NULL);
-}
-EXPORT_SYMBOL(blk_mq_init_queue);
+EXPORT_SYMBOL(blk_mq_alloc_queue);
 
 /**
  * blk_mq_destroy_queue - shutdown a request queue
  * @q: request queue to shutdown
  *
- * This shuts down a request queue allocated by blk_mq_init_queue(). All future
+ * This shuts down a request queue allocated by blk_mq_alloc_queue(). All future
  * requests will be failed with -ENODEV. The caller is responsible for dropping
- * the reference from blk_mq_init_queue() by calling blk_put_queue().
+ * the reference from blk_mq_alloc_queue() by calling blk_put_queue().
 *
 * Context: can sleep
 */
@@ -4141,7 +4136,7 @@ struct gendisk *__blk_mq_alloc_disk(struct blk_mq_tag_set *set, void *queuedata,
 	struct request_queue *q;
 	struct gendisk *disk;
 
-	q = blk_mq_init_queue_data(set, queuedata);
+	q = blk_mq_alloc_queue(set, NULL, queuedata);
 	if (IS_ERR(q))
 		return ERR_CAST(q);
 
diff --git a/block/bsg-lib.c b/block/bsg-lib.c
index b3acdbdb6e7ea8..bcc7dee6abced6 100644
--- a/block/bsg-lib.c
+++ b/block/bsg-lib.c
@@ -383,7 +383,7 @@ struct request_queue *bsg_setup_queue(struct device *dev, const char *name,
 	if (blk_mq_alloc_tag_set(set))
 		goto out_tag_set;
 
-	q = blk_mq_init_queue(set);
+	q = blk_mq_alloc_queue(set, NULL, NULL);
 	if (IS_ERR(q)) {
 		ret = PTR_ERR(q);
 		goto out_queue;
diff --git a/drivers/nvme/host/apple.c b/drivers/nvme/host/apple.c
index c727cd1f264bf6..a480cdeac2883c 100644
--- a/drivers/nvme/host/apple.c
+++ b/drivers/nvme/host/apple.c
@@ -1516,7 +1516,7 @@ static int apple_nvme_probe(struct platform_device *pdev)
 		goto put_dev;
 	}
 
-	anv->ctrl.admin_q = blk_mq_init_queue(&anv->admin_tagset);
+	anv->ctrl.admin_q = blk_mq_alloc_queue(&anv->admin_tagset, NULL, NULL);
 	if (IS_ERR(anv->ctrl.admin_q)) {
 		ret = -ENOMEM;
 		goto put_dev;
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 0d124a8ca9c321..3afd449f0ead4e 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4366,14 +4366,14 @@ int nvme_alloc_admin_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
 	if (ret)
 		return ret;
 
-	ctrl->admin_q = blk_mq_init_queue(set);
+	ctrl->admin_q = blk_mq_alloc_queue(set, NULL, NULL);
 	if (IS_ERR(ctrl->admin_q)) {
 		ret = PTR_ERR(ctrl->admin_q);
 		goto out_free_tagset;
 	}
 
 	if (ctrl->ops->flags & NVME_F_FABRICS) {
-		ctrl->fabrics_q = blk_mq_init_queue(set);
+		ctrl->fabrics_q = blk_mq_alloc_queue(set, NULL, NULL);
 		if (IS_ERR(ctrl->fabrics_q)) {
 			ret = PTR_ERR(ctrl->fabrics_q);
 			goto out_cleanup_admin_q;
@@ -4437,7 +4437,7 @@ int nvme_alloc_io_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
 		return ret;
 
 	if (ctrl->ops->flags & NVME_F_FABRICS) {
-		ctrl->connect_q = blk_mq_init_queue(set);
+		ctrl->connect_q = blk_mq_alloc_queue(set, NULL, NULL);
 		if (IS_ERR(ctrl->connect_q)) {
 			ret = PTR_ERR(ctrl->connect_q);
 			goto out_free_tag_set;
diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
index 44680f65ea1455..9969f4e2f1c3d9 100644
--- a/drivers/scsi/scsi_scan.c
+++ b/drivers/scsi/scsi_scan.c
@@ -332,7 +332,7 @@ static struct scsi_device *scsi_alloc_sdev(struct scsi_target *starget,
 
 	sdev->sg_reserved_size = INT_MAX;
 
-	q = blk_mq_init_queue(&sdev->host->tag_set);
+	q = blk_mq_alloc_queue(&sdev->host->tag_set, NULL, NULL);
 	if (IS_ERR(q)) {
 		/* release fn is set up in scsi_sysfs_device_initialise, so
 		 * have to free and put manually here */
diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index 029d017fc1b66b..c502a86db16b30 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -10592,7 +10592,7 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
 	err = blk_mq_alloc_tag_set(&hba->tmf_tag_set);
 	if (err < 0)
 		goto out_remove_scsi_host;
-	hba->tmf_queue = blk_mq_init_queue(&hba->tmf_tag_set);
+	hba->tmf_queue = blk_mq_alloc_queue(&hba->tmf_tag_set, NULL, NULL);
 	if (IS_ERR(hba->tmf_queue)) {
 		err = PTR_ERR(hba->tmf_queue);
 		goto free_tmf_tag_set;
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 7a8150a5f05133..7d42c359e2ab28 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -692,7 +692,8 @@ struct gendisk *__blk_mq_alloc_disk(struct blk_mq_tag_set *set, void *queuedata,
 })
 struct gendisk *blk_mq_alloc_disk_for_queue(struct request_queue *q,
 		struct lock_class_key *lkclass);
-struct request_queue *blk_mq_init_queue(struct blk_mq_tag_set *);
+struct request_queue *blk_mq_alloc_queue(struct blk_mq_tag_set *set,
+		struct queue_limits *lim, void *queuedata);
 int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
 		struct request_queue *q);
 void blk_mq_destroy_queue(struct request_queue *);
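The net effect on callers: passing NULL for the limits keeps today's behaviour, while a pre-filled queue_limits applies limits at allocation time. A minimal sketch — the tag_set and the limit values below are assumptions for illustration, not taken from the patch:

	struct queue_limits lim = {
		.logical_block_size	= 4096,	/* assumed device property */
		.max_hw_sectors		= 1024,
	};
	struct request_queue *q;

	/* NULL limits: behaves exactly like the old blk_mq_init_queue() */
	q = blk_mq_alloc_queue(&tag_set, NULL, NULL);

	/* or: allocate with valid limits from the start */
	q = blk_mq_alloc_queue(&tag_set, &lim, NULL);
	if (IS_ERR(q))
		return PTR_ERR(q);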
Petersen" , Damien Le Moal , Keith Busch , Sagi Grimberg , linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, virtualization@lists.linux.dev, John Garry , Chaitanya Kulkarni , Ming Lei , Hannes Reinecke Subject: [PATCH 10/15] block: pass a queue_limits argument to blk_mq_alloc_disk Date: Mon, 12 Feb 2024 07:46:04 +0100 Message-Id: <20240212064609.1327143-11-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240212064609.1327143-1-hch@lst.de> References: <20240212064609.1327143-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Pass a queue_limits to blk_mq_alloc_disk and apply it if non-NULL. This will allow allocating queues with valid queue limits instead of setting the values one at a time later. Signed-off-by: Christoph Hellwig Reviewed-by: Keith Busch Reviewed-by: John Garry Reviewed-by: Chaitanya Kulkarni Reviewed-by: Ming Lei Reviewed-by: Damien Le Moal Reviewed-by: Martin K. Petersen Reviewed-by: Hannes Reinecke --- arch/um/drivers/ubd_kern.c | 2 +- block/blk-mq.c | 5 +++-- drivers/block/amiflop.c | 2 +- drivers/block/aoe/aoeblk.c | 2 +- drivers/block/ataflop.c | 2 +- drivers/block/floppy.c | 2 +- drivers/block/loop.c | 2 +- drivers/block/mtip32xx/mtip32xx.c | 2 +- drivers/block/nbd.c | 2 +- drivers/block/null_blk/main.c | 2 +- drivers/block/ps3disk.c | 2 +- drivers/block/rbd.c | 2 +- drivers/block/rnbd/rnbd-clt.c | 2 +- drivers/block/sunvdc.c | 2 +- drivers/block/swim.c | 2 +- drivers/block/swim3.c | 2 +- drivers/block/ublk_drv.c | 2 +- drivers/block/virtio_blk.c | 2 +- drivers/block/xen-blkfront.c | 2 +- drivers/block/z2ram.c | 2 +- drivers/cdrom/gdrom.c | 2 +- drivers/memstick/core/ms_block.c | 2 +- drivers/memstick/core/mspro_block.c | 2 +- drivers/mmc/core/queue.c | 2 +- drivers/mtd/mtd_blkdevs.c | 2 +- drivers/mtd/ubi/block.c | 2 +- drivers/nvme/host/core.c | 2 +- drivers/s390/block/dasd_genhd.c | 2 +- drivers/s390/block/scm_blk.c | 2 +- include/linux/blk-mq.h | 7 ++++--- 30 files changed, 35 insertions(+), 33 deletions(-) diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c index 92ee2697ff3984..25f1b18ce7d4e9 100644 --- a/arch/um/drivers/ubd_kern.c +++ b/arch/um/drivers/ubd_kern.c @@ -906,7 +906,7 @@ static int ubd_add(int n, char **error_out) if (err) goto out; - disk = blk_mq_alloc_disk(&ubd_dev->tag_set, ubd_dev); + disk = blk_mq_alloc_disk(&ubd_dev->tag_set, NULL, ubd_dev); if (IS_ERR(disk)) { err = PTR_ERR(disk); goto out_cleanup_tags; diff --git a/block/blk-mq.c b/block/blk-mq.c index f6499bbd89be90..6abb4ce46baa1e 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -4130,13 +4130,14 @@ void blk_mq_destroy_queue(struct request_queue *q) } EXPORT_SYMBOL(blk_mq_destroy_queue); -struct gendisk *__blk_mq_alloc_disk(struct blk_mq_tag_set *set, void *queuedata, +struct gendisk *__blk_mq_alloc_disk(struct blk_mq_tag_set *set, + struct queue_limits *lim, void *queuedata, struct lock_class_key *lkclass) { struct request_queue *q; struct gendisk *disk; - q = blk_mq_alloc_queue(set, NULL, queuedata); + q = blk_mq_alloc_queue(set, lim, queuedata); if (IS_ERR(q)) return ERR_CAST(q); diff --git a/drivers/block/amiflop.c b/drivers/block/amiflop.c index 2b98114a9fe092..a25414228e4741 100644 --- a/drivers/block/amiflop.c +++ b/drivers/block/amiflop.c @@ -1779,7 +1779,7 @@ static int fd_alloc_disk(int drive, int system) struct gendisk *disk; int err; - disk 
-	disk = blk_mq_alloc_disk(&unit[drive].tag_set, NULL);
+	disk = blk_mq_alloc_disk(&unit[drive].tag_set, NULL, NULL);
 	if (IS_ERR(disk))
 		return PTR_ERR(disk);
 
diff --git a/drivers/block/aoe/aoeblk.c b/drivers/block/aoe/aoeblk.c
index b1b47d88f5db44..2ff6e2da8cc41c 100644
--- a/drivers/block/aoe/aoeblk.c
+++ b/drivers/block/aoe/aoeblk.c
@@ -371,7 +371,7 @@ aoeblk_gdalloc(void *vp)
 		goto err_mempool;
 	}
 
-	gd = blk_mq_alloc_disk(set, d);
+	gd = blk_mq_alloc_disk(set, NULL, d);
 	if (IS_ERR(gd)) {
 		pr_err("aoe: cannot allocate block queue for %ld.%d\n",
 			d->aoemajor, d->aoeminor);
diff --git a/drivers/block/ataflop.c b/drivers/block/ataflop.c
index 50949207798d2a..cacc4ba942a814 100644
--- a/drivers/block/ataflop.c
+++ b/drivers/block/ataflop.c
@@ -1994,7 +1994,7 @@ static int ataflop_alloc_disk(unsigned int drive, unsigned int type)
 {
 	struct gendisk *disk;
 
-	disk = blk_mq_alloc_disk(&unit[drive].tag_set, NULL);
+	disk = blk_mq_alloc_disk(&unit[drive].tag_set, NULL, NULL);
 	if (IS_ERR(disk))
 		return PTR_ERR(disk);
 
diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
index d0e41d52d6a9b5..6f765d221b3814 100644
--- a/drivers/block/floppy.c
+++ b/drivers/block/floppy.c
@@ -4515,7 +4515,7 @@ static int floppy_alloc_disk(unsigned int drive, unsigned int type)
 {
 	struct gendisk *disk;
 
-	disk = blk_mq_alloc_disk(&tag_sets[drive], NULL);
+	disk = blk_mq_alloc_disk(&tag_sets[drive], NULL, NULL);
 	if (IS_ERR(disk))
 		return PTR_ERR(disk);
 
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index f8145499da38c8..3f855cc79c29f5 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -2025,7 +2025,7 @@ static int loop_add(int i)
 	if (err)
 		goto out_free_idr;
 
-	disk = lo->lo_disk = blk_mq_alloc_disk(&lo->tag_set, lo);
+	disk = lo->lo_disk = blk_mq_alloc_disk(&lo->tag_set, NULL, lo);
 	if (IS_ERR(disk)) {
 		err = PTR_ERR(disk);
 		goto out_cleanup_tags;
diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
index b200950e8fb5f9..ac08dea73552f4 100644
--- a/drivers/block/mtip32xx/mtip32xx.c
+++ b/drivers/block/mtip32xx/mtip32xx.c
@@ -3431,7 +3431,7 @@ static int mtip_block_initialize(struct driver_data *dd)
 		goto block_queue_alloc_tag_error;
 	}
 
-	dd->disk = blk_mq_alloc_disk(&dd->tags, dd);
+	dd->disk = blk_mq_alloc_disk(&dd->tags, NULL, dd);
 	if (IS_ERR(dd->disk)) {
 		dev_err(&dd->pdev->dev,
 			"Unable to allocate request queue\n");
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 33a8f37bb6a1f5..30ae3cc12e7787 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -1823,7 +1823,7 @@ static struct nbd_device *nbd_dev_add(int index, unsigned int refs)
 	if (err < 0)
 		goto out_free_tags;
 
-	disk = blk_mq_alloc_disk(&nbd->tag_set, NULL);
+	disk = blk_mq_alloc_disk(&nbd->tag_set, NULL, NULL);
 	if (IS_ERR(disk)) {
 		err = PTR_ERR(disk);
 		goto out_free_idr;
diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 4281371c81fed1..eeb895ec6f34ae 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -2147,7 +2147,7 @@ static int null_add_dev(struct nullb_device *dev)
 		goto out_cleanup_queues;
 
 	nullb->tag_set->timeout = 5 * HZ;
-	nullb->disk = blk_mq_alloc_disk(nullb->tag_set, nullb);
+	nullb->disk = blk_mq_alloc_disk(nullb->tag_set, NULL, nullb);
 	if (IS_ERR(nullb->disk)) {
 		rv = PTR_ERR(nullb->disk);
 		goto out_cleanup_tags;
diff --git a/drivers/block/ps3disk.c b/drivers/block/ps3disk.c
index 36d7b36c60c76b..dfd3860df4f880 100644
--- a/drivers/block/ps3disk.c
+++ b/drivers/block/ps3disk.c
@@ -431,7 +431,7 @@ static int ps3disk_probe(struct ps3_system_bus_device *_dev)
 	if (error)
 		goto fail_teardown;
 
-	gendisk = blk_mq_alloc_disk(&priv->tag_set, dev);
+	gendisk = blk_mq_alloc_disk(&priv->tag_set, NULL, dev);
 	if (IS_ERR(gendisk)) {
 		dev_err(&dev->sbd.core, "%s:%u: blk_mq_alloc_disk failed\n",
 			__func__, __LINE__);
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 00ca8a1d8c46ff..6b4f1898a722a3 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -4966,7 +4966,7 @@ static int rbd_init_disk(struct rbd_device *rbd_dev)
 	if (err)
 		return err;
 
-	disk = blk_mq_alloc_disk(&rbd_dev->tag_set, rbd_dev);
+	disk = blk_mq_alloc_disk(&rbd_dev->tag_set, NULL, rbd_dev);
 	if (IS_ERR(disk)) {
 		err = PTR_ERR(disk);
 		goto out_tag_set;
diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
index 4044c369d22a5f..d51be4f2df61a3 100644
--- a/drivers/block/rnbd/rnbd-clt.c
+++ b/drivers/block/rnbd/rnbd-clt.c
@@ -1408,7 +1408,7 @@ static int rnbd_client_setup_device(struct rnbd_clt_dev *dev,
 	dev->size = le64_to_cpu(rsp->nsectors) *
 			le16_to_cpu(rsp->logical_block_size);
 
-	dev->gd = blk_mq_alloc_disk(&dev->sess->tag_set, dev);
+	dev->gd = blk_mq_alloc_disk(&dev->sess->tag_set, NULL, dev);
 	if (IS_ERR(dev->gd))
 		return PTR_ERR(dev->gd);
 	dev->queue = dev->gd->queue;
diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c
index 7bf4b48e2282e7..a1f74dd1eae5d5 100644
--- a/drivers/block/sunvdc.c
+++ b/drivers/block/sunvdc.c
@@ -824,7 +824,7 @@ static int probe_disk(struct vdc_port *port)
 	if (err)
 		return err;
 
-	g = blk_mq_alloc_disk(&port->tag_set, port);
+	g = blk_mq_alloc_disk(&port->tag_set, NULL, port);
 	if (IS_ERR(g)) {
 		printk(KERN_ERR PFX "%s: Could not allocate gendisk.\n",
 		       port->vio.name);
diff --git a/drivers/block/swim.c b/drivers/block/swim.c
index f85b6af414b431..16bdf62067d8b1 100644
--- a/drivers/block/swim.c
+++ b/drivers/block/swim.c
@@ -820,7 +820,7 @@ static int swim_floppy_init(struct swim_priv *swd)
 			goto exit_put_disks;
 
 		swd->unit[drive].disk =
-			blk_mq_alloc_disk(&swd->unit[drive].tag_set,
+			blk_mq_alloc_disk(&swd->unit[drive].tag_set, NULL,
 					  &swd->unit[drive]);
 		if (IS_ERR(swd->unit[drive].disk)) {
 			blk_mq_free_tag_set(&swd->unit[drive].tag_set);
diff --git a/drivers/block/swim3.c b/drivers/block/swim3.c
index c2bc85826358e9..a04756ac778ee8 100644
--- a/drivers/block/swim3.c
+++ b/drivers/block/swim3.c
@@ -1210,7 +1210,7 @@ static int swim3_attach(struct macio_dev *mdev,
 	if (rc)
 		goto out_unregister;
 
-	disk = blk_mq_alloc_disk(&fs->tag_set, fs);
+	disk = blk_mq_alloc_disk(&fs->tag_set, NULL, fs);
 	if (IS_ERR(disk)) {
 		rc = PTR_ERR(disk);
 		goto out_free_tag_set;
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 1dfb2e77898ba6..c5b6552707984b 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -2222,7 +2222,7 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub, struct io_uring_cmd *cmd)
 		goto out_unlock;
 	}
 
-	disk = blk_mq_alloc_disk(&ub->tag_set, NULL);
+	disk = blk_mq_alloc_disk(&ub->tag_set, NULL, NULL);
 	if (IS_ERR(disk)) {
 		ret = PTR_ERR(disk);
 		goto out_unlock;
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 5bf98fd6a651a5..a23fce4eca4408 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -1330,7 +1330,7 @@ static int virtblk_probe(struct virtio_device *vdev)
 	if (err)
 		goto out_free_vq;
 
-	vblk->disk = blk_mq_alloc_disk(&vblk->tag_set, vblk);
+	vblk->disk = blk_mq_alloc_disk(&vblk->tag_set, NULL, vblk);
 	if (IS_ERR(vblk->disk)) {
 		err = PTR_ERR(vblk->disk);
 		goto out_free_tags;
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 434fab30677743..4cc2884e748463 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -1136,7 +1136,7 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
 	if (err)
 		goto out_release_minors;
 
-	gd = blk_mq_alloc_disk(&info->tag_set, info);
+	gd = blk_mq_alloc_disk(&info->tag_set, NULL, info);
 	if (IS_ERR(gd)) {
 		err = PTR_ERR(gd);
 		goto out_free_tag_set;
diff --git a/drivers/block/z2ram.c b/drivers/block/z2ram.c
index 11493167b0a848..7c5f4e4d9b5037 100644
--- a/drivers/block/z2ram.c
+++ b/drivers/block/z2ram.c
@@ -318,7 +318,7 @@ static int z2ram_register_disk(int minor)
 	struct gendisk *disk;
 	int err;
 
-	disk = blk_mq_alloc_disk(&tag_set, NULL);
+	disk = blk_mq_alloc_disk(&tag_set, NULL, NULL);
 	if (IS_ERR(disk))
 		return PTR_ERR(disk);
 
diff --git a/drivers/cdrom/gdrom.c b/drivers/cdrom/gdrom.c
index d668b174ace92f..1d044779f5e42a 100644
--- a/drivers/cdrom/gdrom.c
+++ b/drivers/cdrom/gdrom.c
@@ -778,7 +778,7 @@ static int probe_gdrom(struct platform_device *devptr)
 	if (err)
 		goto probe_fail_free_cd_info;
 
-	gd.disk = blk_mq_alloc_disk(&gd.tag_set, NULL);
+	gd.disk = blk_mq_alloc_disk(&gd.tag_set, NULL, NULL);
 	if (IS_ERR(gd.disk)) {
 		err = PTR_ERR(gd.disk);
 		goto probe_fail_free_tag_set;
diff --git a/drivers/memstick/core/ms_block.c b/drivers/memstick/core/ms_block.c
index 04115cd92433bf..d3277c901d16bb 100644
--- a/drivers/memstick/core/ms_block.c
+++ b/drivers/memstick/core/ms_block.c
@@ -2093,7 +2093,7 @@ static int msb_init_disk(struct memstick_dev *card)
 	if (rc)
 		goto out_release_id;
 
-	msb->disk = blk_mq_alloc_disk(&msb->tag_set, card);
+	msb->disk = blk_mq_alloc_disk(&msb->tag_set, NULL, card);
 	if (IS_ERR(msb->disk)) {
 		rc = PTR_ERR(msb->disk);
 		goto out_free_tag_set;
diff --git a/drivers/memstick/core/mspro_block.c b/drivers/memstick/core/mspro_block.c
index 5a69ed33999b4c..db0e2a42ca3c32 100644
--- a/drivers/memstick/core/mspro_block.c
+++ b/drivers/memstick/core/mspro_block.c
@@ -1138,7 +1138,7 @@ static int mspro_block_init_disk(struct memstick_dev *card)
 	if (rc)
 		goto out_release_id;
 
-	msb->disk = blk_mq_alloc_disk(&msb->tag_set, card);
+	msb->disk = blk_mq_alloc_disk(&msb->tag_set, NULL, card);
 	if (IS_ERR(msb->disk)) {
 		rc = PTR_ERR(msb->disk);
 		goto out_free_tag_set;
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index a0a2412f62a730..67ad186d132a69 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -447,7 +447,7 @@ struct gendisk *mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card)
 		return ERR_PTR(ret);
 
-	disk = blk_mq_alloc_disk(&mq->tag_set, mq);
+	disk = blk_mq_alloc_disk(&mq->tag_set, NULL, mq);
 	if (IS_ERR(disk)) {
 		blk_mq_free_tag_set(&mq->tag_set);
 		return disk;
diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
index f0526dcc216276..b8878a2457afa7 100644
--- a/drivers/mtd/mtd_blkdevs.c
+++ b/drivers/mtd/mtd_blkdevs.c
@@ -333,7 +333,7 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
 		goto out_kfree_tag_set;
 
 	/* Create gendisk */
-	gd = blk_mq_alloc_disk(new->tag_set, new);
+	gd = blk_mq_alloc_disk(new->tag_set, NULL, new);
 	if (IS_ERR(gd)) {
 		ret = PTR_ERR(gd);
 		goto out_free_tag_set;
diff --git a/drivers/mtd/ubi/block.c b/drivers/mtd/ubi/block.c
index 654bd7372cd8c0..9be87c231a2eba 100644
--- a/drivers/mtd/ubi/block.c
+++ b/drivers/mtd/ubi/block.c
@@ -393,7 +393,7 @@ int ubiblock_create(struct ubi_volume_info *vi)
 
 	/* Initialize the gendisk of this ubiblock device */
-	gd = blk_mq_alloc_disk(&dev->tag_set, dev);
+	gd = blk_mq_alloc_disk(&dev->tag_set, NULL, dev);
 	if (IS_ERR(gd)) {
 		ret = PTR_ERR(gd);
 		goto out_free_tags;
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 3afd449f0ead4e..7a70170ac08613 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3688,7 +3688,7 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
 	if (!ns)
 		return;
 
-	disk = blk_mq_alloc_disk(ctrl->tagset, ns);
+	disk = blk_mq_alloc_disk(ctrl->tagset, NULL, ns);
 	if (IS_ERR(disk))
 		goto out_free_ns;
 	disk->fops = &nvme_bdev_ops;
diff --git a/drivers/s390/block/dasd_genhd.c b/drivers/s390/block/dasd_genhd.c
index 30e8ee583e980e..0465b706745f64 100644
--- a/drivers/s390/block/dasd_genhd.c
+++ b/drivers/s390/block/dasd_genhd.c
@@ -53,7 +53,7 @@ int dasd_gendisk_alloc(struct dasd_block *block)
 	if (rc)
 		return rc;
 
-	gdp = blk_mq_alloc_disk(&block->tag_set, block);
+	gdp = blk_mq_alloc_disk(&block->tag_set, NULL, block);
 	if (IS_ERR(gdp)) {
 		blk_mq_free_tag_set(&block->tag_set);
 		return PTR_ERR(gdp);
diff --git a/drivers/s390/block/scm_blk.c b/drivers/s390/block/scm_blk.c
index ade95e91b3c8db..d05b2e2799a47a 100644
--- a/drivers/s390/block/scm_blk.c
+++ b/drivers/s390/block/scm_blk.c
@@ -462,7 +462,7 @@ int scm_blk_dev_setup(struct scm_blk_dev *bdev, struct scm_device *scmdev)
 	if (ret)
 		goto out;
 
-	bdev->gendisk = blk_mq_alloc_disk(&bdev->tag_set, scmdev);
+	bdev->gendisk = blk_mq_alloc_disk(&bdev->tag_set, NULL, scmdev);
 	if (IS_ERR(bdev->gendisk)) {
 		ret = PTR_ERR(bdev->gendisk);
 		goto out_tag;
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 7d42c359e2ab28..390d35fa003295 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -682,13 +682,14 @@ enum {
 
 #define BLK_MQ_NO_HCTX_IDX	(-1U)
 
-struct gendisk *__blk_mq_alloc_disk(struct blk_mq_tag_set *set, void *queuedata,
+struct gendisk *__blk_mq_alloc_disk(struct blk_mq_tag_set *set,
+		struct queue_limits *lim, void *queuedata,
 		struct lock_class_key *lkclass);
-#define blk_mq_alloc_disk(set, queuedata)				\
+#define blk_mq_alloc_disk(set, lim, queuedata)				\
 ({									\
 	static struct lock_class_key __key;				\
 									\
-	__blk_mq_alloc_disk(set, queuedata, &__key);			\
+	__blk_mq_alloc_disk(set, lim, queuedata, &__key);		\
 })
 struct gendisk *blk_mq_alloc_disk_for_queue(struct request_queue *q,
 		struct lock_class_key *lkclass);
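Driver-side, the conversion is mechanical: existing callers gain a NULL argument, and drivers that know their limits up front can now pass them directly at disk allocation. A hypothetical driver sketch — the tag set, limit values, and driver data below are assumptions, not from the patch:

	/* Hypothetical driver: describe the hardware before the disk exists. */
	struct queue_limits lim = {
		.max_hw_sectors	= 256,	/* assumed hardware limit */
		.max_segments	= 32,
	};
	struct gendisk *disk;

	disk = blk_mq_alloc_disk(&drv->tag_set, &lim, drv);
	if (IS_ERR(disk))
		return PTR_ERR(disk);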
From patchwork Mon Feb 12 06:46:05 2024
X-Patchwork-Id: 13552723
From: Christoph Hellwig
To: Jens Axboe
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Paolo Bonzini,
 Stefan Hajnoczi, "Martin K. Petersen", Damien Le Moal, Keith Busch,
 Sagi Grimberg, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
 virtualization@lists.linux.dev, Chaitanya Kulkarni, Ming Lei,
 Hannes Reinecke
Subject: [PATCH 11/15] virtio_blk: split virtblk_probe
Date: Mon, 12 Feb 2024 07:46:05 +0100
Message-Id: <20240212064609.1327143-12-hch@lst.de>
In-Reply-To: <20240212064609.1327143-1-hch@lst.de>
References: <20240212064609.1327143-1-hch@lst.de>

Split out a virtblk_read_limits helper that just reads the various queue
limits, separating it from the higher-level probing logic.

Signed-off-by: Christoph Hellwig
Reviewed-by: Keith Busch
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Ming Lei
Reviewed-by: Damien Le Moal
Reviewed-by: Martin K. Petersen
Reviewed-by: Hannes Reinecke
---
 drivers/block/virtio_blk.c | 193 +++++++++++++++++++------------------
 1 file changed, 101 insertions(+), 92 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index a23fce4eca4408..dd46ccd9f84c7d 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -1248,31 +1248,17 @@ static const struct blk_mq_ops virtio_mq_ops = {
 static unsigned int virtblk_queue_depth;
 module_param_named(queue_depth, virtblk_queue_depth, uint, 0444);
 
-static int virtblk_probe(struct virtio_device *vdev)
+static int virtblk_read_limits(struct virtio_blk *vblk)
 {
-	struct virtio_blk *vblk;
-	struct request_queue *q;
-	int err, index;
-
+	struct request_queue *q = vblk->disk->queue;
+	struct virtio_device *vdev = vblk->vdev;
 	u32 v, blk_size, max_size, sg_elems, opt_io_size;
 	u32 max_discard_segs = 0;
 	u32 discard_granularity = 0;
 	u16 min_io_size;
 	u8 physical_block_exp, alignment_offset;
-	unsigned int queue_depth;
 	size_t max_dma_size;
-
-	if (!vdev->config->get) {
-		dev_err(&vdev->dev, "%s failure: config access disabled\n",
-			__func__);
-		return -EINVAL;
-	}
-
-	err = ida_alloc_range(&vd_index_ida, 0,
-			      minor_to_index(1 << MINORBITS) - 1, GFP_KERNEL);
-	if (err < 0)
-		goto out;
-	index = err;
+	int err;
 
 	/* We need to know how many segments before we allocate. */
 	err = virtio_cread_feature(vdev, VIRTIO_BLK_F_SEG_MAX,
@@ -1286,73 +1272,6 @@ static int virtblk_probe(struct virtio_device *vdev)
 	/* Prevent integer overflows and honor max vq size */
 	sg_elems = min_t(u32, sg_elems, VIRTIO_BLK_MAX_SG_ELEMS - 2);
 
-	vdev->priv = vblk = kmalloc(sizeof(*vblk), GFP_KERNEL);
-	if (!vblk) {
-		err = -ENOMEM;
-		goto out_free_index;
-	}
-
-	mutex_init(&vblk->vdev_mutex);
-
-	vblk->vdev = vdev;
-
-	INIT_WORK(&vblk->config_work, virtblk_config_changed_work);
-
-	err = init_vq(vblk);
-	if (err)
-		goto out_free_vblk;
-
-	/* Default queue sizing is to fill the ring. */
-	if (!virtblk_queue_depth) {
-		queue_depth = vblk->vqs[0].vq->num_free;
-		/* ... but without indirect descs, we use 2 descs per req */
-		if (!virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC))
-			queue_depth /= 2;
-	} else {
-		queue_depth = virtblk_queue_depth;
-	}
-
-	memset(&vblk->tag_set, 0, sizeof(vblk->tag_set));
-	vblk->tag_set.ops = &virtio_mq_ops;
-	vblk->tag_set.queue_depth = queue_depth;
-	vblk->tag_set.numa_node = NUMA_NO_NODE;
-	vblk->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
-	vblk->tag_set.cmd_size =
-		sizeof(struct virtblk_req) +
-		sizeof(struct scatterlist) * VIRTIO_BLK_INLINE_SG_CNT;
-	vblk->tag_set.driver_data = vblk;
-	vblk->tag_set.nr_hw_queues = vblk->num_vqs;
-	vblk->tag_set.nr_maps = 1;
-	if (vblk->io_queues[HCTX_TYPE_POLL])
-		vblk->tag_set.nr_maps = 3;
-
-	err = blk_mq_alloc_tag_set(&vblk->tag_set);
-	if (err)
-		goto out_free_vq;
-
-	vblk->disk = blk_mq_alloc_disk(&vblk->tag_set, NULL, vblk);
-	if (IS_ERR(vblk->disk)) {
-		err = PTR_ERR(vblk->disk);
-		goto out_free_tags;
-	}
-	q = vblk->disk->queue;
-
-	virtblk_name_format("vd", index, vblk->disk->disk_name, DISK_NAME_LEN);
-
-	vblk->disk->major = major;
-	vblk->disk->first_minor = index_to_minor(index);
-	vblk->disk->minors = 1 << PART_BITS;
-	vblk->disk->private_data = vblk;
-	vblk->disk->fops = &virtblk_fops;
-	vblk->index = index;
-
-	/* configure queue flush support */
-	virtblk_update_cache_mode(vdev);
-
-	/* If disk is read-only in the host, the guest should obey */
-	if (virtio_has_feature(vdev, VIRTIO_BLK_F_RO))
-		set_disk_ro(vblk->disk, 1);
-
 	/* We can handle whatever the host told us to handle. */
 	blk_queue_max_segments(q, sg_elems);
 
@@ -1381,7 +1300,7 @@ static int virtblk_probe(struct virtio_device *vdev)
 			dev_err(&vdev->dev,
 				"virtio_blk: invalid block size: 0x%x\n",
 				blk_size);
-			goto out_cleanup_disk;
+			return err;
 		}
 
 		blk_queue_logical_block_size(q, blk_size);
@@ -1455,8 +1374,7 @@ static int virtblk_probe(struct virtio_device *vdev)
 			if (!v) {
 				dev_err(&vdev->dev,
 					"virtio_blk: secure_erase_sector_alignment can't be 0\n");
-				err = -EINVAL;
-				goto out_cleanup_disk;
+				return -EINVAL;
 			}
 
 			discard_granularity = min_not_zero(discard_granularity, v);
@@ -1470,8 +1388,7 @@ static int virtblk_probe(struct virtio_device *vdev)
 			if (!v) {
 				dev_err(&vdev->dev,
 					"virtio_blk: max_secure_erase_sectors can't be 0\n");
-				err = -EINVAL;
-				goto out_cleanup_disk;
+				return -EINVAL;
 			}
 
 			blk_queue_max_secure_erase_sectors(q, v);
@@ -1485,8 +1402,7 @@ static int virtblk_probe(struct virtio_device *vdev)
 			if (!v) {
 				dev_err(&vdev->dev,
 					"virtio_blk: max_secure_erase_seg can't be 0\n");
-				err = -EINVAL;
-				goto out_cleanup_disk;
+				return -EINVAL;
 			}
 
 			max_discard_segs = min_not_zero(max_discard_segs, v);
@@ -1511,6 +1427,99 @@ static int virtblk_probe(struct virtio_device *vdev)
 			q->limits.discard_granularity = blk_size;
 	}
 
+	return 0;
+}
+
+static int virtblk_probe(struct virtio_device *vdev)
+{
+	struct virtio_blk *vblk;
+	struct request_queue *q;
+	int err, index;
+	unsigned int queue_depth;
+
+	if (!vdev->config->get) {
+		dev_err(&vdev->dev, "%s failure: config access disabled\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	err = ida_alloc_range(&vd_index_ida, 0,
+			      minor_to_index(1 << MINORBITS) - 1, GFP_KERNEL);
+	if (err < 0)
+		goto out;
+	index = err;
+
+	vdev->priv = vblk = kmalloc(sizeof(*vblk), GFP_KERNEL);
+	if (!vblk) {
+		err = -ENOMEM;
+		goto out_free_index;
+	}
+
+	mutex_init(&vblk->vdev_mutex);
+
+	vblk->vdev = vdev;
+
+	INIT_WORK(&vblk->config_work, virtblk_config_changed_work);
+
+	err = init_vq(vblk);
+	if (err)
+		goto out_free_vblk;
+
+	/* Default queue sizing is to fill the ring. */
+	if (!virtblk_queue_depth) {
+		queue_depth = vblk->vqs[0].vq->num_free;
+		/* ... but without indirect descs, we use 2 descs per req */
+		if (!virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC))
+			queue_depth /= 2;
+	} else {
+		queue_depth = virtblk_queue_depth;
+	}
+
+	memset(&vblk->tag_set, 0, sizeof(vblk->tag_set));
+	vblk->tag_set.ops = &virtio_mq_ops;
+	vblk->tag_set.queue_depth = queue_depth;
+	vblk->tag_set.numa_node = NUMA_NO_NODE;
+	vblk->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
+	vblk->tag_set.cmd_size =
+		sizeof(struct virtblk_req) +
+		sizeof(struct scatterlist) * VIRTIO_BLK_INLINE_SG_CNT;
+	vblk->tag_set.driver_data = vblk;
+	vblk->tag_set.nr_hw_queues = vblk->num_vqs;
+	vblk->tag_set.nr_maps = 1;
+	if (vblk->io_queues[HCTX_TYPE_POLL])
+		vblk->tag_set.nr_maps = 3;
+
+	err = blk_mq_alloc_tag_set(&vblk->tag_set);
+	if (err)
+		goto out_free_vq;
+
+	vblk->disk = blk_mq_alloc_disk(&vblk->tag_set, NULL, vblk);
+	if (IS_ERR(vblk->disk)) {
+		err = PTR_ERR(vblk->disk);
+		goto out_free_tags;
+	}
+	q = vblk->disk->queue;
+
+	virtblk_name_format("vd", index, vblk->disk->disk_name, DISK_NAME_LEN);
+
+	vblk->disk->major = major;
+	vblk->disk->first_minor = index_to_minor(index);
+	vblk->disk->minors = 1 << PART_BITS;
+	vblk->disk->private_data = vblk;
+	vblk->disk->fops = &virtblk_fops;
+	vblk->index = index;
+
+	/* configure queue flush support */
+	virtblk_update_cache_mode(vdev);
+
+	/* If disk is read-only in the host, the guest should obey */
+	if (virtio_has_feature(vdev, VIRTIO_BLK_F_RO))
+		set_disk_ro(vblk->disk, 1);
+
+	err = virtblk_read_limits(vblk);
+	if (err)
+		goto out_cleanup_disk;
+
 	virtblk_update_capacity(vblk, false);
 	virtio_device_ready(vdev);
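The shape of the refactor — a probe routine delegating limit discovery to a helper that only reads and validates — can be sketched generically. Names here are illustrative, not from the driver:

	/* Illustrative only: the extracted-helper pattern used above. */
	static int foo_read_limits(struct foo_dev *dev)
	{
		/* read device config; return -EINVAL/-ENODEV on bad values */
		return 0;
	}

	static int foo_probe(struct foo_dev *dev)
	{
		int err;

		/* allocation and setup first ... */
		err = foo_read_limits(dev);	/* one call site, easy to move later */
		if (err)
			return err;
		/* ... registration last */
		return 0;
	}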
From patchwork Mon Feb 12 06:46:06 2024
X-Patchwork-Id: 13552724
From: Christoph Hellwig
To: Jens Axboe
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Paolo Bonzini,
 Stefan Hajnoczi, "Martin K. Petersen", Damien Le Moal, Keith Busch,
 Sagi Grimberg, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
 virtualization@lists.linux.dev, Chaitanya Kulkarni, Ming Lei,
 Hannes Reinecke
Subject: [PATCH 12/15] virtio_blk: pass queue_limits to blk_mq_alloc_disk
Date: Mon, 12 Feb 2024 07:46:06 +0100
Message-Id: <20240212064609.1327143-13-hch@lst.de>
In-Reply-To: <20240212064609.1327143-1-hch@lst.de>
References: <20240212064609.1327143-1-hch@lst.de>

Call virtblk_read_limits, and most of virtblk_probe_zoned_device, before
allocating the gendisk (and thus the request_queue), and make them read
into a queue_limits structure instead. Pass this initialized queue_limits
to blk_mq_alloc_disk to set the queue up with the right parameters from
the start, and only leave a few final touches for zoned devices to be
done just before adding the disk.

Signed-off-by: Christoph Hellwig
Reviewed-by: Keith Busch
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Ming Lei
Reviewed-by: Damien Le Moal
Reviewed-by: Martin K. Petersen
Reviewed-by: Hannes Reinecke
---
 drivers/block/virtio_blk.c | 130 ++++++++++++++++++-------------------
 1 file changed, 64 insertions(+), 66 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index dd46ccd9f84c7d..d8b55874cd5950 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -720,16 +720,15 @@ static int virtblk_report_zones(struct gendisk *disk, sector_t sector,
 	return ret;
 }
 
-static int virtblk_probe_zoned_device(struct virtio_device *vdev,
-				       struct virtio_blk *vblk,
-				       struct request_queue *q)
+static int virtblk_read_zoned_limits(struct virtio_blk *vblk,
+		struct queue_limits *lim)
 {
+	struct virtio_device *vdev = vblk->vdev;
 	u32 v, wg;
 
 	dev_dbg(&vdev->dev, "probing host-managed zoned device\n");
 
-	disk_set_zoned(vblk->disk);
-	blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
+	lim->zoned = true;
 
 	virtio_cread(vdev, struct virtio_blk_config,
 		     zoned.max_open_zones, &v);
@@ -747,8 +746,8 @@ static int virtblk_probe_zoned_device(struct virtio_device *vdev,
 		dev_warn(&vdev->dev, "zero write granularity reported\n");
 		return -ENODEV;
 	}
-	blk_queue_physical_block_size(q, wg);
-	blk_queue_io_min(q, wg);
+	lim->physical_block_size = wg;
+	lim->io_min = wg;
 
 	dev_dbg(&vdev->dev, "write granularity = %u\n", wg);
 
@@ -764,13 +763,13 @@ static int virtblk_probe_zoned_device(struct virtio_device *vdev,
 			vblk->zone_sectors);
 		return -ENODEV;
 	}
-	blk_queue_chunk_sectors(q, vblk->zone_sectors);
+	lim->chunk_sectors = vblk->zone_sectors;
 	dev_dbg(&vdev->dev, "zone sectors = %u\n", vblk->zone_sectors);
 
 	if (virtio_has_feature(vdev, VIRTIO_BLK_F_DISCARD)) {
 		dev_warn(&vblk->vdev->dev,
 			 "ignoring negotiated F_DISCARD for zoned device\n");
-		blk_queue_max_discard_sectors(q, 0);
+		lim->max_hw_discard_sectors = 0;
 	}
 
 	virtio_cread(vdev, struct virtio_blk_config,
@@ -785,25 +784,21 @@ static int virtblk_probe_zoned_device(struct virtio_device *vdev,
 			wg, v);
 		return -ENODEV;
 	}
-	blk_queue_max_zone_append_sectors(q, v);
+	lim->max_zone_append_sectors = v;
 	dev_dbg(&vdev->dev, "max append sectors = %u\n", v);
 
-	return blk_revalidate_disk_zones(vblk->disk, NULL);
+	return 0;
 }
-
 #else
-
 /*
- * Zoned block device support is not configured in this kernel.
- * Host-managed zoned devices can't be supported, but others are
- * good to go as regular block devices.
+ * Zoned block device support is not configured in this kernel, host-managed
+ * zoned devices can't be supported.
 */
 #define virtblk_report_zones NULL
-
-static inline int virtblk_probe_zoned_device(struct virtio_device *vdev,
-			struct virtio_blk *vblk, struct request_queue *q)
+static inline int virtblk_read_zoned_limits(struct virtio_blk *vblk,
+		struct queue_limits *lim)
 {
-	dev_err(&vdev->dev,
+	dev_err(&vblk->vdev->dev,
 		"virtio_blk: zoned devices are not supported");
 	return -EOPNOTSUPP;
 }
@@ -1248,9 +1243,9 @@ static const struct blk_mq_ops virtio_mq_ops = {
 static unsigned int virtblk_queue_depth;
 module_param_named(queue_depth, virtblk_queue_depth, uint, 0444);
 
-static int virtblk_read_limits(struct virtio_blk *vblk)
+static int virtblk_read_limits(struct virtio_blk *vblk,
+		struct queue_limits *lim)
 {
-	struct request_queue *q = vblk->disk->queue;
 	struct virtio_device *vdev = vblk->vdev;
 	u32 v, blk_size, max_size, sg_elems, opt_io_size;
 	u32 max_discard_segs = 0;
@@ -1273,10 +1268,10 @@ static int virtblk_read_limits(struct virtio_blk *vblk)
 	sg_elems = min_t(u32, sg_elems, VIRTIO_BLK_MAX_SG_ELEMS - 2);
 
 	/* We can handle whatever the host told us to handle. */
-	blk_queue_max_segments(q, sg_elems);
+	lim->max_segments = sg_elems;
 
 	/* No real sector limit. */
-	blk_queue_max_hw_sectors(q, UINT_MAX);
+	lim->max_hw_sectors = UINT_MAX;
 
 	max_dma_size = virtio_max_dma_size(vdev);
 	max_size = max_dma_size > U32_MAX ? U32_MAX : max_dma_size;
@@ -1288,7 +1283,7 @@ static int virtblk_read_limits(struct virtio_blk *vblk)
 	if (!err)
 		max_size = min(max_size, v);
 
-	blk_queue_max_segment_size(q, max_size);
+	lim->max_segment_size = max_size;
 
 	/* Host can optionally specify the block size of the device */
 	err = virtio_cread_feature(vdev, VIRTIO_BLK_F_BLK_SIZE,
@@ -1303,35 +1298,34 @@ static int virtblk_read_limits(struct virtio_blk *vblk)
 			return err;
 		}
 
-		blk_queue_logical_block_size(q, blk_size);
+		lim->logical_block_size = blk_size;
 	} else
-		blk_size = queue_logical_block_size(q);
+		blk_size = lim->logical_block_size;
 
 	/* Use topology information if available */
 	err = virtio_cread_feature(vdev, VIRTIO_BLK_F_TOPOLOGY,
 				   struct virtio_blk_config, physical_block_exp,
 				   &physical_block_exp);
 	if (!err && physical_block_exp)
-		blk_queue_physical_block_size(q,
-				blk_size * (1 << physical_block_exp));
+		lim->physical_block_size = blk_size * (1 << physical_block_exp);
 
 	err = virtio_cread_feature(vdev, VIRTIO_BLK_F_TOPOLOGY,
 				   struct virtio_blk_config, alignment_offset,
 				   &alignment_offset);
 	if (!err && alignment_offset)
-		blk_queue_alignment_offset(q, blk_size * alignment_offset);
+		lim->alignment_offset = blk_size * alignment_offset;
 
 	err = virtio_cread_feature(vdev, VIRTIO_BLK_F_TOPOLOGY,
 				   struct virtio_blk_config, min_io_size,
 				   &min_io_size);
 	if (!err && min_io_size)
-		blk_queue_io_min(q, blk_size * min_io_size);
+		lim->io_min = blk_size * min_io_size;
 
 	err = virtio_cread_feature(vdev, VIRTIO_BLK_F_TOPOLOGY,
 				   struct virtio_blk_config, opt_io_size,
 				   &opt_io_size);
 	if (!err && opt_io_size)
-		blk_queue_io_opt(q, blk_size * opt_io_size);
+		lim->io_opt = blk_size * opt_io_size;
 
 	if (virtio_has_feature(vdev, VIRTIO_BLK_F_DISCARD)) {
 		virtio_cread(vdev, struct virtio_blk_config,
@@ -1339,7 +1333,7 @@ static int virtblk_read_limits(struct virtio_blk *vblk)
 
 		virtio_cread(vdev, struct virtio_blk_config,
 			     max_discard_sectors, &v);
-		blk_queue_max_discard_sectors(q, v ? v : UINT_MAX);
+		lim->max_hw_discard_sectors = v ? v : UINT_MAX;
 
 		virtio_cread(vdev, struct virtio_blk_config, max_discard_seg,
 			     &max_discard_segs);
@@ -1348,7 +1342,7 @@ static int virtblk_read_limits(struct virtio_blk *vblk)
 	if (virtio_has_feature(vdev, VIRTIO_BLK_F_WRITE_ZEROES)) {
 		virtio_cread(vdev, struct virtio_blk_config,
 			     max_write_zeroes_sectors, &v);
-		blk_queue_max_write_zeroes_sectors(q, v ? v : UINT_MAX);
+		lim->max_write_zeroes_sectors = v ? v : UINT_MAX;
 	}
 
 	/* The discard and secure erase limits are combined since the Linux
@@ -1391,7 +1385,7 @@ static int virtblk_read_limits(struct virtio_blk *vblk)
 			return -EINVAL;
 		}
 
-		blk_queue_max_secure_erase_sectors(q, v);
+		lim->max_secure_erase_sectors = v;
 
 		virtio_cread(vdev, struct virtio_blk_config,
 			     max_secure_erase_seg, &v);
@@ -1418,13 +1412,34 @@ static int virtblk_read_limits(struct virtio_blk *vblk)
 		if (!max_discard_segs)
 			max_discard_segs = sg_elems;
 
-		blk_queue_max_discard_segments(q,
-					       min(max_discard_segs, MAX_DISCARD_SEGMENTS));
+		lim->max_discard_segments =
+			min(max_discard_segs, MAX_DISCARD_SEGMENTS);
 
 		if (discard_granularity)
-			q->limits.discard_granularity = discard_granularity << SECTOR_SHIFT;
+			lim->discard_granularity =
+				discard_granularity << SECTOR_SHIFT;
 		else
-			q->limits.discard_granularity = blk_size;
+			lim->discard_granularity = blk_size;
+	}
+
+	if (virtio_has_feature(vdev, VIRTIO_BLK_F_ZONED)) {
+		u8 model;
+
+		virtio_cread(vdev, struct virtio_blk_config, zoned.model,
+			     &model);
+		switch (model) {
+		case VIRTIO_BLK_Z_NONE:
+		case VIRTIO_BLK_Z_HA:
+			/* treat host-aware devices as non-zoned */
+			return 0;
+		case VIRTIO_BLK_Z_HM:
+			err = virtblk_read_zoned_limits(vblk, lim);
+			if (err)
+				return err;
+			break;
+		default:
+			dev_err(&vdev->dev, "unsupported zone model %d\n",
+				model);
+			return -EINVAL;
+		}
 	}
 
 	return 0;
@@ -1433,7 +1448,7 @@ static int virtblk_read_limits(struct virtio_blk *vblk)
 static int virtblk_probe(struct virtio_device *vdev)
 {
 	struct virtio_blk *vblk;
-	struct request_queue *q;
+	struct queue_limits lim = { };
 	int err, index;
 	unsigned int queue_depth;
 
@@ -1493,12 +1508,15 @@ static int virtblk_probe(struct virtio_device *vdev)
 	if (err)
 		goto out_free_vq;
 
-	vblk->disk = blk_mq_alloc_disk(&vblk->tag_set, NULL, vblk);
+	err = virtblk_read_limits(vblk, &lim);
+	if (err)
+		goto out_free_tags;
+
+	vblk->disk = blk_mq_alloc_disk(&vblk->tag_set, &lim, vblk);
 	if (IS_ERR(vblk->disk)) {
 		err = PTR_ERR(vblk->disk);
 		goto out_free_tags;
 	}
-	q = vblk->disk->queue;
 
 	virtblk_name_format("vd", index, vblk->disk->disk_name, DISK_NAME_LEN);
 
@@ -1516,10 +1534,6 @@ static int virtblk_probe(struct virtio_device *vdev)
 	if (virtio_has_feature(vdev, VIRTIO_BLK_F_RO))
 		set_disk_ro(vblk->disk, 1);
 
-	err = virtblk_read_limits(vblk);
-	if (err)
-		goto out_cleanup_disk;
-
 	virtblk_update_capacity(vblk, false);
 	virtio_device_ready(vdev);
 
@@ -1527,27 +1541,11 @@ static int virtblk_probe(struct virtio_device *vdev)
 	 * All steps that follow use the VQs therefore they need to be
 	 * placed after the virtio_device_ready() call above.
 	 */
-	if (virtio_has_feature(vdev, VIRTIO_BLK_F_ZONED)) {
-		u8 model;
-
-		virtio_cread(vdev, struct virtio_blk_config, zoned.model,
-			     &model);
-		switch (model) {
-		case VIRTIO_BLK_Z_NONE:
-		case VIRTIO_BLK_Z_HA:
-			/* Present the host-aware device as non-zoned */
-			break;
-		case VIRTIO_BLK_Z_HM:
-			err = virtblk_probe_zoned_device(vdev, vblk, q);
-			if (err)
-				goto out_cleanup_disk;
-			break;
-		default:
-			dev_err(&vdev->dev, "unsupported zone model %d\n",
-				model);
-			err = -EINVAL;
+	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) && lim.zoned) {
+		blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, vblk->disk->queue);
+		err = blk_revalidate_disk_zones(vblk->disk, NULL);
+		if (err)
 			goto out_cleanup_disk;
-		}
 	}
 
 	err = device_add_disk(&vdev->dev, vblk->disk, virtblk_attr_groups);
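The resulting probe-time flow — gather limits first, allocate second — generalizes beyond virtio_blk. A condensed sketch with hypothetical names; only the blk_mq_alloc_disk() call reflects the API from this series:

	static int my_probe(struct my_dev *dev)
	{
		struct queue_limits lim = { };
		int err;

		err = my_read_limits(dev, &lim);	/* fill lim from device config */
		if (err)
			return err;

		dev->disk = blk_mq_alloc_disk(&dev->tag_set, &lim, dev);
		if (IS_ERR(dev->disk))
			return PTR_ERR(dev->disk);
		/* only post-allocation touch-ups (e.g. zone revalidation) remain */
		return 0;
	}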
Axboe Cc: "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , Paolo Bonzini , Stefan Hajnoczi , "Martin K. Petersen" , Damien Le Moal , Keith Busch , Sagi Grimberg , linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, virtualization@lists.linux.dev, Chaitanya Kulkarni , Ming Lei , Hannes Reinecke Subject: [PATCH 13/15] loop: cleanup loop_config_discard Date: Mon, 12 Feb 2024 07:46:07 +0100 Message-Id: <20240212064609.1327143-14-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240212064609.1327143-1-hch@lst.de> References: <20240212064609.1327143-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Initialize the local variables for the discard max sectors and granularity to zero as a sensible default, and then merge the calls assigning them to the queue limits. Signed-off-by: Christoph Hellwig Reviewed-by: Keith Busch Reviewed-by: Chaitanya Kulkarni Reviewed-by: Ming Lei Reviewed-by: Damien Le Moal Reviewed-by: Martin K. Petersen Reviewed-by: Hannes Reinecke --- drivers/block/loop.c | 27 ++++++++------------------- 1 file changed, 8 insertions(+), 19 deletions(-) diff --git a/drivers/block/loop.c b/drivers/block/loop.c index 3f855cc79c29f5..7abeb586942677 100644 --- a/drivers/block/loop.c +++ b/drivers/block/loop.c @@ -755,7 +755,8 @@ static void loop_config_discard(struct loop_device *lo) struct file *file = lo->lo_backing_file; struct inode *inode = file->f_mapping->host; struct request_queue *q = lo->lo_queue; - u32 granularity, max_discard_sectors; + u32 granularity = 0, max_discard_sectors = 0; + struct kstatfs sbuf; /* * If the backing device is a block device, mirror its zeroing @@ -775,29 +776,17 @@ static void loop_config_discard(struct loop_device *lo) * We use punch hole to reclaim the free space used by the * image a.k.a. discard. 
 	 */
-	} else if (!file->f_op->fallocate) {
-		max_discard_sectors = 0;
-		granularity = 0;
-
-	} else {
-		struct kstatfs sbuf;
-
+	} else if (file->f_op->fallocate && !vfs_statfs(&file->f_path, &sbuf)) {
 		max_discard_sectors = UINT_MAX >> 9;
-		if (!vfs_statfs(&file->f_path, &sbuf))
-			granularity = sbuf.f_bsize;
-		else
-			max_discard_sectors = 0;
+		granularity = sbuf.f_bsize;
 	}
 
-	if (max_discard_sectors) {
+	blk_queue_max_discard_sectors(q, max_discard_sectors);
+	blk_queue_max_write_zeroes_sectors(q, max_discard_sectors);
+	if (max_discard_sectors)
 		q->limits.discard_granularity = granularity;
-		blk_queue_max_discard_sectors(q, max_discard_sectors);
-		blk_queue_max_write_zeroes_sectors(q, max_discard_sectors);
-	} else {
+	else
 		q->limits.discard_granularity = 0;
-		blk_queue_max_discard_sectors(q, 0);
-		blk_queue_max_write_zeroes_sectors(q, 0);
-	}
 }
 
 struct loop_worker {
From patchwork Mon Feb 12 06:46:08 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13552726
From: Christoph Hellwig
To: Jens Axboe
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Paolo Bonzini, Stefan Hajnoczi, "Martin K. Petersen", Damien Le Moal, Keith Busch, Sagi Grimberg, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, virtualization@lists.linux.dev, Chaitanya Kulkarni, Ming Lei, Hannes Reinecke
Subject: [PATCH 14/15] loop: pass queue_limits to blk_mq_alloc_disk
Date: Mon, 12 Feb 2024 07:46:08 +0100
Message-Id: <20240212064609.1327143-15-hch@lst.de>
In-Reply-To: <20240212064609.1327143-1-hch@lst.de>
References: <20240212064609.1327143-1-hch@lst.de>

Pass the max_hw_sectors limit that loop sets at initialization time
directly to blk_mq_alloc_disk instead of updating it right after the
allocation.

Signed-off-by: Christoph Hellwig
Reviewed-by: Keith Busch
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Ming Lei
Reviewed-by: Damien Le Moal
Reviewed-by: Martin K. Petersen
Reviewed-by: Hannes Reinecke
---
 drivers/block/loop.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 7abeb586942677..26c8ea79086798 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1971,6 +1971,12 @@ static const struct blk_mq_ops loop_mq_ops = {

 static int loop_add(int i)
 {
+	struct queue_limits lim = {
+		/*
+		 * Random number picked from the historic block max_sectors cap.
+		 */
+		.max_hw_sectors = 2560u,
+	};
 	struct loop_device *lo;
 	struct gendisk *disk;
 	int err;
@@ -2014,16 +2020,13 @@ static int loop_add(int i)
 	if (err)
 		goto out_free_idr;

-	disk = lo->lo_disk = blk_mq_alloc_disk(&lo->tag_set, NULL, lo);
+	disk = lo->lo_disk = blk_mq_alloc_disk(&lo->tag_set, &lim, lo);
 	if (IS_ERR(disk)) {
 		err = PTR_ERR(disk);
 		goto out_cleanup_tags;
 	}
 	lo->lo_queue = lo->lo_disk->queue;

-	/* random number picked from the history block max_sectors cap */
-	blk_queue_max_hw_sectors(lo->lo_queue, 2560u);
-
 	/*
 	 * By default, we do buffer IO, so it doesn't make sense to enable
 	 * merge because the I/O submitted to backing file is handled page by
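For drivers following the same conversion, the pattern is to build the limits on the stack and hand them to blk_mq_alloc_disk() so they are applied at allocation time. A minimal sketch, with my_tag_set and my_driver_data as placeholder names rather than loop code:

	struct queue_limits lim = {
		.max_hw_sectors = 2560u,	/* same historic cap loop uses */
	};
	struct gendisk *disk;

	/* Limits are set once, before the disk is visible to I/O. */
	disk = blk_mq_alloc_disk(&my_tag_set, &lim, my_driver_data);
	if (IS_ERR(disk))
		return PTR_ERR(disk);

Passing NULL for the limits, as loop did before this patch, gives the stacked defaults; the point of the conversion is that no blk_queue_* fixups are needed after the allocation.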
From patchwork Mon Feb 12 06:46:09 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13552727
From: Christoph Hellwig
To: Jens Axboe
Cc: "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Paolo Bonzini, Stefan Hajnoczi, "Martin K. Petersen", Damien Le Moal, Keith Busch, Sagi Grimberg, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, virtualization@lists.linux.dev, Chaitanya Kulkarni, Ming Lei, Hannes Reinecke
Subject: [PATCH 15/15] loop: use the atomic queue limits update API
Date: Mon, 12 Feb 2024 07:46:09 +0100
Message-Id: <20240212064609.1327143-16-hch@lst.de>
In-Reply-To: <20240212064609.1327143-1-hch@lst.de>
References: <20240212064609.1327143-1-hch@lst.de>

Pass the default limits to blk_mq_alloc_disk and then use the
queue_limits_{start,commit}_update API to update the limits of existing
loop gendisks atomically.

Signed-off-by: Christoph Hellwig
Reviewed-by: Keith Busch
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Ming Lei
Reviewed-by: Damien Le Moal
Reviewed-by: Martin K. Petersen
Reviewed-by: Hannes Reinecke
---
 drivers/block/loop.c | 41 +++++++++++++++++++++++++----------------
 1 file changed, 25 insertions(+), 16 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 26c8ea79086798..28a95fd366fea5 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -750,11 +750,11 @@ static void loop_sysfs_exit(struct loop_device *lo)
 			   &loop_attribute_group);
 }

-static void loop_config_discard(struct loop_device *lo)
+static void loop_config_discard(struct loop_device *lo,
+		struct queue_limits *lim)
 {
 	struct file *file = lo->lo_backing_file;
 	struct inode *inode = file->f_mapping->host;
-	struct request_queue *q = lo->lo_queue;
 	u32 granularity = 0, max_discard_sectors = 0;
 	struct kstatfs sbuf;

@@ -781,12 +781,12 @@ static void loop_config_discard(struct loop_device *lo)
 		granularity = sbuf.f_bsize;
 	}

-	blk_queue_max_discard_sectors(q, max_discard_sectors);
-	blk_queue_max_write_zeroes_sectors(q, max_discard_sectors);
+	lim->max_hw_discard_sectors = max_discard_sectors;
+	lim->max_write_zeroes_sectors = max_discard_sectors;
 	if (max_discard_sectors)
-		q->limits.discard_granularity = granularity;
+		lim->discard_granularity = granularity;
 	else
-		q->limits.discard_granularity = 0;
+		lim->discard_granularity = 0;
 }

 struct loop_worker {
@@ -975,6 +975,20 @@ loop_set_status_from_info(struct loop_device *lo,
 	return 0;
 }

+static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize,
+		bool update_discard_settings)
+{
+	struct queue_limits lim;
+
+	lim = queue_limits_start_update(lo->lo_queue);
+	lim.logical_block_size = bsize;
+	lim.physical_block_size = bsize;
+	lim.io_min = bsize;
+	if (update_discard_settings)
+		loop_config_discard(lo, &lim);
+	return queue_limits_commit_update(lo->lo_queue, &lim);
+}
+
 static int loop_configure(struct loop_device *lo, blk_mode_t mode,
 			  struct block_device *bdev,
 			  const struct loop_config *config)
@@ -1072,11 +1086,10 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
 	else
 		bsize = 512;

-	blk_queue_logical_block_size(lo->lo_queue, bsize);
-	blk_queue_physical_block_size(lo->lo_queue, bsize);
-	blk_queue_io_min(lo->lo_queue, bsize);
+	error = loop_reconfigure_limits(lo, bsize, true);
+	if (WARN_ON_ONCE(error))
+		goto out_unlock;

-	loop_config_discard(lo);
 	loop_update_rotational(lo);
 	loop_update_dio(lo);
 	loop_sysfs_init(lo);
@@ -1143,9 +1156,7 @@ static void __loop_clr_fd(struct loop_device *lo, bool release)
 	lo->lo_offset = 0;
 	lo->lo_sizelimit = 0;
 	memset(lo->lo_file_name, 0, LO_NAME_SIZE);
-	blk_queue_logical_block_size(lo->lo_queue, 512);
-	blk_queue_physical_block_size(lo->lo_queue, 512);
-	blk_queue_io_min(lo->lo_queue, 512);
+	loop_reconfigure_limits(lo, 512, false);
 	invalidate_disk(lo->lo_disk);
 	loop_sysfs_exit(lo);
 	/* let user-space know about this change */
@@ -1477,9 +1488,7 @@ static int loop_set_block_size(struct loop_device *lo, unsigned long arg)
 	invalidate_bdev(lo->lo_device);

 	blk_mq_freeze_queue(lo->lo_queue);
-	blk_queue_logical_block_size(lo->lo_queue, arg);
-	blk_queue_physical_block_size(lo->lo_queue, arg);
-	blk_queue_io_min(lo->lo_queue, arg);
+	err = loop_reconfigure_limits(lo, arg, false);
 	loop_update_dio(lo);
 	blk_mq_unfreeze_queue(lo->lo_queue);
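The new loop_reconfigure_limits() helper above is an instance of the general snapshot-modify-commit idiom the series introduces. In outline (a sketch abstracted from the hunks above, with q and new_bsize as placeholder names for any request_queue and block size):

	struct queue_limits lim;
	int err;

	/* Snapshot the current limits; locks out concurrent updaters. */
	lim = queue_limits_start_update(q);
	lim.logical_block_size = new_bsize;
	lim.physical_block_size = new_bsize;
	lim.io_min = new_bsize;
	/* Validate and publish the whole set of limits in one step. */
	err = queue_limits_commit_update(q, &lim);

The advantage over the per-field blk_queue_* setters is that related limits are checked for consistency and become visible together, and the commit can fail cleanly, which is why loop_configure() now has an error path for it.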