From patchwork Wed Feb 28 22:56:40 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13576094
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org,
    linux-raid@vger.kernel.org
Subject: [PATCH 01/14] block: add a queue_limits_set helper
Date: Wed, 28 Feb 2024 14:56:40 -0800
Message-Id: <20240228225653.947152-2-hch@lst.de>
In-Reply-To: <20240228225653.947152-1-hch@lst.de>
References: <20240228225653.947152-1-hch@lst.de>

Add a small wrapper around queue_limits_commit_update for stacking
drivers that don't want to update existing limits, but set an entirely
new set.

Signed-off-by: Christoph Hellwig

---
 block/blk-settings.c   | 18 ++++++++++++++++++
 include/linux/blkdev.h |  1 +
 2 files changed, 19 insertions(+)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index b6bbe683d218fa..1989a177be201b 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -266,6 +266,24 @@ int queue_limits_commit_update(struct request_queue *q,
 }
 EXPORT_SYMBOL_GPL(queue_limits_commit_update);
 
+/**
+ * queue_limits_commit_set - apply queue limits to queue
+ * @q: queue to update
+ * @lim: limits to apply
+ *
+ * Apply the limits in @lim that were freshly initialized to @q.
+ * To update existing limits use queue_limits_start_update() and
+ * queue_limits_commit_update() instead.
+ *
+ * Returns 0 if successful, else a negative error code.
+ */
+int queue_limits_set(struct request_queue *q, struct queue_limits *lim)
+{
+	mutex_lock(&q->limits_lock);
+	return queue_limits_commit_update(q, lim);
+}
+EXPORT_SYMBOL_GPL(queue_limits_set);
+
 /**
  * blk_queue_bounce_limit - set bounce buffer limit for queue
  * @q: the request queue for the device
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index a14ea934413850..dd510ad7ce4b45 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -889,6 +889,7 @@ queue_limits_start_update(struct request_queue *q)
 }
 int queue_limits_commit_update(struct request_queue *q,
 		struct queue_limits *lim);
+int queue_limits_set(struct request_queue *q, struct queue_limits *lim);
 
 /*
  * Access functions for manipulating queue properties
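For illustration, the call pattern this enables is sketched below as a
hedged, minimal example: a stacking driver builds a fresh limits
structure and applies it in one validated step. The function name and
field values are hypothetical placeholders; only
blk_set_stacking_limits() and the new queue_limits_set() are real API
here (assumes <linux/blkdev.h>).

	/* hedged sketch, not part of the patch */
	static int example_set_fresh_limits(struct request_queue *q)
	{
		struct queue_limits lim;

		/* start from permissive stacking defaults */
		blk_set_stacking_limits(&lim);
		lim.io_min = 4096;		/* hypothetical value */
		lim.max_hw_sectors = 1024;	/* hypothetical value */

		/* validate and apply atomically under q->limits_lock */
		return queue_limits_set(q, &lim);
	}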
From patchwork Wed Feb 28 22:56:41 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13576095
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org,
    linux-raid@vger.kernel.org
Subject: [PATCH 02/14] block: add a queue_limits_stack_bdev helper
Date: Wed, 28 Feb 2024 14:56:41 -0800
Message-Id: <20240228225653.947152-3-hch@lst.de>
In-Reply-To: <20240228225653.947152-1-hch@lst.de>
References: <20240228225653.947152-1-hch@lst.de>

Add a small wrapper around blk_stack_limits that allows passing a bdev
for the bottom device and prints an error in case of a misaligned
device.  The name fits into the new queue limits API and the intent is
to eventually replace disk_stack_limits.

Signed-off-by: Christoph Hellwig

---
 block/blk-settings.c   | 25 +++++++++++++++++++++++++
 include/linux/blkdev.h |  2 ++
 2 files changed, 27 insertions(+)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 1989a177be201b..865fe4ebbf9b83 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -891,6 +891,31 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 }
 EXPORT_SYMBOL(blk_stack_limits);
 
+/**
+ * queue_limits_stack_bdev - adjust queue_limits for stacked devices
+ * @t: the stacking driver limits (top device)
+ * @bdev: the underlying block device (bottom)
+ * @offset: offset to beginning of data within component device
+ * @pfx: prefix to use for warnings logged
+ *
+ * Description:
+ *    This function is used by stacking drivers like MD and DM to ensure
+ *    that all component devices have compatible block sizes and
+ *    alignments.  The stacking driver must provide a queue_limits
+ *    struct (top) and then iteratively call the stacking function for
+ *    all component (bottom) devices.  The stacking function will
+ *    attempt to combine the values and ensure proper alignment.
+ */
+void queue_limits_stack_bdev(struct queue_limits *t, struct block_device *bdev,
+		sector_t offset, const char *pfx)
+{
+	if (blk_stack_limits(t, &bdev_get_queue(bdev)->limits,
+			get_start_sect(bdev) + offset))
+		pr_notice("%s: Warning: Device %pg is misaligned\n",
+			pfx, bdev);
+}
+EXPORT_SYMBOL_GPL(queue_limits_stack_bdev);
+
 /**
  * disk_stack_limits - adjust queue limits for stacked drivers
  * @disk: MD/DM gendisk (top)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index dd510ad7ce4b45..285e82723d641f 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -924,6 +924,8 @@ extern void blk_set_queue_depth(struct request_queue *q, unsigned int depth);
 extern void blk_set_stacking_limits(struct queue_limits *lim);
 extern int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 			    sector_t offset);
+void queue_limits_stack_bdev(struct queue_limits *t, struct block_device *bdev,
+		sector_t offset, const char *pfx);
 extern void disk_stack_limits(struct gendisk *disk, struct block_device *bdev,
 			      sector_t offset);
 extern void blk_queue_update_dma_pad(struct request_queue *, unsigned int);
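As a hedged sketch of the intended iteration, a stacking driver would
fold each component device into its limits roughly as follows; the
surrounding function, the array of component devices, and the zero
offset are illustrative, only queue_limits_stack_bdev() itself is from
this patch.

	/* hedged sketch, not part of the patch */
	static void example_stack_components(struct queue_limits *lim,
			struct block_device **parts, int nr_parts,
			const char *disk_name)
	{
		int i;

		/* combine every bottom device into the top limits */
		for (i = 0; i < nr_parts; i++)
			queue_limits_stack_bdev(lim, parts[i],
					0 /* data offset, illustrative */,
					disk_name);
	}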
From patchwork Wed Feb 28 22:56:42 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13576096
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org,
    linux-raid@vger.kernel.org
Subject: [PATCH 03/14] dm: use queue_limits_set
Date: Wed, 28 Feb 2024 14:56:42 -0800
Message-Id: <20240228225653.947152-4-hch@lst.de>
In-Reply-To: <20240228225653.947152-1-hch@lst.de>
References: <20240228225653.947152-1-hch@lst.de>

Use queue_limits_set, which validates the limits and takes care of
updating the readahead settings, instead of directly assigning the
limits to the queue.  For that, make sure all limits are actually
updated before the assignment.

Signed-off-by: Christoph Hellwig
Reviewed-by: Mike Snitzer

---
 block/blk-settings.c  |  2 +-
 drivers/md/dm-table.c | 27 ++++++++++++---------------
 2 files changed, 13 insertions(+), 16 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 865fe4ebbf9b83..13865a9f89726c 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -267,7 +267,7 @@ int queue_limits_commit_update(struct request_queue *q,
 EXPORT_SYMBOL_GPL(queue_limits_commit_update);
 
 /**
- * queue_limits_commit_set - apply queue limits to queue
+ * queue_limits_set - apply queue limits to queue
  * @q: queue to update
  * @lim: limits to apply
  *
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 41f1d731ae5ac2..88114719fe187a 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1963,26 +1963,27 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	bool wc = false, fua = false;
 	int r;
 
-	/*
-	 * Copy table's limits to the DM device's request_queue
-	 */
-	q->limits = *limits;
-
 	if (dm_table_supports_nowait(t))
 		blk_queue_flag_set(QUEUE_FLAG_NOWAIT, q);
 	else
 		blk_queue_flag_clear(QUEUE_FLAG_NOWAIT, q);
 
 	if (!dm_table_supports_discards(t)) {
-		q->limits.max_discard_sectors = 0;
-		q->limits.max_hw_discard_sectors = 0;
-		q->limits.discard_granularity = 0;
-		q->limits.discard_alignment = 0;
-		q->limits.discard_misaligned = 0;
+		limits->max_hw_discard_sectors = 0;
+		limits->discard_granularity = 0;
+		limits->discard_alignment = 0;
+		limits->discard_misaligned = 0;
 	}
 
+	if (!dm_table_supports_write_zeroes(t))
+		limits->max_write_zeroes_sectors = 0;
+
 	if (!dm_table_supports_secure_erase(t))
-		q->limits.max_secure_erase_sectors = 0;
+		limits->max_secure_erase_sectors = 0;
+
+	r = queue_limits_set(q, limits);
+	if (r)
+		return r;
 
 	if (dm_table_supports_flush(t, (1UL << QUEUE_FLAG_WC))) {
 		wc = true;
@@ -2007,9 +2008,6 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	else
 		blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
 
-	if (!dm_table_supports_write_zeroes(t))
-		q->limits.max_write_zeroes_sectors = 0;
-
 	dm_table_verify_integrity(t);
 
 	/*
@@ -2047,7 +2045,6 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	}
 	dm_update_crypto_profile(q, t);
-	disk_update_readahead(t->md->disk);
 
 	/*
	 * Check for request-based device is left to
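The ordering is the point of the conversion: every limit the table
wants to override must be written into *limits before the single
queue_limits_set() call. A hedged, self-contained sketch of that shape
(the wrapper and the boolean predicate are hypothetical):

	/* hedged sketch, not part of the patch */
	static int example_apply_table_limits(struct request_queue *q,
			struct queue_limits *limits, bool supports_discards)
	{
		/* adjust the staged limits first ... */
		if (!supports_discards)
			limits->max_hw_discard_sectors = 0;

		/* ... then validate and apply them in one step */
		return queue_limits_set(q, limits);
	}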
From patchwork Wed Feb 28 22:56:43 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13576097
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org,
    linux-raid@vger.kernel.org
Subject: [PATCH 04/14] md: add a mddev_trace_remap helper
Date: Wed, 28 Feb 2024 14:56:43 -0800
Message-Id: <20240228225653.947152-5-hch@lst.de>
In-Reply-To: <20240228225653.947152-1-hch@lst.de>
References: <20240228225653.947152-1-hch@lst.de>

Add a helper to trace bio remapping that hides some argument
dereferences and the check for a DM-mapped MD device.

Signed-off-by: Christoph Hellwig

---
 drivers/md/md.c     |  6 +-----
 drivers/md/md.h     |  8 ++++++++
 drivers/md/raid0.c  |  5 +----
 drivers/md/raid1.c  | 11 ++---------
 drivers/md/raid10.c | 10 ++--------
 drivers/md/raid5.c  | 14 +++-----------
 6 files changed, 17 insertions(+), 37 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 75266c34b1f99b..ccbc66ce8c4d13 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -65,7 +65,6 @@
 #include
 #include
-#include <trace/events/block.h>
 #include "md.h"
 #include "md-bitmap.h"
 #include "md-cluster.h"
@@ -8663,10 +8662,7 @@ void md_submit_discard_bio(struct mddev *mddev, struct md_rdev *rdev,
 	bio_chain(discard_bio, bio);
 	bio_clone_blkg_association(discard_bio, bio);
-	if (mddev->gendisk)
-		trace_block_bio_remap(discard_bio,
-				disk_devt(mddev->gendisk),
-				bio->bi_iter.bi_sector);
+	mddev_trace_remap(mddev, discard_bio, bio->bi_iter.bi_sector);
 	submit_bio_noacct(discard_bio);
 }
 EXPORT_SYMBOL_GPL(md_submit_discard_bio);
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 8d881cc597992f..6465411d3afd5d 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include <trace/events/block.h>
 #include "md-cluster.h"
 
 #define MaxSector (~(sector_t)0)
@@ -863,4 +864,11 @@ int do_md_run(struct mddev *mddev);
 
 extern const struct block_device_operations md_fops;
 
+static inline void mddev_trace_remap(struct mddev *mddev, struct bio *bio,
+		sector_t sector)
+{
+	if (mddev->gendisk)
+		trace_block_bio_remap(bio, disk_devt(mddev->gendisk), sector);
+}
+
 #endif /* _MD_MD_H */
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index c50a7abda744ad..aff094de974347 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -578,10 +578,7 @@ static void raid0_map_submit_bio(struct mddev *mddev, struct bio *bio)
 	bio_set_dev(bio, tmp_dev->bdev);
 	bio->bi_iter.bi_sector = sector + zone->dev_start +
 		tmp_dev->data_offset;
-
-	if (mddev->gendisk)
-		trace_block_bio_remap(bio, disk_devt(mddev->gendisk),
-				      bio_sector);
+	mddev_trace_remap(mddev, bio, bio_sector);
 	mddev_check_write_zeroes(mddev, bio);
 	submit_bio_noacct(bio);
 }
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 286f8b16c7bde7..cb64477fa89feb 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1320,11 +1320,7 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio,
 	    test_bit(R1BIO_FailFast, &r1_bio->state))
 		read_bio->bi_opf |= MD_FAILFAST;
 	read_bio->bi_private = r1_bio;
-
-	if (mddev->gendisk)
-		trace_block_bio_remap(read_bio, disk_devt(mddev->gendisk),
-				      r1_bio->sector);
-
+	mddev_trace_remap(mddev, read_bio, r1_bio->sector);
 	submit_bio_noacct(read_bio);
 }
 
@@ -1557,10 +1553,7 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
 	mbio->bi_private = r1_bio;
 
 	atomic_inc(&r1_bio->remaining);
-
-	if (mddev->gendisk)
-		trace_block_bio_remap(mbio, disk_devt(mddev->gendisk),
-				      r1_bio->sector);
+	mddev_trace_remap(mddev, mbio, r1_bio->sector);
 	/* flush_pending_writes() needs access to the rdev so...*/
 	mbio->bi_bdev = (void *)rdev;
 	if (!raid1_add_bio_to_plug(mddev, mbio, raid1_unplug, disks)) {
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 7412066ea22c7a..ae90cca8335b50 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1249,10 +1249,7 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio,
 	    test_bit(R10BIO_FailFast, &r10_bio->state))
 		read_bio->bi_opf |= MD_FAILFAST;
 	read_bio->bi_private = r10_bio;
-
-	if (mddev->gendisk)
-		trace_block_bio_remap(read_bio, disk_devt(mddev->gendisk),
-				      r10_bio->sector);
+	mddev_trace_remap(mddev, read_bio, r10_bio->sector);
 	submit_bio_noacct(read_bio);
 	return;
 }
@@ -1288,10 +1285,7 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio,
 	    && enough(conf, devnum))
 		mbio->bi_opf |= MD_FAILFAST;
 	mbio->bi_private = r10_bio;
-
-	if (conf->mddev->gendisk)
-		trace_block_bio_remap(mbio, disk_devt(conf->mddev->gendisk),
-				      r10_bio->sector);
+	mddev_trace_remap(mddev, mbio, r10_bio->sector);
 	/* flush_pending_writes() needs access to the rdev so...*/
 	mbio->bi_bdev = (void *)rdev;
 
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 14f2cf75abbd72..c7da69c6e7c50c 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -1295,10 +1295,7 @@ static void ops_run_io(struct stripe_head *sh, struct stripe_head_state *s)
 			if (rrdev)
 				set_bit(R5_DOUBLE_LOCKED, &sh->dev[i].flags);
 
-			if (conf->mddev->gendisk)
-				trace_block_bio_remap(bi,
-						disk_devt(conf->mddev->gendisk),
-						sh->dev[i].sector);
+			mddev_trace_remap(conf->mddev, bi, sh->dev[i].sector);
 			if (should_defer && op_is_write(op))
 				bio_list_add(&pending_bios, bi);
 			else
@@ -1342,10 +1339,7 @@ static void ops_run_io(struct stripe_head *sh, struct stripe_head_state *s)
 			 */
 			if (op == REQ_OP_DISCARD)
 				rbi->bi_vcnt = 0;
-			if (conf->mddev->gendisk)
-				trace_block_bio_remap(rbi,
-						disk_devt(conf->mddev->gendisk),
-						sh->dev[i].sector);
+			mddev_trace_remap(conf->mddev, rbi, sh->dev[i].sector);
 			if (should_defer && op_is_write(op))
 				bio_list_add(&pending_bios, rbi);
 			else
@@ -5530,9 +5524,7 @@ static int raid5_read_one_chunk(struct mddev *mddev, struct bio *raid_bio)
 		spin_unlock_irq(&conf->device_lock);
 	}
 
-	if (mddev->gendisk)
-		trace_block_bio_remap(align_bio, disk_devt(mddev->gendisk),
-				      raid_bio->bi_iter.bi_sector);
+	mddev_trace_remap(mddev, align_bio, raid_bio->bi_iter.bi_sector);
 	submit_bio_noacct(align_bio);
 	return 1;
 }
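A hedged sketch of the resulting submit-path shape in a personality
(the wrapper function is illustrative, only mddev_trace_remap() is
from this patch):

	/* hedged sketch, not part of the patch */
	static void example_map_and_submit(struct mddev *mddev,
			struct bio *bio, sector_t orig_sector)
	{
		/* bio has already been retargeted to a component device */
		mddev_trace_remap(mddev, bio, orig_sector);
		submit_bio_noacct(bio);
	}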
From patchwork Wed Feb 28 22:56:44 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13576102
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org,
    linux-raid@vger.kernel.org
Subject: [PATCH 05/14] md: add a mddev_add_trace_msg helper
Date: Wed, 28 Feb 2024 14:56:44 -0800
Message-Id: <20240228225653.947152-6-hch@lst.de>
In-Reply-To: <20240228225653.947152-1-hch@lst.de>
References: <20240228225653.947152-1-hch@lst.de>

Add a small wrapper around blk_add_trace_msg that hides some argument
dereferences and the check for a DM-mapped MD device.

Signed-off-by: Christoph Hellwig

---
 drivers/md/md-bitmap.c |  9 +++------
 drivers/md/md.c        |  3 +--
 drivers/md/md.h        |  6 ++++++
 drivers/md/raid1.c     | 10 ++++------
 drivers/md/raid10.c    | 15 +++++++--------
 drivers/md/raid5.c     | 14 +++++++-------
 6 files changed, 28 insertions(+), 29 deletions(-)

diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
index 9672f75c30503c..a0935a3d66c2d7 100644
--- a/drivers/md/md-bitmap.c
+++ b/drivers/md/md-bitmap.c
@@ -1043,9 +1043,8 @@ void md_bitmap_unplug(struct bitmap *bitmap)
 		if (dirty || need_write) {
 			if (!writing) {
 				md_bitmap_wait_writes(bitmap);
-				if (bitmap->mddev->queue)
-					blk_add_trace_msg(bitmap->mddev->queue,
-							  "md bitmap_unplug");
+				mddev_add_trace_msg(bitmap->mddev,
+						"md bitmap_unplug");
 			}
 			clear_page_attr(bitmap, i, BITMAP_PAGE_PENDING);
 			filemap_write_page(bitmap, i, false);
@@ -1316,9 +1315,7 @@ void md_bitmap_daemon_work(struct mddev *mddev)
 	}
 	bitmap->allclean = 1;
 
-	if (bitmap->mddev->queue)
-		blk_add_trace_msg(bitmap->mddev->queue,
-				  "md bitmap_daemon_work");
+	mddev_add_trace_msg(bitmap->mddev, "md bitmap_daemon_work");
 
 	/* Any file-page which is PENDING now needs to be written.
	 * So set NEEDWRITE now, then after we make any last-minute changes
diff --git a/drivers/md/md.c b/drivers/md/md.c
index ccbc66ce8c4d13..409e57242b27f6 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -2847,8 +2847,7 @@ void md_update_sb(struct mddev *mddev, int force_change)
 	pr_debug("md: updating %s RAID superblock on device (in sync %d)\n",
 		 mdname(mddev), mddev->in_sync);
 
-	if (mddev->queue)
-		blk_add_trace_msg(mddev->queue, "md md_update_sb");
+	mddev_add_trace_msg(mddev, "md md_update_sb");
 rewrite:
 	md_bitmap_update_sb(mddev->bitmap);
 	rdev_for_each(rdev, mddev) {
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 6465411d3afd5d..91ee8951fc8dcb 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -871,4 +871,10 @@ static inline void mddev_trace_remap(struct mddev *mddev, struct bio *bio,
 		trace_block_bio_remap(bio, disk_devt(mddev->gendisk), sector);
 }
 
+#define mddev_add_trace_msg(mddev, fmt, args...)			\
+do {									\
+	if ((mddev)->gendisk)						\
+		blk_add_trace_msg((mddev)->queue, fmt, ##args);		\
+} while (0)
+
 #endif /* _MD_MD_H */
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index cb64477fa89feb..3f47fe828b21bb 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -46,9 +46,6 @@
 static void allow_barrier(struct r1conf *conf, sector_t sector_nr);
 static void lower_barrier(struct r1conf *conf, sector_t sector_nr);
 
-#define raid1_log(md, fmt, args...) \
-	do { if ((md)->queue) blk_add_trace_msg((md)->queue, "raid1 " fmt, ##args); } while (0)
-
 #define RAID_1_10_NAME "raid1"
 #include "raid1-10.c"
 
@@ -1098,7 +1095,7 @@ static void freeze_array(struct r1conf *conf, int extra)
 	 */
 	spin_lock_irq(&conf->resync_lock);
 	conf->array_frozen = 1;
-	raid1_log(conf->mddev, "wait freeze");
+	mddev_add_trace_msg(conf->mddev, "raid1 wait freeze");
 	wait_event_lock_irq_cmd(
 		conf->wait_barrier,
 		get_unqueued_pending(conf) == extra,
@@ -1287,7 +1284,7 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio,
 		 * Reading from a write-mostly device must take care not to
 		 * over-take any writes that are 'behind'
 		 */
-		raid1_log(mddev, "wait behind writes");
+		mddev_add_trace_msg(mddev, "raid1 wait behind writes");
 		wait_event(bitmap->behind_wait,
 			   atomic_read(&bitmap->behind_writes) == 0);
 	}
@@ -1470,7 +1467,8 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
 			bio_wouldblock_error(bio);
 			return;
 		}
-		raid1_log(mddev, "wait rdev %d blocked", blocked_rdev->raid_disk);
+		mddev_add_trace_msg(mddev, "raid1 wait rdev %d blocked",
+				blocked_rdev->raid_disk);
 		md_wait_for_blocked_rdev(blocked_rdev, mddev);
 		wait_barrier(conf, bio->bi_iter.bi_sector, false);
 		goto retry_write;
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index ae90cca8335b50..b6c5194c22308d 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -76,9 +76,6 @@ static void reshape_request_write(struct mddev *mddev, struct r10bio *r10_bio);
 static void end_reshape_write(struct bio *bio);
 static void end_reshape(struct r10conf *conf);
 
-#define raid10_log(md, fmt, args...) \
-	do { if ((md)->queue) blk_add_trace_msg((md)->queue, "raid10 " fmt, ##args); } while (0)
-
 #include "raid1-10.c"
 
 #define NULL_CMD
@@ -1033,7 +1030,7 @@ static bool wait_barrier(struct r10conf *conf, bool nowait)
 			ret = false;
 		} else {
 			conf->nr_waiting++;
-			raid10_log(conf->mddev, "wait barrier");
+			mddev_add_trace_msg(conf->mddev, "raid10 wait barrier");
 			wait_event_barrier(conf, stop_waiting_barrier(conf));
 			conf->nr_waiting--;
 		}
@@ -1152,7 +1149,7 @@ static bool regular_request_wait(struct mddev *mddev, struct r10conf *conf,
 			bio_wouldblock_error(bio);
 			return false;
 		}
-		raid10_log(conf->mddev, "wait reshape");
+		mddev_add_trace_msg(conf->mddev, "raid10 wait reshape");
 		wait_event(conf->wait_barrier,
 			   conf->reshape_progress <= bio->bi_iter.bi_sector ||
 			   conf->reshape_progress >= bio->bi_iter.bi_sector +
@@ -1354,8 +1351,9 @@ static void wait_blocked_dev(struct mddev *mddev, struct r10bio *r10_bio)
 	if (unlikely(blocked_rdev)) {
 		/* Have to wait for this device to get unblocked, then retry */
 		allow_barrier(conf);
-		raid10_log(conf->mddev, "%s wait rdev %d blocked",
-				__func__, blocked_rdev->raid_disk);
+		mddev_add_trace_msg(conf->mddev,
+				"raid10 %s wait rdev %d blocked",
+				__func__, blocked_rdev->raid_disk);
 		md_wait_for_blocked_rdev(blocked_rdev, mddev);
 		wait_barrier(conf, false);
 		goto retry_wait;
@@ -1410,7 +1408,8 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 			bio_wouldblock_error(bio);
 			return;
 		}
-		raid10_log(conf->mddev, "wait reshape metadata");
+		mddev_add_trace_msg(conf->mddev,
+				"raid10 wait reshape metadata");
 		wait_event(mddev->sb_wait,
 			   !test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags));
 
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index c7da69c6e7c50c..969df5c584653e 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -4199,10 +4199,9 @@ static int handle_stripe_dirtying(struct r5conf *conf,
 	set_bit(STRIPE_HANDLE, &sh->state);
 	if ((rmw < rcw || (rmw == rcw && conf->rmw_level == PARITY_PREFER_RMW)) && rmw > 0) {
 		/* prefer read-modify-write, but need to get some data */
-		if (conf->mddev->queue)
-			blk_add_trace_msg(conf->mddev->queue,
-					  "raid5 rmw %llu %d",
-					  (unsigned long long)sh->sector, rmw);
+		mddev_add_trace_msg(conf->mddev, "raid5 rmw %llu %d",
+				sh->sector, rmw);
+
 		for (i = disks; i--; ) {
 			struct r5dev *dev = &sh->dev[i];
 			if (test_bit(R5_InJournal, &dev->flags) &&
@@ -4280,9 +4279,10 @@ static int handle_stripe_dirtying(struct r5conf *conf,
 		}
 	}
 	if (rcw && conf->mddev->queue)
-		blk_add_trace_msg(conf->mddev->queue, "raid5 rcw %llu %d %d %d",
-				  (unsigned long long)sh->sector,
-				  rcw, qread, test_bit(STRIPE_DELAYED, &sh->state));
+		mddev_add_trace_msg(conf->mddev,
+				"raid5 rcw %llu %d %d %d",
+				sh->sector, rcw, qread,
+				test_bit(STRIPE_DELAYED, &sh->state));
 	}
 
 	if (rcw > disks && rmw > disks &&
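The wrapper stays a macro rather than an inline function so the
printf-style format and arguments pass straight through to
blk_add_trace_msg(). A hedged usage sketch (the wrapper function and
message text are illustrative):

	/* hedged sketch, not part of the patch */
	static void example_trace_wait(struct mddev *mddev, int disk)
	{
		/* expands to a no-op when ->gendisk is NULL (DM-mapped) */
		mddev_add_trace_msg(mddev, "example wait rdev %d blocked",
				disk);
	}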
From patchwork Wed Feb 28 22:56:45 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13576099
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org,
    linux-raid@vger.kernel.org
Subject: [PATCH 06/14] md: add a mddev_is_dm helper
Date: Wed, 28 Feb 2024 14:56:45 -0800
Message-Id: <20240228225653.947152-7-hch@lst.de>
In-Reply-To: <20240228225653.947152-1-hch@lst.de>
References: <20240228225653.947152-1-hch@lst.de>

Add a helper to check for a DM-mapped MD device instead of using the
obfuscated ->gendisk or ->queue NULL checks.

Signed-off-by: Christoph Hellwig

---
 drivers/md/md.c     | 15 +++++++--------
 drivers/md/md.h     | 12 ++++++++++--
 drivers/md/raid0.c  |  2 +-
 drivers/md/raid1.c  | 13 +++++--------
 drivers/md/raid10.c | 10 +++++-----
 drivers/md/raid5.c  | 21 ++++++++++-----------
 6 files changed, 38 insertions(+), 35 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 409e57242b27f6..01a219b2559bdb 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -2401,7 +2401,7 @@ int md_integrity_register(struct mddev *mddev)
 	if (list_empty(&mddev->disks))
 		return 0; /* nothing to do */
-	if (!mddev->gendisk || blk_get_integrity(mddev->gendisk))
+	if (mddev_is_dm(mddev) || blk_get_integrity(mddev->gendisk))
 		return 0; /* shouldn't register, or already is */
 	rdev_for_each(rdev, mddev) {
 		/* skip spares and non-functional disks */
@@ -2454,7 +2454,7 @@ int md_integrity_add_rdev(struct md_rdev *rdev, struct mddev *mddev)
 {
 	struct blk_integrity *bi_mddev;
 
-	if (!mddev->gendisk)
+	if (mddev_is_dm(mddev))
 		return 0;
 
 	bi_mddev = blk_get_integrity(mddev->gendisk);
@@ -5923,7 +5923,7 @@ int md_run(struct mddev *mddev)
 		invalidate_bdev(rdev->bdev);
 		if (mddev->ro != MD_RDONLY && rdev_read_only(rdev)) {
 			mddev->ro = MD_RDONLY;
-			if (mddev->gendisk)
+			if (!mddev_is_dm(mddev))
 				set_disk_ro(mddev->gendisk, 1);
 		}
 
@@ -6082,7 +6082,7 @@ int md_run(struct mddev *mddev)
 		}
 	}
 
-	if (mddev->queue) {
+	if (!mddev_is_dm(mddev)) {
 		bool nonrot = true;
 
 		rdev_for_each(rdev, mddev) {
@@ -6338,7 +6338,7 @@ static void mddev_detach(struct mddev *mddev)
 		mddev->pers->quiesce(mddev, 0);
 	}
 	md_unregister_thread(mddev, &mddev->thread);
-	if (mddev->queue)
+	if (!mddev_is_dm(mddev))
 		blk_sync_queue(mddev->queue); /* the unplug fn references 'conf'*/
 }
 
@@ -7304,10 +7304,9 @@ static int update_size(struct mddev *mddev, sector_t num_sectors)
 	if (!rv) {
 		if (mddev_is_clustered(mddev))
 			md_cluster_ops->update_size(mddev, old_dev_sectors);
-		else if (mddev->queue) {
+		else if (!mddev_is_dm(mddev))
 			set_capacity_and_notify(mddev->gendisk,
 						mddev->array_sectors);
-		}
 	}
 	return rv;
 }
@@ -9137,7 +9136,7 @@ void md_do_sync(struct md_thread *thread)
 			mddev->delta_disks > 0 &&
 			mddev->pers->finish_reshape &&
 			mddev->pers->size &&
-			mddev->queue) {
+			!mddev_is_dm(mddev)) {
 		mddev_lock_nointr(mddev);
 		md_set_array_sectors(mddev, mddev->pers->size(mddev, 0, 0));
 		mddev_unlock(mddev);
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 91ee8951fc8dcb..b08e655f8bec41 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -864,16 +864,24 @@ int do_md_run(struct mddev *mddev);
 
 extern const struct block_device_operations md_fops;
 
+/*
+ * MD devices can be used underneath by DM, in which case ->gendisk is NULL.
+ */
+static inline bool mddev_is_dm(struct mddev *mddev)
+{
+	return !mddev->gendisk;
+}
+
 static inline void mddev_trace_remap(struct mddev *mddev, struct bio *bio,
 		sector_t sector)
 {
-	if (mddev->gendisk)
+	if (!mddev_is_dm(mddev))
 		trace_block_bio_remap(bio, disk_devt(mddev->gendisk), sector);
 }
 
 #define mddev_add_trace_msg(mddev, fmt, args...)			\
 do {									\
-	if ((mddev)->gendisk)						\
+	if (!mddev_is_dm(mddev))					\
 		blk_add_trace_msg((mddev)->queue, fmt, ##args);		\
 } while (0)
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index aff094de974347..9f787ae77ede88 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -399,7 +399,7 @@ static int raid0_run(struct mddev *mddev)
 		mddev->private = conf;
 	}
 	conf = mddev->private;
-	if (mddev->queue) {
+	if (!mddev_is_dm(mddev)) {
 		struct md_rdev *rdev;
 
 		blk_queue_max_hw_sectors(mddev->queue, mddev->chunk_sectors);
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 3f47fe828b21bb..3b1227f67a6d61 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1782,7 +1782,7 @@ static int raid1_add_disk(struct mddev *mddev, struct md_rdev *rdev)
 	for (mirror = first; mirror <= last; mirror++) {
 		p = conf->mirrors + mirror;
 		if (!p->rdev) {
-			if (mddev->gendisk)
+			if (!mddev_is_dm(mddev))
 				disk_stack_limits(mddev->gendisk, rdev->bdev,
 						  rdev->data_offset << 9);
 
@@ -3109,14 +3109,11 @@ static int raid1_run(struct mddev *mddev)
 	if (IS_ERR(conf))
 		return PTR_ERR(conf);
 
-	if (mddev->queue)
+	if (!mddev_is_dm(mddev)) {
 		blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
-
-	rdev_for_each(rdev, mddev) {
-		if (!mddev->gendisk)
-			continue;
-		disk_stack_limits(mddev->gendisk, rdev->bdev,
-				  rdev->data_offset << 9);
+		rdev_for_each(rdev, mddev)
+			disk_stack_limits(mddev->gendisk, rdev->bdev,
+					  rdev->data_offset << 9);
 	}
 
 	mddev->degraded = 0;
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index b6c5194c22308d..95fa9e728f95a9 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -2124,7 +2124,7 @@ static int raid10_add_disk(struct mddev *mddev, struct md_rdev *rdev)
 			continue;
 		}
 
-		if (mddev->gendisk)
+		if (!mddev_is_dm(mddev))
 			disk_stack_limits(mddev->gendisk, rdev->bdev,
 					  rdev->data_offset << 9);
 
@@ -2144,7 +2144,7 @@ static int raid10_add_disk(struct mddev *mddev, struct md_rdev *rdev)
 		set_bit(Replacement, &rdev->flags);
 		rdev->raid_disk = repl_slot;
 		err = 0;
-		if (mddev->gendisk)
+		if (!mddev_is_dm(mddev))
 			disk_stack_limits(mddev->gendisk, rdev->bdev,
 					  rdev->data_offset << 9);
 		conf->fullsync = 1;
@@ -4040,7 +4040,7 @@ static int raid10_run(struct mddev *mddev)
 		}
 	}
 
-	if (mddev->queue) {
+	if (!mddev_is_dm(conf->mddev)) {
 		blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
 		blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);
 		raid10_set_io_opt(conf);
@@ -4074,7 +4074,7 @@ static int raid10_run(struct mddev *mddev)
 		if (first || diff < min_offset_diff)
 			min_offset_diff = diff;
 
-		if (mddev->gendisk)
+		if (!mddev_is_dm(mddev))
 			disk_stack_limits(mddev->gendisk, rdev->bdev,
 					  rdev->data_offset << 9);
 
@@ -4959,7 +4959,7 @@ static void end_reshape(struct r10conf *conf)
 	conf->reshape_safe = MaxSector;
 	spin_unlock_irq(&conf->device_lock);
 
-	if (conf->mddev->queue)
+	if (!mddev_is_dm(conf->mddev))
 		raid10_set_io_opt(conf);
 	conf->fullsync = 0;
 }
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 969df5c584653e..287fc1540a8d32 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -2416,12 +2416,12 @@ static int grow_stripes(struct r5conf *conf, int num)
 	size_t namelen = sizeof(conf->cache_name[0]);
 	int devs = max(conf->raid_disks, conf->previous_raid_disks);
 
-	if (conf->mddev->gendisk)
+	if (mddev_is_dm(conf->mddev))
 		snprintf(conf->cache_name[0], namelen,
-			"raid%d-%s", conf->level, mdname(conf->mddev));
+			"raid%d-%p", conf->level, conf->mddev);
 	else
 		snprintf(conf->cache_name[0], namelen,
-			"raid%d-%p", conf->level, conf->mddev);
+			"raid%d-%s", conf->level, mdname(conf->mddev));
 	snprintf(conf->cache_name[1], namelen, "%.27s-alt", conf->cache_name[0]);
 
 	conf->active_name = 0;
@@ -4278,11 +4278,10 @@ static int handle_stripe_dirtying(struct r5conf *conf,
 			set_bit(STRIPE_DELAYED, &sh->state);
 		}
 	}
-	if (rcw && conf->mddev->queue)
-		mddev_add_trace_msg(conf->mddev,
-				"raid5 rcw %llu %d %d %d",
-				sh->sector, rcw, qread,
-				test_bit(STRIPE_DELAYED, &sh->state));
+	if (rcw && !mddev_is_dm(conf->mddev))
+		blk_add_trace_msg(conf->mddev->queue, "raid5 rcw %llu %d %d %d",
+				(unsigned long long)sh->sector,
+				rcw, qread, test_bit(STRIPE_DELAYED, &sh->state));
 	}
 
 	if (rcw > disks && rmw > disks &&
@@ -5693,7 +5692,7 @@ static void raid5_unplug(struct blk_plug_cb *blk_cb, bool from_schedule)
 	}
 	release_inactive_stripe_list(conf, cb->temp_inactive_list,
 				     NR_STRIPE_HASH_LOCKS);
-	if (mddev->queue)
+	if (!mddev_is_dm(mddev))
 		trace_block_unplug(mddev->queue, cnt, !from_schedule);
 	kfree(cb);
 }
@@ -7942,7 +7941,7 @@ static int raid5_run(struct mddev *mddev)
 			mdname(mddev));
 	md_set_array_sectors(mddev, raid5_size(mddev, 0, 0));
 
-	if (mddev->queue) {
+	if (!mddev_is_dm(mddev)) {
 		int chunk_size;
 		/* read-ahead size must cover two whole stripes, which
 		 * is 2 * (datadisks) * chunksize where 'n' is the
@@ -8546,7 +8545,7 @@ static void end_reshape(struct r5conf *conf)
 		spin_unlock_irq(&conf->device_lock);
 		wake_up(&conf->wait_for_overlap);
 
-		if (conf->mddev->queue)
+		if (!mddev_is_dm(conf->mddev))
 			raid5_set_io_opt(conf);
 	}
 }
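A hedged sketch of the guard pattern this helper standardizes: queue
operations are only legal when MD owns the gendisk, i.e. when the
array was not instantiated by dm-raid. The wrapper function name is
illustrative; it mirrors the mddev_detach() change above.

	/* hedged sketch, not part of the patch */
	static void example_sync_queue(struct mddev *mddev)
	{
		/* ->queue is only valid for arrays MD instantiated itself */
		if (!mddev_is_dm(mddev))
			blk_sync_queue(mddev->queue);
	}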
From patchwork Wed Feb 28 22:56:46 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13576098
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org,
    linux-raid@vger.kernel.org
Subject: [PATCH 07/14] md: add queue limit helpers
Date: Wed, 28 Feb 2024 14:56:46 -0800
Message-Id: <20240228225653.947152-8-hch@lst.de>
In-Reply-To: <20240228225653.947152-1-hch@lst.de>
References: <20240228225653.947152-1-hch@lst.de>

Add a few helpers that wrap the block queue limits API for use in MD.

Signed-off-by: Christoph Hellwig

---
 drivers/md/md.c | 45 +++++++++++++++++++++++++++++++++++++++++++++
 drivers/md/md.h |  3 +++
 2 files changed, 48 insertions(+)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 01a219b2559bdb..bfc38cb4b31014 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5697,6 +5697,51 @@ static const struct kobj_type md_ktype = {
 
 int mdp_major = 0;
 
+/* stack the limit for all rdevs into lim */
+void mddev_stack_rdev_limits(struct mddev *mddev, struct queue_limits *lim)
+{
+	struct md_rdev *rdev;
+
+	rdev_for_each(rdev, mddev) {
+		queue_limits_stack_bdev(lim, rdev->bdev, rdev->data_offset,
+					mddev->gendisk->disk_name);
+	}
+}
+EXPORT_SYMBOL_GPL(mddev_stack_rdev_limits);
+
+/* apply the extra stacking limits from a new rdev into mddev */
+int mddev_stack_new_rdev(struct mddev *mddev, struct md_rdev *rdev)
+{
+	struct queue_limits lim;
+
+	if (mddev_is_dm(mddev))
+		return 0;
+
+	lim = queue_limits_start_update(mddev->queue);
+	queue_limits_stack_bdev(&lim, rdev->bdev, rdev->data_offset,
+				mddev->gendisk->disk_name);
+	return queue_limits_commit_update(mddev->queue, &lim);
+}
+EXPORT_SYMBOL_GPL(mddev_stack_new_rdev);
+
+/* update the optimal I/O size after a reshape */
+void mddev_update_io_opt(struct mddev *mddev, unsigned int nr_stripes)
+{
+	struct queue_limits lim;
+
+	if (mddev_is_dm(mddev))
+		return;
+
+	/* don't bother updating io_opt if we can't suspend the array */
+	if (mddev_suspend(mddev, false) < 0)
+		return;
+	lim = queue_limits_start_update(mddev->gendisk->queue);
+	lim.io_opt = lim.io_min * nr_stripes;
+	queue_limits_commit_update(mddev->gendisk->queue, &lim);
+	mddev_resume(mddev);
+}
+EXPORT_SYMBOL_GPL(mddev_update_io_opt);
+
 static void mddev_delayed_delete(struct work_struct *ws)
 {
 	struct mddev *mddev = container_of(ws, struct mddev, del_work);
diff --git a/drivers/md/md.h b/drivers/md/md.h
index b08e655f8bec41..5db58d076256d3 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -861,6 +861,9 @@ void md_autostart_arrays(int part);
 int md_set_array_info(struct mddev *mddev, struct mdu_array_info_s *info);
 int md_add_new_disk(struct mddev *mddev, struct mdu_disk_info_s *info);
 int do_md_run(struct mddev *mddev);
+void mddev_stack_rdev_limits(struct mddev *mddev, struct queue_limits *lim);
+int mddev_stack_new_rdev(struct mddev *mddev, struct md_rdev *rdev);
+void mddev_update_io_opt(struct mddev *mddev, unsigned int nr_stripes);
 
 extern const struct block_device_operations md_fops;
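A hedged sketch of how a personality's setup path composes these
helpers with the block layer API; the wrapper function and the io_min
value are illustrative, and the raid0 conversion in the next patch is
the canonical in-tree example of this shape.

	/* hedged sketch, not part of the patch */
	static int example_personality_set_limits(struct mddev *mddev)
	{
		struct queue_limits lim;

		blk_set_stacking_limits(&lim);
		lim.io_min = mddev->chunk_sectors << 9;	/* illustrative */
		mddev_stack_rdev_limits(mddev, &lim);
		return queue_limits_set(mddev->queue, &lim);
	}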
From patchwork Wed Feb 28 22:56:47 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13576100
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org,
    linux-raid@vger.kernel.org
Subject: [PATCH 08/14] md/raid0: use the atomic queue limit update APIs
Date: Wed, 28 Feb 2024 14:56:47 -0800
Message-Id: <20240228225653.947152-9-hch@lst.de>
In-Reply-To: <20240228225653.947152-1-hch@lst.de>
References: <20240228225653.947152-1-hch@lst.de>
linux-raid@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Build the queue limits outside the queue and apply them using queue_limits_set. To make the code more obvious also split the queue limits handling into a separate helper function. Signed-off-by: Christoph Hellwig --- drivers/md/raid0.c | 35 ++++++++++++++++++++--------------- 1 file changed, 20 insertions(+), 15 deletions(-) diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c index 9f787ae77ede88..f65aa6ecec0482 100644 --- a/drivers/md/raid0.c +++ b/drivers/md/raid0.c @@ -379,6 +379,19 @@ static void raid0_free(struct mddev *mddev, void *priv) free_conf(mddev, conf); } +static int raid0_set_limits(struct mddev *mddev) +{ + struct queue_limits lim; + + blk_set_stacking_limits(&lim); + lim.max_hw_sectors = mddev->chunk_sectors; + lim.max_write_zeroes_sectors = mddev->chunk_sectors; + lim.io_min = mddev->chunk_sectors << 9; + lim.io_opt = lim.io_min * mddev->raid_disks; + mddev_stack_rdev_limits(mddev, &lim); + return queue_limits_set(mddev->queue, &lim); +} + static int raid0_run(struct mddev *mddev) { struct r0conf *conf; @@ -400,19 +413,9 @@ static int raid0_run(struct mddev *mddev) } conf = mddev->private; if (!mddev_is_dm(mddev)) { - struct md_rdev *rdev; - - blk_queue_max_hw_sectors(mddev->queue, mddev->chunk_sectors); - blk_queue_max_write_zeroes_sectors(mddev->queue, mddev->chunk_sectors); - - blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9); - blk_queue_io_opt(mddev->queue, - (mddev->chunk_sectors << 9) * mddev->raid_disks); - - rdev_for_each(rdev, mddev) { - disk_stack_limits(mddev->gendisk, rdev->bdev, - rdev->data_offset << 9); - } + ret = raid0_set_limits(mddev); + if (ret) + goto out_free_conf; } /* calculate array device size */ @@ -426,8 +429,10 @@ static int raid0_run(struct mddev *mddev) ret = md_integrity_register(mddev); if (ret) - free_conf(mddev, conf); - + goto out_free_conf; + return 0; +out_free_conf: + free_conf(mddev, conf); return ret; } From patchwork Wed Feb 28 22:56:48 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13576101 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8D98D76F19; Wed, 28 Feb 2024 22:56:58 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.137.202.133 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709161020; cv=none; b=Ejvsp+IIA1GQnTwEQoZ25S+Ewij/9XnESGkHny2VqgKIKphjeSeeyvbsQf7GO2/r82TVs4USmWMy2KBSx0zorBxhwERmx+LEzVM9COsMN8mYFZcPLYOoiecrnTmfNr6KestHMrbG+UVAl3jU+cF2x+ePyC6Zji9jFObHJ7Q1t4Q= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709161020; c=relaxed/simple; bh=T16dA4uPo0S1sJZzggzJbFctchiWrhrEDks57vuCR7Q=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=SfE88XsT9aZVhDLuTpC+5mDiZqOjTL0n56Fw7I1sxQFQwwnVXYgAxa+UUjwZ346Gn0XBkRtfifSYoNU5cn6MfNCSSD5rJH2zs+LWVPyjSFThKY5fk6bP7ufquoORPNcifKaSdq3k3t/k3L3IldWmGYDOGGsuy9F4QHoJoUCKbNY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de; spf=none smtp.mailfrom=bombadil.srs.infradead.org; dkim=pass (2048-bit key) 
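To make the arithmetic in raid0_set_limits() concrete, here is a small standalone demo with made-up numbers (a four-disk array created with 64 KiB chunks; none of these values come from the patch itself):

    #include <stdio.h>

    int main(void)
    {
            unsigned int chunk_sectors = 128;          /* 64 KiB in 512-byte sectors */
            unsigned int raid_disks = 4;
            unsigned int io_min = chunk_sectors << 9;  /* 65536 bytes: one chunk */
            unsigned int io_opt = io_min * raid_disks; /* 262144 bytes: one full stripe */

            printf("io_min=%u io_opt=%u\n", io_min, io_opt);
            return 0;
    }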
From patchwork Wed Feb 28 22:56:48 2024
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 09/14] md/raid1: use the atomic queue limit update APIs
Date: Wed, 28 Feb 2024 14:56:48 -0800
Message-Id: <20240228225653.947152-10-hch@lst.de>
In-Reply-To: <20240228225653.947152-1-hch@lst.de>
References: <20240228225653.947152-1-hch@lst.de>

Build the queue limits outside the queue and apply them using
queue_limits_set.  To make the code more obvious, also split the queue
limits handling into a separate helper function.

Signed-off-by: Christoph Hellwig
---
 drivers/md/raid1.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 3b1227f67a6d61..75329ab2dbd8de 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1782,10 +1782,9 @@ static int raid1_add_disk(struct mddev *mddev, struct md_rdev *rdev)
         for (mirror = first; mirror <= last; mirror++) {
                 p = conf->mirrors + mirror;
                 if (!p->rdev) {
-                        if (!mddev_is_dm(mddev))
-                                disk_stack_limits(mddev->gendisk, rdev->bdev,
-                                                  rdev->data_offset << 9);
-
+                        err = mddev_stack_new_rdev(mddev, rdev);
+                        if (err)
+                                return err;
                         p->head_position = 0;
                         rdev->raid_disk = mirror;
                         err = 0;
@@ -3077,12 +3076,21 @@ static struct r1conf *setup_conf(struct mddev *mddev)
         return ERR_PTR(err);
 }
 
+static int raid1_set_limits(struct mddev *mddev)
+{
+        struct queue_limits lim;
+
+        blk_set_stacking_limits(&lim);
+        lim.max_write_zeroes_sectors = 0;
+        mddev_stack_rdev_limits(mddev, &lim);
+        return queue_limits_set(mddev->queue, &lim);
+}
+
 static void raid1_free(struct mddev *mddev, void *priv);
 
 static int raid1_run(struct mddev *mddev)
 {
         struct r1conf *conf;
         int i;
-        struct md_rdev *rdev;
         int ret;
 
         if (mddev->level != 1) {
@@ -3110,10 +3118,9 @@ static int raid1_run(struct mddev *mddev)
                 return PTR_ERR(conf);
 
         if (!mddev_is_dm(mddev)) {
-                blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
-                rdev_for_each(rdev, mddev)
-                        disk_stack_limits(mddev->gendisk, rdev->bdev,
-                                          rdev->data_offset << 9);
+                ret = raid1_set_limits(mddev);
+                if (ret)
+                        goto abort;
         }
 
         mddev->degraded = 0;
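Worth noting: raid1_add_disk() now runs a hot-added member through the stacking machinery, so the new device can only tighten the array's limits, and a failure is returned instead of silently ignored. The flavor of the stacking rules, reduced to two representative fields (a simplification for illustration, not the actual blk_stack_limits() code):

    /* Granularities combine with max(), size caps with min(). */
    static void stack_two_fields(struct queue_limits *t,
                                 const struct queue_limits *b)
    {
            t->logical_block_size = max(t->logical_block_size,
                                        b->logical_block_size);
            t->max_sectors = min(t->max_sectors, b->max_sectors);
    }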
From patchwork Wed Feb 28 22:56:49 2024
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 10/14] md/raid5: use the atomic queue limit update APIs
Date: Wed, 28 Feb 2024 14:56:49 -0800
Message-Id: <20240228225653.947152-11-hch@lst.de>
In-Reply-To: <20240228225653.947152-1-hch@lst.de>
References: <20240228225653.947152-1-hch@lst.de>

Build the queue limits outside the queue and apply them using
queue_limits_set.  To make the code more obvious, also split the queue
limits handling into separate helpers.

Signed-off-by: Christoph Hellwig
---
 drivers/md/raid5.c | 130 ++++++++++++++++++++++-----------------------
 1 file changed, 65 insertions(+), 65 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 287fc1540a8d32..8d2e3f9419a7f3 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -7673,10 +7673,65 @@ static int only_parity(int raid_disk, int algo, int raid_disks, int max_degraded)
         return 0;
 }
 
-static void raid5_set_io_opt(struct r5conf *conf)
+static int raid5_set_limits(struct mddev *mddev)
 {
-        blk_queue_io_opt(conf->mddev->queue, (conf->chunk_sectors << 9) *
-                         (conf->raid_disks - conf->max_degraded));
+        struct r5conf *conf = mddev->private;
+        struct queue_limits lim;
+        int data_disks, stripe;
+        struct md_rdev *rdev;
+
+        /*
+         * The read-ahead size must cover two whole stripes, which is
+         * 2 * (data disks) * chunk size, where data disks is the number
+         * of raid devices minus the number of parity devices.
+         */
+        data_disks = conf->previous_raid_disks - conf->max_degraded;
+
+        /*
+         * We can only discard a whole stripe.  It doesn't make sense to
+         * discard the data disks but write the parity disk.
+         */
+        stripe = roundup_pow_of_two(data_disks * (mddev->chunk_sectors << 9));
+
+        blk_set_stacking_limits(&lim);
+        lim.io_min = mddev->chunk_sectors << 9;
+        lim.io_opt = lim.io_min * (conf->raid_disks - conf->max_degraded);
+        lim.raid_partial_stripes_expensive = 1;
+        lim.discard_granularity = stripe;
+        lim.max_write_zeroes_sectors = 0;
+        mddev_stack_rdev_limits(mddev, &lim);
+        rdev_for_each(rdev, mddev)
+                queue_limits_stack_bdev(&lim, rdev->bdev, rdev->new_data_offset,
+                                        mddev->gendisk->disk_name);
+
+        /*
+         * Zeroing is required for discard, otherwise data could be lost.
+         *
+         * Consider a scenario: discard a stripe (the stripe could be
+         * inconsistent if discard_zeroes_data is 0); write one disk of the
+         * stripe (the stripe could be inconsistent again depending on which
+         * disks are used to calculate parity); the disk is broken; the stripe
+         * data of this disk is lost.
+         *
+         * We only allow DISCARD if the sysadmin has confirmed that only safe
+         * devices are in use by setting a module parameter.  A better idea
+         * might be to turn DISCARD into WRITE_ZEROES requests, as that is
+         * required to be safe.
+         */
+        if (!devices_handle_discard_safely ||
+            lim.max_discard_sectors < (stripe >> 9) ||
+            lim.discard_granularity < stripe)
+                lim.max_hw_discard_sectors = 0;
+
+        /*
+         * Requests require having a bitmap for each stripe.
+         * Limit the max sectors based on this.
+         */
+        lim.max_hw_sectors = RAID5_MAX_REQ_STRIPES << RAID5_STRIPE_SHIFT(conf);
+
+        /* No restrictions on the number of segments in the request */
+        lim.max_segments = USHRT_MAX;
+
+        return queue_limits_set(mddev->queue, &lim);
 }
 
 static int raid5_run(struct mddev *mddev)
@@ -7689,6 +7744,7 @@ static int raid5_run(struct mddev *mddev)
         int i;
         long long min_offset_diff = 0;
         int first = 1;
+        int ret = -EIO;
 
         if (mddev->recovery_cp != MaxSector)
                 pr_notice("md/raid:%s: not clean -- starting background reconstruction\n",
@@ -7942,65 +7998,9 @@ static int raid5_run(struct mddev *mddev)
         md_set_array_sectors(mddev, raid5_size(mddev, 0, 0));
 
         if (!mddev_is_dm(mddev)) {
-                int chunk_size;
-                /* read-ahead size must cover two whole stripes, which
-                 * is 2 * (datadisks) * chunksize where 'n' is the
-                 * number of raid devices
-                 */
-                int data_disks = conf->previous_raid_disks - conf->max_degraded;
-                int stripe = data_disks *
-                        ((mddev->chunk_sectors << 9) / PAGE_SIZE);
-
-                chunk_size = mddev->chunk_sectors << 9;
-                blk_queue_io_min(mddev->queue, chunk_size);
-                raid5_set_io_opt(conf);
-                mddev->queue->limits.raid_partial_stripes_expensive = 1;
-                /*
-                 * We can only discard a whole stripe. It doesn't make sense to
-                 * discard data disk but write parity disk
-                 */
-                stripe = stripe * PAGE_SIZE;
-                stripe = roundup_pow_of_two(stripe);
-                mddev->queue->limits.discard_granularity = stripe;
-
-                blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
-
-                rdev_for_each(rdev, mddev) {
-                        disk_stack_limits(mddev->gendisk, rdev->bdev,
-                                          rdev->data_offset << 9);
-                        disk_stack_limits(mddev->gendisk, rdev->bdev,
-                                          rdev->new_data_offset << 9);
-                }
-
-                /*
-                 * zeroing is required, otherwise data
-                 * could be lost. Consider a scenario: discard a stripe
-                 * (the stripe could be inconsistent if
-                 * discard_zeroes_data is 0); write one disk of the
-                 * stripe (the stripe could be inconsistent again
-                 * depending on which disks are used to calculate
-                 * parity); the disk is broken; The stripe data of this
-                 * disk is lost.
-                 *
-                 * We only allow DISCARD if the sysadmin has confirmed that
-                 * only safe devices are in use by setting a module parameter.
-                 * A better idea might be to turn DISCARD into WRITE_ZEROES
-                 * requests, as that is required to be safe.
-                 */
-                if (!devices_handle_discard_safely ||
-                    mddev->queue->limits.max_discard_sectors < (stripe >> 9) ||
-                    mddev->queue->limits.discard_granularity < stripe)
-                        blk_queue_max_discard_sectors(mddev->queue, 0);
-
-                /*
-                 * Requests require having a bitmap for each stripe.
-                 * Limit the max sectors based on this.
-                 */
-                blk_queue_max_hw_sectors(mddev->queue,
-                        RAID5_MAX_REQ_STRIPES << RAID5_STRIPE_SHIFT(conf));
-
-                /* No restrictions on the number of segments in the request */
-                blk_queue_max_segments(mddev->queue, USHRT_MAX);
+                ret = raid5_set_limits(mddev);
+                if (ret)
+                        goto abort;
         }
 
         if (log_init(conf, journal_dev, raid5_has_ppl(conf)))
@@ -8013,7 +8013,7 @@ static int raid5_run(struct mddev *mddev)
         free_conf(conf);
         mddev->private = NULL;
         pr_warn("md/raid:%s: failed to run raid set.\n", mdname(mddev));
-        return -EIO;
+        return ret;
 }
 
 static void raid5_free(struct mddev *mddev, void *priv)
@@ -8545,8 +8545,8 @@ static void end_reshape(struct r5conf *conf)
                 spin_unlock_irq(&conf->device_lock);
                 wake_up(&conf->wait_for_overlap);
 
-                if (!mddev_is_dm(conf->mddev))
-                        raid5_set_io_opt(conf);
+                mddev_update_io_opt(conf->mddev,
+                                    conf->raid_disks - conf->max_degraded);
         }
 }
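The stripe rounding in raid5_set_limits() is easier to follow with sample numbers. A standalone demo under an assumed geometry (a four-disk RAID5, so one parity disk, with 512 KiB chunks; the helper below is a userspace stand-in for the kernel's roundup_pow_of_two()):

    #include <stdio.h>

    static unsigned long roundup_pow_of_two(unsigned long n)
    {
            unsigned long p = 1;

            while (p < n)
                    p <<= 1;
            return p;
    }

    int main(void)
    {
            unsigned long chunk_bytes = 1024UL << 9;   /* chunk_sectors == 1024 */
            unsigned long data_disks = 4 - 1;          /* raid_disks - max_degraded */
            unsigned long stripe = roundup_pow_of_two(data_disks * chunk_bytes);

            /* 3 * 524288 = 1572864 rounds up to 2097152 (2 MiB) */
            printf("discard_granularity = %lu\n", stripe);
            return 0;
    }

With these numbers, discard stays enabled only if devices_handle_discard_safely is set, the stacked max_discard_sectors covers at least stripe >> 9 == 4096 sectors, and the stacked discard_granularity has not dropped below the 2 MiB stripe.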
From patchwork Wed Feb 28 22:56:50 2024
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 11/14] md/raid10: use the atomic queue limit update APIs
Date: Wed, 28 Feb 2024 14:56:50 -0800
Message-Id: <20240228225653.947152-12-hch@lst.de>
In-Reply-To: <20240228225653.947152-1-hch@lst.de>
References: <20240228225653.947152-1-hch@lst.de>

Build the queue limits outside the queue and apply them using
queue_limits_set.  To make the code more obvious, also split the queue
limits handling into separate helpers.

Signed-off-by: Christoph Hellwig
---
 drivers/md/raid10.c | 60 +++++++++++++++++++++++++--------------------
 1 file changed, 33 insertions(+), 27 deletions(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 95fa9e728f95a9..692a3bd94100e2 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -2124,10 +2124,9 @@ static int raid10_add_disk(struct mddev *mddev, struct md_rdev *rdev)
                         continue;
                 }
 
-                if (!mddev_is_dm(mddev))
-                        disk_stack_limits(mddev->gendisk, rdev->bdev,
-                                          rdev->data_offset << 9);
-
+                err = mddev_stack_new_rdev(mddev, rdev);
+                if (err)
+                        return err;
                 p->head_position = 0;
                 p->recovery_disabled = mddev->recovery_disabled - 1;
                 rdev->raid_disk = mirror;
@@ -2143,10 +2142,9 @@ static int raid10_add_disk(struct mddev *mddev, struct md_rdev *rdev)
                 clear_bit(In_sync, &rdev->flags);
                 set_bit(Replacement, &rdev->flags);
                 rdev->raid_disk = repl_slot;
-                err = 0;
-                if (!mddev_is_dm(mddev))
-                        disk_stack_limits(mddev->gendisk, rdev->bdev,
-                                          rdev->data_offset << 9);
+                err = mddev_stack_new_rdev(mddev, rdev);
+                if (err)
+                        return err;
                 conf->fullsync = 1;
                 WRITE_ONCE(p->replacement, rdev);
         }
@@ -3995,14 +3993,26 @@ static struct r10conf *setup_conf(struct mddev *mddev)
         return ERR_PTR(err);
 }
 
-static void raid10_set_io_opt(struct r10conf *conf)
+static unsigned int raid10_nr_stripes(struct r10conf *conf)
 {
-        int raid_disks = conf->geo.raid_disks;
+        unsigned int raid_disks = conf->geo.raid_disks;
 
-        if (!(conf->geo.raid_disks % conf->geo.near_copies))
-                raid_disks /= conf->geo.near_copies;
-        blk_queue_io_opt(conf->mddev->queue, (conf->mddev->chunk_sectors << 9) *
-                         raid_disks);
+        if (conf->geo.raid_disks % conf->geo.near_copies)
+                return raid_disks;
+        return raid_disks / conf->geo.near_copies;
+}
+
+static int raid10_set_queue_limits(struct mddev *mddev)
+{
+        struct r10conf *conf = mddev->private;
+        struct queue_limits lim;
+
+        blk_set_stacking_limits(&lim);
+        lim.max_write_zeroes_sectors = 0;
+        lim.io_min = mddev->chunk_sectors << 9;
+        lim.io_opt = lim.io_min * raid10_nr_stripes(conf);
+        mddev_stack_rdev_limits(mddev, &lim);
+        return queue_limits_set(mddev->queue, &lim);
 }
 
 static int raid10_run(struct mddev *mddev)
@@ -4014,6 +4024,7 @@ static int raid10_run(struct mddev *mddev)
         sector_t size;
         sector_t min_offset_diff = 0;
         int first = 1;
+        int ret = -EIO;
 
         if (mddev->private == NULL) {
                 conf = setup_conf(mddev);
@@ -4040,12 +4051,6 @@ static int raid10_run(struct mddev *mddev)
                 }
         }
 
-        if (!mddev_is_dm(conf->mddev)) {
-                blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
-                blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);
-                raid10_set_io_opt(conf);
-        }
-
         rdev_for_each(rdev, mddev) {
                 long long diff;
 
@@ -4074,14 +4079,16 @@ static int raid10_run(struct mddev *mddev)
                 if (first || diff < min_offset_diff)
                         min_offset_diff = diff;
 
-                if (!mddev_is_dm(mddev))
-                        disk_stack_limits(mddev->gendisk, rdev->bdev,
-                                          rdev->data_offset << 9);
-
                 disk->head_position = 0;
                 first = 0;
         }
 
+        if (!mddev_is_dm(conf->mddev)) {
+                ret = raid10_set_queue_limits(mddev);
+                if (ret)
+                        goto out_free_conf;
+        }
+
         /* need to check that every block has at least one working mirror */
         if (!enough(conf, -1)) {
                 pr_err("md/raid10:%s: not enough operational mirrors.\n",
@@ -4182,7 +4189,7 @@
         raid10_free_conf(conf);
         mddev->private = NULL;
 out:
-        return -EIO;
+        return ret;
 }
 
 static void raid10_free(struct mddev *mddev, void *priv)
@@ -4959,8 +4966,7 @@ static void end_reshape(struct r10conf *conf)
         conf->reshape_safe = MaxSector;
         spin_unlock_irq(&conf->device_lock);
 
-        if (!mddev_is_dm(conf->mddev))
-                raid10_set_io_opt(conf);
+        mddev_update_io_opt(conf->mddev, raid10_nr_stripes(conf));
         conf->fullsync = 0;
 }
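The geometry handling in raid10_nr_stripes() with made-up layouts (illustrative values, not from the patch):

    /* raid_disks = 4, near_copies = 2: 4 % 2 == 0, so 4 / 2 = 2 stripes */
    /* raid_disks = 5, near_copies = 2: 5 % 2 != 0, so fall back to 5    */

So a classic near-2 layout on four devices with 64 KiB chunks advertises io_opt = 2 * 64 KiB = 128 KiB, while the uneven five-device layout counts every disk.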
From patchwork Wed Feb 28 22:56:51 2024
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 12/14] md: don't initialize queue limits
Date: Wed, 28 Feb 2024 14:56:51 -0800
Message-Id: <20240228225653.947152-13-hch@lst.de>
In-Reply-To: <20240228225653.947152-1-hch@lst.de>
References: <20240228225653.947152-1-hch@lst.de>

Initial queue limits are now set from ->run.  Remove the superfluous
initialization in md_alloc and level_store.

Signed-off-by: Christoph Hellwig
---
 drivers/md/md.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index bfc38cb4b31014..eab1b36c1d02ef 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -4155,7 +4155,6 @@ level_store(struct mddev *mddev, const char *buf, size_t len)
                 mddev->in_sync = 1;
                 del_timer_sync(&mddev->safemode_timer);
         }
-        blk_set_stacking_limits(&mddev->queue->limits);
         pers->run(mddev);
         set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags);
         if (!mddev->thread)
@@ -5825,7 +5824,6 @@ struct mddev *md_alloc(dev_t dev, char *name)
         disk->private_data = mddev;
 
         mddev->queue = disk->queue;
-        blk_set_stacking_limits(&mddev->queue->limits);
         blk_queue_write_cache(mddev->queue, true, true);
         disk->events |= DISK_EVENT_MEDIA_CHANGE;
         mddev->gendisk = disk;
From patchwork Wed Feb 28 22:56:52 2024
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 13/14] md: remove mddev->queue
Date: Wed, 28 Feb 2024 14:56:52 -0800
Message-Id: <20240228225653.947152-14-hch@lst.de>
In-Reply-To: <20240228225653.947152-1-hch@lst.de>
References: <20240228225653.947152-1-hch@lst.de>

Just use the request_queue from the gendisk pointer in the relatively
few places that still need it.
Signed-off-by: Christoph Hellwig
---
 drivers/md/md.c        | 22 ++++++++++++----------
 drivers/md/md.h        |  5 ++---
 drivers/md/raid0.c     |  2 +-
 drivers/md/raid1.c     |  2 +-
 drivers/md/raid10.c    |  2 +-
 drivers/md/raid5-ppl.c |  3 ++-
 drivers/md/raid5.c     | 13 +++++++------
 7 files changed, 26 insertions(+), 23 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index eab1b36c1d02ef..1cb4a33148aac9 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5716,10 +5716,10 @@ int mddev_stack_new_rdev(struct mddev *mddev, struct md_rdev *rdev)
         if (mddev_is_dm(mddev))
                 return 0;
 
-        lim = queue_limits_start_update(mddev->queue);
+        lim = queue_limits_start_update(mddev->gendisk->queue);
         queue_limits_stack_bdev(&lim, rdev->bdev, rdev->data_offset,
                                 mddev->gendisk->disk_name);
-        return queue_limits_commit_update(mddev->queue, &lim);
+        return queue_limits_commit_update(mddev->gendisk->queue, &lim);
 }
 EXPORT_SYMBOL_GPL(mddev_stack_new_rdev);
 
@@ -5823,8 +5823,7 @@ struct mddev *md_alloc(dev_t dev, char *name)
         disk->fops = &md_fops;
         disk->private_data = mddev;
 
-        mddev->queue = disk->queue;
-        blk_queue_write_cache(mddev->queue, true, true);
+        blk_queue_write_cache(disk->queue, true, true);
         disk->events |= DISK_EVENT_MEDIA_CHANGE;
         mddev->gendisk = disk;
         error = add_disk(disk);
@@ -6126,6 +6125,7 @@ int md_run(struct mddev *mddev)
         }
 
         if (!mddev_is_dm(mddev)) {
+                struct request_queue *q = mddev->gendisk->queue;
                 bool nonrot = true;
 
                 rdev_for_each(rdev, mddev) {
@@ -6137,14 +6137,14 @@ int md_run(struct mddev *mddev)
                 if (mddev->degraded)
                         nonrot = false;
                 if (nonrot)
-                        blk_queue_flag_set(QUEUE_FLAG_NONROT, mddev->queue);
+                        blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
                 else
-                        blk_queue_flag_clear(QUEUE_FLAG_NONROT, mddev->queue);
-                blk_queue_flag_set(QUEUE_FLAG_IO_STAT, mddev->queue);
+                        blk_queue_flag_clear(QUEUE_FLAG_NONROT, q);
+                blk_queue_flag_set(QUEUE_FLAG_IO_STAT, q);
 
                 /* Set the NOWAIT flags if all underlying devices support it */
                 if (nowait)
-                        blk_queue_flag_set(QUEUE_FLAG_NOWAIT, mddev->queue);
+                        blk_queue_flag_set(QUEUE_FLAG_NOWAIT, q);
         }
         if (pers->sync_request) {
                 if (mddev->kobj.sd &&
@@ -6381,8 +6381,10 @@ static void mddev_detach(struct mddev *mddev)
                 mddev->pers->quiesce(mddev, 0);
         }
         md_unregister_thread(mddev, &mddev->thread);
+
+        /* the unplug fn references 'conf' */
         if (!mddev_is_dm(mddev))
-                blk_sync_queue(mddev->queue); /* the unplug fn references 'conf'*/
+                blk_sync_queue(mddev->gendisk->queue);
 }
 
 static void __md_stop(struct mddev *mddev)
@@ -7110,7 +7112,7 @@ static int hot_add_disk(struct mddev *mddev, dev_t dev)
         if (!bdev_nowait(rdev->bdev)) {
                 pr_info("%s: Disabling nowait because %pg does not support nowait\n",
                         mdname(mddev), rdev->bdev);
-                blk_queue_flag_clear(QUEUE_FLAG_NOWAIT, mddev->queue);
+                blk_queue_flag_clear(QUEUE_FLAG_NOWAIT, mddev->gendisk->queue);
         }
         /*
          * Kick recovery, maybe this spare has to be added to the
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 5db58d076256d3..dc7d3dc1569934 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -469,7 +469,6 @@ struct mddev {
         struct timer_list               safemode_timer;
         struct percpu_ref               writes_pending;
         int                             sync_checkers;  /* # of threads checking writes_pending */
-        struct request_queue            *queue; /* for plugging ... */
 
         struct bitmap                   *bitmap; /* the bitmap for the device */
         struct {
@@ -822,7 +821,7 @@ static inline void mddev_check_write_zeroes(struct mddev *mddev, struct bio *bio)
 {
         if (bio_op(bio) == REQ_OP_WRITE_ZEROES &&
             !bio->bi_bdev->bd_disk->queue->limits.max_write_zeroes_sectors)
-                mddev->queue->limits.max_write_zeroes_sectors = 0;
+                mddev->gendisk->queue->limits.max_write_zeroes_sectors = 0;
 }
 
 static inline int mddev_suspend_and_lock(struct mddev *mddev)
@@ -885,7 +884,7 @@ static inline void mddev_trace_remap(struct mddev *mddev, struct bio *bio,
 #define mddev_add_trace_msg(mddev, fmt, args...)                        \
 do {                                                                    \
         if (!mddev_is_dm(mddev))                                        \
-                blk_add_trace_msg((mddev)->queue, fmt, ##args);         \
+                blk_add_trace_msg((mddev)->gendisk->queue, fmt, ##args); \
 } while (0)
 
 #endif /* _MD_MD_H */
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index f65aa6ecec0482..c5d4aeb68404c9 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -389,7 +389,7 @@ static int raid0_set_limits(struct mddev *mddev)
         lim.io_min = mddev->chunk_sectors << 9;
         lim.io_opt = lim.io_min * mddev->raid_disks;
         mddev_stack_rdev_limits(mddev, &lim);
-        return queue_limits_set(mddev->queue, &lim);
+        return queue_limits_set(mddev->gendisk->queue, &lim);
 }
 
 static int raid0_run(struct mddev *mddev)
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 75329ab2dbd8de..445e3d3ff9ff7d 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3083,7 +3083,7 @@ static int raid1_set_limits(struct mddev *mddev)
         blk_set_stacking_limits(&lim);
         lim.max_write_zeroes_sectors = 0;
         mddev_stack_rdev_limits(mddev, &lim);
-        return queue_limits_set(mddev->queue, &lim);
+        return queue_limits_set(mddev->gendisk->queue, &lim);
 }
 
 static void raid1_free(struct mddev *mddev, void *priv);
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 692a3bd94100e2..fd960a5b29fe49 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -4012,7 +4012,7 @@ static int raid10_set_queue_limits(struct mddev *mddev)
         lim.io_min = mddev->chunk_sectors << 9;
         lim.io_opt = lim.io_min * raid10_nr_stripes(conf);
         mddev_stack_rdev_limits(mddev, &lim);
-        return queue_limits_set(mddev->queue, &lim);
+        return queue_limits_set(mddev->gendisk->queue, &lim);
 }
 
 static int raid10_run(struct mddev *mddev)
diff --git a/drivers/md/raid5-ppl.c b/drivers/md/raid5-ppl.c
index da4ba736c4f0c9..a70cbec12ed017 100644
--- a/drivers/md/raid5-ppl.c
+++ b/drivers/md/raid5-ppl.c
@@ -1393,7 +1393,8 @@ int ppl_init_log(struct r5conf *conf)
                 ppl_conf->signature = ~crc32c_le(~0, mddev->uuid, sizeof(mddev->uuid));
                 ppl_conf->block_size = 512;
         } else {
-                ppl_conf->block_size = queue_logical_block_size(mddev->queue);
+                ppl_conf->block_size =
+                        queue_logical_block_size(mddev->gendisk->queue);
         }
 
         for (i = 0; i < ppl_conf->count; i++) {
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 8d2e3f9419a7f3..651fc4d603dc59 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -4279,9 +4279,10 @@ static int handle_stripe_dirtying(struct r5conf *conf,
                         }
                 }
                 if (rcw && !mddev_is_dm(conf->mddev))
-                        blk_add_trace_msg(conf->mddev->queue, "raid5 rcw %llu %d %d %d",
-                                          (unsigned long long)sh->sector,
-                                          rcw, qread, test_bit(STRIPE_DELAYED, &sh->state));
+                        blk_add_trace_msg(conf->mddev->gendisk->queue,
+                                          "raid5 rcw %llu %d %d %d",
+                                          (unsigned long long)sh->sector, rcw, qread,
+                                          test_bit(STRIPE_DELAYED, &sh->state));
         }
 
         if (rcw > disks && rmw > disks &&
@@ -5693,7 +5694,7 @@ static void raid5_unplug(struct blk_plug_cb *blk_cb, bool from_schedule)
                 release_inactive_stripe_list(conf, cb->temp_inactive_list,
                                              NR_STRIPE_HASH_LOCKS);
                 if (!mddev_is_dm(mddev))
-                        trace_block_unplug(mddev->queue, cnt, !from_schedule);
+                        trace_block_unplug(mddev->gendisk->queue, cnt, !from_schedule);
         kfree(cb);
 }
 
@@ -7073,7 +7074,7 @@ raid5_store_skip_copy(struct mddev *mddev, const char *page, size_t len)
         if (!conf)
                 err = -ENODEV;
         else if (new != conf->skip_copy) {
-                struct request_queue *q = mddev->queue;
+                struct request_queue *q = mddev->gendisk->queue;
 
                 conf->skip_copy = new;
                 if (new)
@@ -7731,7 +7732,7 @@ static int raid5_set_limits(struct mddev *mddev)
         /* No restrictions on the number of segments in the request */
         lim.max_segments = USHRT_MAX;
 
-        return queue_limits_set(mddev->queue, &lim);
+        return queue_limits_set(mddev->gendisk->queue, &lim);
 }
 
 static int raid5_run(struct mddev *mddev)
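With mddev->queue gone, the remaining call sites spell out mddev->gendisk->queue, usually behind an mddev_is_dm() check, since a personality driven by dm-raid has no MD gendisk of its own. A driver-local accessor would be one conceivable further cleanup; the sketch below is hypothetical, not part of the series, which open-codes the access at each call site:

    /* Hypothetical helper, only valid when the array owns a gendisk. */
    static inline struct request_queue *mddev_queue(struct mddev *mddev)
    {
            WARN_ON_ONCE(mddev_is_dm(mddev));       /* no gendisk under dm-raid */
            return mddev->gendisk->queue;
    }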
From patchwork Wed Feb 28 22:56:53 2024
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 14/14] block: remove disk_stack_limits
Date: Wed, 28 Feb 2024 14:56:53 -0800
Message-Id: <20240228225653.947152-15-hch@lst.de>
In-Reply-To: <20240228225653.947152-1-hch@lst.de>
References: <20240228225653.947152-1-hch@lst.de>

disk_stack_limits is unused now, remove it.

Signed-off-by: Christoph Hellwig
---
 block/blk-settings.c   | 24 ------------------------
 include/linux/blkdev.h |  2 --
 2 files changed, 26 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 13865a9f89726c..3c7d8d638ab59d 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -916,30 +916,6 @@ void queue_limits_stack_bdev(struct queue_limits *t, struct block_device *bdev,
 }
 EXPORT_SYMBOL_GPL(queue_limits_stack_bdev);
 
-/**
- * disk_stack_limits - adjust queue limits for stacked drivers
- * @disk:  MD/DM gendisk (top)
- * @bdev:  the underlying block device (bottom)
- * @offset:  offset to beginning of data within component device
- *
- * Description:
- *    Merges the limits for a top level gendisk and a bottom level
- *    block_device.
- */
-void disk_stack_limits(struct gendisk *disk, struct block_device *bdev,
-                       sector_t offset)
-{
-        struct request_queue *t = disk->queue;
-
-        if (blk_stack_limits(&t->limits, &bdev_get_queue(bdev)->limits,
-                             get_start_sect(bdev) + (offset >> 9)) < 0)
-                pr_notice("%s: Warning: Device %pg is misaligned\n",
-                          disk->disk_name, bdev);
-
-        disk_update_readahead(disk);
-}
-EXPORT_SYMBOL(disk_stack_limits);
-
 /**
  * blk_queue_update_dma_pad - update pad mask
  * @q: the request queue for the device
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 285e82723d641f..75c909865a8b7b 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -926,8 +926,6 @@ extern int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
                             sector_t offset);
 void queue_limits_stack_bdev(struct queue_limits *t, struct block_device *bdev,
                 sector_t offset, const char *pfx);
-extern void disk_stack_limits(struct gendisk *disk, struct block_device *bdev,
-                              sector_t offset);
 extern void blk_queue_update_dma_pad(struct request_queue *, unsigned int);
 extern void blk_queue_segment_boundary(struct request_queue *, unsigned long);
 extern void blk_queue_virt_boundary(struct request_queue *, unsigned long);
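For any remaining out-of-tree caller of the removed helper, the replacement pattern used throughout this series looks roughly like the sketch below. Note the conventions change: disk_stack_limits() took a byte offset (the md callers passed rdev->data_offset << 9), while queue_limits_stack_bdev() takes sectors, and errors are now returned from the commit instead of a misalignment merely being logged.

    /* Before: applied straight to the live queue, misalignment only logged. */
    disk_stack_limits(disk, bdev, data_offset_sectors << 9);

    /* After (sketch): stack into a local snapshot, then commit atomically. */
    struct queue_limits lim = queue_limits_start_update(disk->queue);
    int err;

    queue_limits_stack_bdev(&lim, bdev, data_offset_sectors, disk->disk_name);
    err = queue_limits_commit_update(disk->queue, &lim);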