From patchwork Fri Feb 23 16:12:39 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13569415
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 1/9] block: add a queue_limits_set helper
Date: Fri, 23 Feb 2024 17:12:39 +0100
Message-Id: <20240223161247.3998821-2-hch@lst.de>
In-Reply-To: <20240223161247.3998821-1-hch@lst.de>
References: <20240223161247.3998821-1-hch@lst.de>
Add a small wrapper around queue_limits_commit_update for stacking
drivers that do not want to update existing limits, but instead apply an
entirely new set.

Signed-off-by: Christoph Hellwig
---
 block/blk-settings.c   | 18 ++++++++++++++++++
 include/linux/blkdev.h |  1 +
 2 files changed, 19 insertions(+)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index b6bbe683d218fa..1989a177be201b 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -266,6 +266,24 @@ int queue_limits_commit_update(struct request_queue *q,
 }
 EXPORT_SYMBOL_GPL(queue_limits_commit_update);
 
+/**
+ * queue_limits_set - apply queue limits to queue
+ * @q: queue to update
+ * @lim: limits to apply
+ *
+ * Apply the limits in @lim that were freshly initialized to @q.
+ * To update existing limits use queue_limits_start_update() and
+ * queue_limits_commit_update() instead.
+ *
+ * Returns 0 if successful, else a negative error code.
+ */
+int queue_limits_set(struct request_queue *q, struct queue_limits *lim)
+{
+        mutex_lock(&q->limits_lock);
+        return queue_limits_commit_update(q, lim);
+}
+EXPORT_SYMBOL_GPL(queue_limits_set);
+
 /**
  * blk_queue_bounce_limit - set bounce buffer limit for queue
  * @q: the request queue for the device
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index a14ea934413850..dd510ad7ce4b45 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -889,6 +889,7 @@ queue_limits_start_update(struct request_queue *q)
 }
 int queue_limits_commit_update(struct request_queue *q,
                struct queue_limits *lim);
+int queue_limits_set(struct request_queue *q, struct queue_limits *lim);
 
 /*
  * Access functions for manipulating queue properties
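For readers who have not yet used the atomic queue limits API, the sketch
below contrasts the two flows distinguished above: applying a freshly
built set of limits with queue_limits_set() versus adjusting the live
limits with the queue_limits_start_update()/queue_limits_commit_update()
pair.  The example_* functions and the specific field values are
hypothetical; only the block layer helpers named in this series are real.

#include <linux/blkdev.h>

/*
 * Hypothetical driver path: build a brand-new limit set on the stack and
 * apply it in one go with queue_limits_set() (added in this patch).
 */
static int example_apply_fresh_limits(struct request_queue *q)
{
        struct queue_limits lim;

        blk_set_stacking_limits(&lim); /* start from the stacking defaults */
        lim.io_min = 4096;             /* hypothetical minimum I/O size */
        return queue_limits_set(q, &lim);
}

/*
 * Contrast: adjusting limits that are already live uses the
 * start_update/commit_update pair, which snapshots q->limits under
 * q->limits_lock and re-applies the modified copy.
 */
static int example_tweak_existing_limits(struct request_queue *q)
{
        struct queue_limits lim = queue_limits_start_update(q);

        lim.max_write_zeroes_sectors = 0;      /* hypothetical tweak */
        return queue_limits_commit_update(q, &lim);
}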
From patchwork Fri Feb 23 16:12:40 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13569416
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 2/9] block: add a queue_limits_stack_bdev helper
Date: Fri, 23 Feb 2024 17:12:40 +0100
Message-Id: <20240223161247.3998821-3-hch@lst.de>
In-Reply-To: <20240223161247.3998821-1-hch@lst.de>
References: <20240223161247.3998821-1-hch@lst.de>

Add a small wrapper around blk_stack_limits that allows passing a bdev
for the bottom device and prints a warning in case of a misaligned
device.  The name fits into the new queue limits API, and the intent is
to eventually replace disk_stack_limits.

Signed-off-by: Christoph Hellwig
---
 block/blk-settings.c   | 24 ++++++++++++++++++++++++
 include/linux/blkdev.h |  2 ++
 2 files changed, 26 insertions(+)
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 1989a177be201b..f14d3a18f9e2f0 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -891,6 +891,30 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 }
 EXPORT_SYMBOL(blk_stack_limits);
 
+/**
+ * queue_limits_stack_bdev - adjust queue_limits for stacked devices
+ * @t:	the stacking driver limits (top device)
+ * @bdev:  the underlying block device (bottom)
+ * @offset:  offset to beginning of data within component device
+ *
+ * Description:
+ *    This function is used by stacking drivers like MD and DM to ensure
+ *    that all component devices have compatible block sizes and
+ *    alignments.  The stacking driver must provide a queue_limits
+ *    struct (top) and then iteratively call the stacking function for
+ *    all component (bottom) devices.  The stacking function will
+ *    attempt to combine the values and ensure proper alignment.
+ */
+void queue_limits_stack_bdev(struct queue_limits *t, struct block_device *bdev,
+                sector_t offset, const char *pfx)
+{
+        if (blk_stack_limits(t, &bdev_get_queue(bdev)->limits,
+                        get_start_sect(bdev) + offset))
+                pr_notice("%s: Warning: Device %pg is misaligned\n",
+                        pfx, bdev);
+}
+EXPORT_SYMBOL_GPL(queue_limits_stack_bdev);
+
 /**
  * disk_stack_limits - adjust queue limits for stacked drivers
  * @disk: MD/DM gendisk (top)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index dd510ad7ce4b45..285e82723d641f 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -924,6 +924,8 @@ extern void blk_set_queue_depth(struct request_queue *q, unsigned int depth);
 extern void blk_set_stacking_limits(struct queue_limits *lim);
 extern int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 			    sector_t offset);
+void queue_limits_stack_bdev(struct queue_limits *t, struct block_device *bdev,
+		sector_t offset, const char *pfx);
 extern void disk_stack_limits(struct gendisk *disk, struct block_device *bdev,
 			      sector_t offset);
 extern void blk_queue_update_dma_pad(struct request_queue *, unsigned int);
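To illustrate how the two new helpers are meant to be combined (the
pattern the md and dm conversions later in this series follow), here is a
rough sketch of a stacking driver's setup path.  The component array, its
zero data offset and the top-level name are hypothetical placeholders.

#include <linux/blkdev.h>

/*
 * Hypothetical stacking-driver setup: start from the stacking defaults,
 * fold in the limits of every component device with
 * queue_limits_stack_bdev(), then apply the result atomically with
 * queue_limits_set() from the previous patch.
 */
static int example_stack_components(struct request_queue *q,
                struct block_device **parts, int nr_parts, const char *name)
{
        struct queue_limits lim;
        int i;

        blk_set_stacking_limits(&lim);
        for (i = 0; i < nr_parts; i++)
                queue_limits_stack_bdev(&lim, parts[i], 0 /* data offset */,
                                name);
        return queue_limits_set(q, &lim);
}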
From patchwork Fri Feb 23 16:12:41 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13569417
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 3/9] dm: use queue_limits_set
Date: Fri, 23 Feb 2024 17:12:41 +0100
Message-Id: <20240223161247.3998821-4-hch@lst.de>
In-Reply-To: <20240223161247.3998821-1-hch@lst.de>
References: <20240223161247.3998821-1-hch@lst.de>

Use queue_limits_set, which validates the limits and takes care of
updating the readahead settings, instead of directly assigning them to
the queue.  For that, make sure all limits are actually updated before
the call.

Signed-off-by: Christoph Hellwig
Reviewed-by: Mike Snitzer
---
 drivers/md/dm-table.c | 27 ++++++++++++---------------
 1 file changed, 12 insertions(+), 15 deletions(-)

diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 41f1d731ae5ac2..88114719fe187a 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1963,26 +1963,27 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	bool wc = false, fua = false;
 	int r;
 
-	/*
-	 * Copy table's limits to the DM device's request_queue
-	 */
-	q->limits = *limits;
-
 	if (dm_table_supports_nowait(t))
 		blk_queue_flag_set(QUEUE_FLAG_NOWAIT, q);
 	else
 		blk_queue_flag_clear(QUEUE_FLAG_NOWAIT, q);
 
 	if (!dm_table_supports_discards(t)) {
-		q->limits.max_discard_sectors = 0;
-		q->limits.max_hw_discard_sectors = 0;
-		q->limits.discard_granularity = 0;
-		q->limits.discard_alignment = 0;
-		q->limits.discard_misaligned = 0;
+		limits->max_hw_discard_sectors = 0;
+		limits->discard_granularity = 0;
+		limits->discard_alignment = 0;
+		limits->discard_misaligned = 0;
 	}
 
+	if (!dm_table_supports_write_zeroes(t))
+		limits->max_write_zeroes_sectors = 0;
+
 	if (!dm_table_supports_secure_erase(t))
-		q->limits.max_secure_erase_sectors = 0;
+		limits->max_secure_erase_sectors = 0;
+
+	r = queue_limits_set(q, limits);
+	if (r)
+		return r;
 
 	if (dm_table_supports_flush(t, (1UL << QUEUE_FLAG_WC))) {
 		wc = true;
@@ -2007,9 +2008,6 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	else
 		blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
 
-	if (!dm_table_supports_write_zeroes(t))
-		q->limits.max_write_zeroes_sectors = 0;
-
 	dm_table_verify_integrity(t);
 
 	/*
@@ -2047,7 +2045,6 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	}
 
 	dm_update_crypto_profile(q, t);
-	disk_update_readahead(t->md->disk);
 
 	/*
 	 * Check for request-based device is left to
From patchwork Fri Feb 23 16:12:42 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13569418
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 4/9] md: add queue limit helpers
Date: Fri, 23 Feb 2024 17:12:42 +0100
Message-Id: <20240223161247.3998821-5-hch@lst.de>
In-Reply-To: <20240223161247.3998821-1-hch@lst.de>
References: <20240223161247.3998821-1-hch@lst.de>

Add a few helpers that wrap the block queue limits API for use in MD.
Signed-off-by: Christoph Hellwig
---
 drivers/md/md.c | 37 +++++++++++++++++++++++++++++++++++++
 drivers/md/md.h |  3 +++
 2 files changed, 40 insertions(+)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 75266c34b1f99b..23823823f80c6b 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5699,6 +5699,43 @@ static const struct kobj_type md_ktype = {
 
 int mdp_major = 0;
 
+/* stack the limit for all rdevs into lim */
+void mddev_stack_rdev_limits(struct mddev *mddev, struct queue_limits *lim)
+{
+        struct md_rdev *rdev;
+
+        rdev_for_each(rdev, mddev) {
+                queue_limits_stack_bdev(lim, rdev->bdev, rdev->data_offset,
+                                mddev->gendisk->disk_name);
+        }
+}
+EXPORT_SYMBOL_GPL(mddev_stack_rdev_limits);
+
+/* apply the extra stacking limits from a new rdev into mddev */
+int mddev_stack_new_rdev(struct mddev *mddev, struct md_rdev *rdev)
+{
+        struct queue_limits lim = queue_limits_start_update(mddev->queue);
+
+        queue_limits_stack_bdev(&lim, rdev->bdev, rdev->data_offset,
+                        mddev->gendisk->disk_name);
+        return queue_limits_commit_update(mddev->queue, &lim);
+}
+EXPORT_SYMBOL_GPL(mddev_stack_new_rdev);
+
+/* update the optimal I/O size after a reshape */
+void mddev_update_io_opt(struct mddev *mddev, unsigned int nr_stripes)
+{
+        struct queue_limits lim;
+        int ret;
+
+        blk_mq_freeze_queue(mddev->queue);
+        lim = queue_limits_start_update(mddev->queue);
+        lim.io_opt = lim.io_min * nr_stripes;
+        ret = queue_limits_commit_update(mddev->queue, &lim);
+        blk_mq_unfreeze_queue(mddev->queue);
+}
+EXPORT_SYMBOL_GPL(mddev_update_io_opt);
+
 static void mddev_delayed_delete(struct work_struct *ws)
 {
 	struct mddev *mddev = container_of(ws, struct mddev, del_work);
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 8d881cc597992f..25b19614aa3239 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -860,6 +860,9 @@ void md_autostart_arrays(int part);
 int md_set_array_info(struct mddev *mddev, struct mdu_array_info_s *info);
 int md_add_new_disk(struct mddev *mddev, struct mdu_disk_info_s *info);
 int do_md_run(struct mddev *mddev);
+void mddev_stack_rdev_limits(struct mddev *mddev, struct queue_limits *lim);
+int mddev_stack_new_rdev(struct mddev *mddev, struct md_rdev *rdev);
+void mddev_update_io_opt(struct mddev *mddev, unsigned int nr_stripes);
 
 extern const struct block_device_operations md_fops;
From patchwork Fri Feb 23 16:12:43 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13569419
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 5/9] md/raid0: use the atomic queue limit update APIs
Date: Fri, 23 Feb 2024 17:12:43 +0100
Message-Id: <20240223161247.3998821-6-hch@lst.de>
In-Reply-To: <20240223161247.3998821-1-hch@lst.de>
References: <20240223161247.3998821-1-hch@lst.de>

Build the queue limits outside the queue and apply them using
queue_limits_set.  Also remove the bogus ->gendisk and ->queue NULL
checks in the area while touching it.
Signed-off-by: Christoph Hellwig
---
 drivers/md/raid0.c | 35 ++++++++++++++++-------------------
 1 file changed, 16 insertions(+), 19 deletions(-)

diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index c50a7abda744ad..f7d78ee5338bd3 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -381,6 +381,7 @@ static void raid0_free(struct mddev *mddev, void *priv)
 
 static int raid0_run(struct mddev *mddev)
 {
+	struct queue_limits lim;
 	struct r0conf *conf;
 	int ret;
 
@@ -391,29 +392,23 @@ static int raid0_run(struct mddev *mddev)
 	if (md_check_no_bitmap(mddev))
 		return -EINVAL;
 
-	/* if private is not null, we are here after takeover */
-	if (mddev->private == NULL) {
+	/* if conf is not null, we are here after takeover */
+	if (!conf) {
 		ret = create_strip_zones(mddev, &conf);
 		if (ret < 0)
 			return ret;
 		mddev->private = conf;
 	}
-	conf = mddev->private;
-	if (mddev->queue) {
-		struct md_rdev *rdev;
-
-		blk_queue_max_hw_sectors(mddev->queue, mddev->chunk_sectors);
-		blk_queue_max_write_zeroes_sectors(mddev->queue, mddev->chunk_sectors);
-
-		blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);
-		blk_queue_io_opt(mddev->queue,
-				 (mddev->chunk_sectors << 9) * mddev->raid_disks);
-		rdev_for_each(rdev, mddev) {
-			disk_stack_limits(mddev->gendisk, rdev->bdev,
-					  rdev->data_offset << 9);
-		}
-	}
+	blk_set_stacking_limits(&lim);
+	lim.max_hw_sectors = mddev->chunk_sectors;
+	lim.max_write_zeroes_sectors = mddev->chunk_sectors;
+	lim.io_min = mddev->chunk_sectors << 9;
+	lim.io_opt = lim.io_min * mddev->raid_disks;
+	mddev_stack_rdev_limits(mddev, &lim);
+	ret = queue_limits_set(mddev->queue, &lim);
+	if (ret)
+		goto out_free_conf;
 
 	/* calculate array device size */
 	md_set_array_sectors(mddev, raid0_size(mddev, 0, 0));
@@ -426,8 +421,10 @@ static int raid0_run(struct mddev *mddev)
 
 	ret = md_integrity_register(mddev);
 	if (ret)
-		free_conf(mddev, conf);
-
+		goto out_free_conf;
+	return 0;
+out_free_conf:
+	free_conf(mddev, conf);
 	return ret;
 }
From patchwork Fri Feb 23 16:12:44 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13569420
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 6/9] md/raid1: use the atomic queue limit update APIs
Date: Fri, 23 Feb 2024 17:12:44 +0100
Message-Id: <20240223161247.3998821-7-hch@lst.de>
In-Reply-To: <20240223161247.3998821-1-hch@lst.de>
References: <20240223161247.3998821-1-hch@lst.de>

Build the queue limits outside the queue and apply them using
queue_limits_set.  Also remove the bogus ->gendisk and ->queue NULL
checks in the area while touching it.
Signed-off-by: Christoph Hellwig
---
 drivers/md/raid1.c | 24 ++++++++++--------------
 1 file changed, 10 insertions(+), 14 deletions(-)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 286f8b16c7bde7..752ff99736a636 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1791,10 +1791,9 @@ static int raid1_add_disk(struct mddev *mddev, struct md_rdev *rdev)
 	for (mirror = first; mirror <= last; mirror++) {
 		p = conf->mirrors + mirror;
 		if (!p->rdev) {
-			if (mddev->gendisk)
-				disk_stack_limits(mddev->gendisk, rdev->bdev,
-						  rdev->data_offset << 9);
-
+			err = mddev_stack_new_rdev(mddev, rdev);
+			if (err)
+				return err;
 			p->head_position = 0;
 			rdev->raid_disk = mirror;
 			err = 0;
@@ -3089,9 +3088,9 @@ static struct r1conf *setup_conf(struct mddev *mddev)
 static void raid1_free(struct mddev *mddev, void *priv);
 static int raid1_run(struct mddev *mddev)
 {
+	struct queue_limits lim;
 	struct r1conf *conf;
 	int i;
-	struct md_rdev *rdev;
 	int ret;
 
 	if (mddev->level != 1) {
@@ -3118,15 +3117,12 @@ static int raid1_run(struct mddev *mddev)
 	if (IS_ERR(conf))
 		return PTR_ERR(conf);
 
-	if (mddev->queue)
-		blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
-
-	rdev_for_each(rdev, mddev) {
-		if (!mddev->gendisk)
-			continue;
-		disk_stack_limits(mddev->gendisk, rdev->bdev,
-				  rdev->data_offset << 9);
-	}
+	blk_set_stacking_limits(&lim);
+	lim.max_write_zeroes_sectors = 0;
+	mddev_stack_rdev_limits(mddev, &lim);
+	ret = queue_limits_set(mddev->queue, &lim);
+	if (ret)
+		goto abort;
 
 	mddev->degraded = 0;
 	for (i = 0; i < conf->raid_disks; i++)
From patchwork Fri Feb 23 16:12:45 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13569421
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 7/9] md/raid10: use the atomic queue limit update APIs
Date: Fri, 23 Feb 2024 17:12:45 +0100
Message-Id: <20240223161247.3998821-8-hch@lst.de>
In-Reply-To: <20240223161247.3998821-1-hch@lst.de>
References: <20240223161247.3998821-1-hch@lst.de>

Build the queue limits outside the queue and apply them using
queue_limits_set.  Also remove the bogus ->gendisk and ->queue NULL
checks in the area while touching it.

Signed-off-by: Christoph Hellwig
---
 drivers/md/raid10.c | 52 +++++++++++++++++++++------------------------
 1 file changed, 24 insertions(+), 28 deletions(-)
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 7412066ea22c7a..21d0aced9a0725 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -2130,11 +2130,9 @@ static int raid10_add_disk(struct mddev *mddev, struct md_rdev *rdev)
 			repl_slot = mirror;
 			continue;
 		}
-
-		if (mddev->gendisk)
-			disk_stack_limits(mddev->gendisk, rdev->bdev,
-					  rdev->data_offset << 9);
-
+		err = mddev_stack_new_rdev(mddev, rdev);
+		if (err)
+			return err;
 		p->head_position = 0;
 		p->recovery_disabled = mddev->recovery_disabled - 1;
 		rdev->raid_disk = mirror;
@@ -2150,10 +2148,9 @@ static int raid10_add_disk(struct mddev *mddev, struct md_rdev *rdev)
 		clear_bit(In_sync, &rdev->flags);
 		set_bit(Replacement, &rdev->flags);
 		rdev->raid_disk = repl_slot;
-		err = 0;
-		if (mddev->gendisk)
-			disk_stack_limits(mddev->gendisk, rdev->bdev,
-					  rdev->data_offset << 9);
+		err = mddev_stack_new_rdev(mddev, rdev);
+		if (err)
+			return err;
 		conf->fullsync = 1;
 		WRITE_ONCE(p->replacement, rdev);
 	}
@@ -4002,18 +3999,18 @@ static struct r10conf *setup_conf(struct mddev *mddev)
 	return ERR_PTR(err);
 }
 
-static void raid10_set_io_opt(struct r10conf *conf)
+static unsigned int raid10_nr_stripes(struct r10conf *conf)
 {
-	int raid_disks = conf->geo.raid_disks;
+	unsigned int raid_disks = conf->geo.raid_disks;
 
-	if (!(conf->geo.raid_disks % conf->geo.near_copies))
-		raid_disks /= conf->geo.near_copies;
-	blk_queue_io_opt(conf->mddev->queue, (conf->mddev->chunk_sectors << 9) *
-			 raid_disks);
+	if (conf->geo.raid_disks % conf->geo.near_copies)
+		return raid_disks;
+	return raid_disks / conf->geo.near_copies;
 }
 
 static int raid10_run(struct mddev *mddev)
 {
+	struct queue_limits lim;
 	struct r10conf *conf;
 	int i, disk_idx;
 	struct raid10_info *disk;
@@ -4021,6 +4018,7 @@ static int raid10_run(struct mddev *mddev)
 	sector_t size;
 	sector_t min_offset_diff = 0;
 	int first = 1;
+	int ret = -EIO;
 
 	if (mddev->private == NULL) {
 		conf = setup_conf(mddev);
@@ -4047,12 +4045,6 @@ static int raid10_run(struct mddev *mddev)
 		}
 	}
 
-	if (mddev->queue) {
-		blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
-		blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);
-		raid10_set_io_opt(conf);
-	}
-
 	rdev_for_each(rdev, mddev) {
 		long long diff;
 
@@ -4081,14 +4073,19 @@ static int raid10_run(struct mddev *mddev)
 		if (first || diff < min_offset_diff)
 			min_offset_diff = diff;
 
-		if (mddev->gendisk)
-			disk_stack_limits(mddev->gendisk, rdev->bdev,
-					  rdev->data_offset << 9);
-
 		disk->head_position = 0;
 		first = 0;
 	}
 
+	blk_set_stacking_limits(&lim);
+	lim.max_write_zeroes_sectors = 0;
+	lim.io_min = mddev->chunk_sectors << 9;
+	lim.io_opt = lim.io_min * raid10_nr_stripes(conf);
+	mddev_stack_rdev_limits(mddev, &lim);
+	ret = queue_limits_set(mddev->queue, &lim);
+	if (ret)
+		goto out_free_conf;
+
 	/* need to check that every block has at least one working mirror */
 	if (!enough(conf, -1)) {
 		pr_err("md/raid10:%s: not enough operational mirrors.\n",
@@ -4189,7 +4186,7 @@ static int raid10_run(struct mddev *mddev)
 	raid10_free_conf(conf);
 	mddev->private = NULL;
 out:
-	return -EIO;
+	return ret;
 }
 
 static void raid10_free(struct mddev *mddev, void *priv)
@@ -4966,8 +4963,7 @@ static void end_reshape(struct r10conf *conf)
 	conf->reshape_safe = MaxSector;
 	spin_unlock_irq(&conf->device_lock);
 
-	if (conf->mddev->queue)
-		raid10_set_io_opt(conf);
+	mddev_update_io_opt(conf->mddev, raid10_nr_stripes(conf));
 	conf->fullsync = 0;
 }
From patchwork Fri Feb 23 16:12:46 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13569422
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 8/9] md/raid5: use the atomic queue limit update APIs
Date: Fri, 23 Feb 2024 17:12:46 +0100
Message-Id: <20240223161247.3998821-9-hch@lst.de>
In-Reply-To: <20240223161247.3998821-1-hch@lst.de>
References: <20240223161247.3998821-1-hch@lst.de>

Build the queue limits outside the queue and apply them using
queue_limits_set.  Also remove the bogus ->gendisk and ->queue NULL
checks in the area while touching it.

Signed-off-by: Christoph Hellwig
---
 drivers/md/raid5.c | 123 +++++++++++++++++++++------------------------
 1 file changed, 56 insertions(+), 67 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 14f2cf75abbd72..3dd7c05d3ba2ab 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -7682,12 +7682,6 @@ static int only_parity(int raid_disk, int algo, int raid_disks, int max_degraded
 	return 0;
 }
 
-static void raid5_set_io_opt(struct r5conf *conf)
-{
-	blk_queue_io_opt(conf->mddev->queue, (conf->chunk_sectors << 9) *
-			 (conf->raid_disks - conf->max_degraded));
-}
-
 static int raid5_run(struct mddev *mddev)
 {
 	struct r5conf *conf;
@@ -7695,9 +7689,12 @@ static int raid5_run(struct mddev *mddev)
 	struct md_rdev *rdev;
 	struct md_rdev *journal_dev = NULL;
 	sector_t reshape_offset = 0;
+	struct queue_limits lim;
 	int i;
 	long long min_offset_diff = 0;
 	int first = 1;
+	int data_disks, stripe;
+	int ret = -EIO;
 
 	if (mddev->recovery_cp != MaxSector)
 		pr_notice("md/raid:%s: not clean -- starting background reconstruction\n",
@@ -7950,67 +7947,59 @@ static int raid5_run(struct mddev *mddev)
 			mdname(mddev));
 	md_set_array_sectors(mddev, raid5_size(mddev, 0, 0));
 
-	if (mddev->queue) {
-		int chunk_size;
-		/* read-ahead size must cover two whole stripes, which
-		 * is 2 * (datadisks) * chunksize where 'n' is the
-		 * number of raid devices
-		 */
-		int data_disks = conf->previous_raid_disks - conf->max_degraded;
-		int stripe = data_disks *
-			((mddev->chunk_sectors << 9) / PAGE_SIZE);
-
-		chunk_size = mddev->chunk_sectors << 9;
-		blk_queue_io_min(mddev->queue, chunk_size);
-		raid5_set_io_opt(conf);
-		mddev->queue->limits.raid_partial_stripes_expensive = 1;
-		/*
-		 * We can only discard a whole stripe. It doesn't make sense to
-		 * discard data disk but write parity disk
-		 */
-		stripe = stripe * PAGE_SIZE;
-		stripe = roundup_pow_of_two(stripe);
-		mddev->queue->limits.discard_granularity = stripe;
-
-		blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
-
-		rdev_for_each(rdev, mddev) {
-			disk_stack_limits(mddev->gendisk, rdev->bdev,
-					  rdev->data_offset << 9);
-			disk_stack_limits(mddev->gendisk, rdev->bdev,
-					  rdev->new_data_offset << 9);
-		}
+	/*
+	 * The read-ahead size must cover two whole stripes, which is
+	 * 2 * (datadisks) * chunksize where 'n' is the number of raid devices.
+	 */
+	data_disks = conf->previous_raid_disks - conf->max_degraded;
+	/*
+	 * We can only discard a whole stripe. It doesn't make sense to
+	 * discard data disk but write parity disk
+	 */
+	stripe = roundup_pow_of_two(data_disks * (mddev->chunk_sectors << 9));
+
+	blk_set_stacking_limits(&lim);
+	lim.io_min = mddev->chunk_sectors << 9;
+	lim.io_opt = lim.io_min * (conf->raid_disks - conf->max_degraded);
+	lim.raid_partial_stripes_expensive = 1;
+	lim.discard_granularity = stripe;
+	lim.max_write_zeroes_sectors = 0;
+	mddev_stack_rdev_limits(mddev, &lim);
+	rdev_for_each(rdev, mddev) {
+		queue_limits_stack_bdev(&lim, rdev->bdev, rdev->new_data_offset,
+				mddev->gendisk->disk_name);
+	}
 
-	/*
-	 * zeroing is required, otherwise data
-	 * could be lost. Consider a scenario: discard a stripe
-	 * (the stripe could be inconsistent if
-	 * discard_zeroes_data is 0); write one disk of the
-	 * stripe (the stripe could be inconsistent again
-	 * depending on which disks are used to calculate
-	 * parity); the disk is broken; The stripe data of this
-	 * disk is lost.
-	 *
-	 * We only allow DISCARD if the sysadmin has confirmed that
-	 * only safe devices are in use by setting a module parameter.
-	 * A better idea might be to turn DISCARD into WRITE_ZEROES
-	 * requests, as that is required to be safe.
-	 */
-	if (!devices_handle_discard_safely ||
-	    mddev->queue->limits.max_discard_sectors < (stripe >> 9) ||
-	    mddev->queue->limits.discard_granularity < stripe)
-		blk_queue_max_discard_sectors(mddev->queue, 0);
+	/*
+	 * Zeroing is required for discard, otherwise data could be lost.
+	 *
+	 * Consider a scenario: discard a stripe (the stripe could be
+	 * inconsistent if discard_zeroes_data is 0); write one disk of the
+	 * stripe (the stripe could be inconsistent again depending on which
+	 * disks are used to calculate parity); the disk is broken; The stripe
+	 * data of this disk is lost.
+	 *
+	 * We only allow DISCARD if the sysadmin has confirmed that only safe
+	 * devices are in use by setting a module parameter. A better idea
+	 * might be to turn DISCARD into WRITE_ZEROES requests, as that is
+	 * required to be safe.
+	 */
+	if (!devices_handle_discard_safely ||
+	    lim.max_discard_sectors < (stripe >> 9) ||
+	    lim.discard_granularity < stripe)
+		lim.max_hw_discard_sectors = 0;
 
-	/*
-	 * Requests require having a bitmap for each stripe.
-	 * Limit the max sectors based on this.
-	 */
-	blk_queue_max_hw_sectors(mddev->queue,
-		RAID5_MAX_REQ_STRIPES << RAID5_STRIPE_SHIFT(conf));
+	/*
+	 * Requests require having a bitmap for each stripe.
+	 * Limit the max sectors based on this.
+	 */
+	lim.max_hw_sectors = RAID5_MAX_REQ_STRIPES << RAID5_STRIPE_SHIFT(conf);
 
-	/* No restrictions on the number of segments in the request */
-	blk_queue_max_segments(mddev->queue, USHRT_MAX);
-	}
+	/* No restrictions on the number of segments in the request */
+	lim.max_segments = USHRT_MAX;
 
+	ret = queue_limits_set(mddev->queue, &lim);
+	if (ret)
+		goto abort;
 
 	if (log_init(conf, journal_dev, raid5_has_ppl(conf)))
 		goto abort;
@@ -8022,7 +8011,7 @@ static int raid5_run(struct mddev *mddev)
 	free_conf(conf);
 	mddev->private = NULL;
 	pr_warn("md/raid:%s: failed to run raid set.\n", mdname(mddev));
-	return -EIO;
+	return ret;
 }
 
 static void raid5_free(struct mddev *mddev, void *priv)
@@ -8554,8 +8543,8 @@ static void end_reshape(struct r5conf *conf)
 		spin_unlock_irq(&conf->device_lock);
 		wake_up(&conf->wait_for_overlap);
 
-		if (conf->mddev->queue)
-			raid5_set_io_opt(conf);
+		mddev_update_io_opt(conf->mddev,
+				conf->raid_disks - conf->max_degraded);
 	}
 }
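To make the limit arithmetic in the raid5 conversion above concrete,
here is a small worked example for a hypothetical array: six devices in
RAID-6 (so max_degraded is 2 and there are four data disks) with 512 KiB
chunks.  The geometry and the printed values are illustrative only and
are not taken from the patch.

#include <linux/kernel.h>
#include <linux/log2.h>

/* Worked example of the limit math for a hypothetical 6-device RAID-6 */
static void example_raid5_limit_math(void)
{
        unsigned int chunk_sectors = 1024;           /* 512 KiB in sectors */
        unsigned int data_disks = 6 - 2;             /* raid_disks - max_degraded */
        unsigned int io_min = chunk_sectors << 9;    /* 512 KiB chunk size */
        unsigned int io_opt = io_min * data_disks;   /* 2 MiB: one full data stripe */
        /* rounded up to a power of two: 2 MiB discard granularity */
        unsigned int stripe = roundup_pow_of_two(data_disks * (chunk_sectors << 9));

        pr_info("io_min=%u io_opt=%u discard_granularity=%u\n",
                io_min, io_opt, stripe);
}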
From patchwork Fri Feb 23 16:12:47 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13569423
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 9/9] block: remove disk_stack_limits
Date: Fri, 23 Feb 2024 17:12:47 +0100
Message-Id: <20240223161247.3998821-10-hch@lst.de>
In-Reply-To: <20240223161247.3998821-1-hch@lst.de>
References: <20240223161247.3998821-1-hch@lst.de>

disk_stack_limits is unused now, remove it.

Signed-off-by: Christoph Hellwig
---
 block/blk-settings.c   | 24 ------------------------
 include/linux/blkdev.h |  2 --
 2 files changed, 26 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index f14d3a18f9e2f0..299ecc399c0e6f 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -915,30 +915,6 @@ void queue_limits_stack_bdev(struct queue_limits *t, struct block_device *bdev,
 }
 EXPORT_SYMBOL_GPL(queue_limits_stack_bdev);
 
-/**
- * disk_stack_limits - adjust queue limits for stacked drivers
- * @disk:  MD/DM gendisk (top)
- * @bdev:  the underlying block device (bottom)
- * @offset:  offset to beginning of data within component device
- *
- * Description:
- *    Merges the limits for a top level gendisk and a bottom level
- *    block_device.
- */
-void disk_stack_limits(struct gendisk *disk, struct block_device *bdev,
-		       sector_t offset)
-{
-	struct request_queue *t = disk->queue;
-
-	if (blk_stack_limits(&t->limits, &bdev_get_queue(bdev)->limits,
-			get_start_sect(bdev) + (offset >> 9)) < 0)
-		pr_notice("%s: Warning: Device %pg is misaligned\n",
-			disk->disk_name, bdev);
-
-	disk_update_readahead(disk);
-}
-EXPORT_SYMBOL(disk_stack_limits);
-
 /**
  * blk_queue_update_dma_pad - update pad mask
  * @q: the request queue for the device
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 285e82723d641f..75c909865a8b7b 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -926,8 +926,6 @@ extern int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 			    sector_t offset);
 void queue_limits_stack_bdev(struct queue_limits *t, struct block_device *bdev,
 		sector_t offset, const char *pfx);
-extern void disk_stack_limits(struct gendisk *disk, struct block_device *bdev,
-			      sector_t offset);
 extern void blk_queue_update_dma_pad(struct request_queue *, unsigned int);
 extern void blk_queue_segment_boundary(struct request_queue *, unsigned long);
 extern void blk_queue_virt_boundary(struct request_queue *, unsigned long);