From patchwork Tue Mar 5 13:40:35 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13582412
From: Christoph Hellwig
To: Philipp Reisner, Lars Ellenberg, Christoph Böhmwalder, Jens Axboe
Cc: drbd-dev@lists.linbit.com, linux-block@vger.kernel.org
Subject: [PATCH 1/7] drbd: pass the max_hw_sectors limit to blk_alloc_disk
Date: Tue, 5 Mar 2024 06:40:35 -0700
Message-Id: <20240305134041.137006-2-hch@lst.de>
In-Reply-To: <20240305134041.137006-1-hch@lst.de>
Pass a queue_limits structure with the max_hw_sectors limit to blk_alloc_disk
instead of updating the limit on the allocated gendisk.

Signed-off-by: Christoph Hellwig
---
 drivers/block/drbd/drbd_main.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index cea1e537fd56c1..113b441d4d3670 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -2690,6 +2690,14 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
 	int id;
 	int vnr = adm_ctx->volume;
 	enum drbd_ret_code err = ERR_NOMEM;
+	struct queue_limits lim = {
+		/*
+		 * Setting the max_hw_sectors to an odd value of 8kibyte here.
+		 * This triggers a max_bio_size message upon first attach or
+		 * connect.
+		 */
+		.max_hw_sectors = DRBD_MAX_BIO_SIZE_SAFE >> 8,
+	};
 
 	device = minor_to_device(minor);
 	if (device)
@@ -2708,7 +2716,7 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
 
 	drbd_init_set_defaults(device);
 
-	disk = blk_alloc_disk(NULL, NUMA_NO_NODE);
+	disk = blk_alloc_disk(&lim, NUMA_NO_NODE);
 	if (IS_ERR(disk)) {
 		err = PTR_ERR(disk);
 		goto out_no_disk;
@@ -2729,9 +2737,6 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
 	blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, disk->queue);
 	blk_queue_write_cache(disk->queue, true, true);
-	/* Setting the max_hw_sectors to an odd value of 8kibyte here
-	   This triggers a max_bio_size message upon first attach or connect */
-	blk_queue_max_hw_sectors(disk->queue, DRBD_MAX_BIO_SIZE_SAFE >> 8);
 
 	device->md_io.page = alloc_page(GFP_KERNEL);
 	if (!device->md_io.page)
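
Not part of the patch: a minimal sketch of the pattern this series moves DRBD
to, using only identifiers that already appear in the hunks above. Limits that
are known before the disk exists go into an on-stack queue_limits and are
applied as part of allocation, instead of being patched onto the queue
afterwards.

	struct queue_limits lim = {
		/* same intentionally odd 8 KiB value as in the hunk above */
		.max_hw_sectors = DRBD_MAX_BIO_SIZE_SAFE >> 8,
	};
	struct gendisk *disk;

	/* the limits are applied while the gendisk is created */
	disk = blk_alloc_disk(&lim, NUMA_NO_NODE);
	if (IS_ERR(disk))
		return PTR_ERR(disk);
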
From patchwork Tue Mar 5 13:40:36 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13582413
From: Christoph Hellwig
To: Philipp Reisner, Lars Ellenberg, Christoph Böhmwalder, Jens Axboe
Cc: drbd-dev@lists.linbit.com, linux-block@vger.kernel.org
Subject: [PATCH 2/7] drbd: refactor drbd_reconsider_queue_parameters
Date: Tue, 5 Mar 2024 06:40:36 -0700
Message-Id: <20240305134041.137006-3-hch@lst.de>
In-Reply-To: <20240305134041.137006-1-hch@lst.de>

Split out a drbd_max_peer_bio_size helper for the peer I/O size, and condense
the various checks into a nested min3(..., max()) expression instead of using
a lot of local variables.

Signed-off-by: Christoph Hellwig
---
 drivers/block/drbd/drbd_nl.c | 84 +++++++++++++++++++++---------------
 1 file changed, 49 insertions(+), 35 deletions(-)

diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
index 43747a1aae4353..9135001a8e572d 100644
--- a/drivers/block/drbd/drbd_nl.c
+++ b/drivers/block/drbd/drbd_nl.c
@@ -1189,6 +1189,33 @@ static int drbd_check_al_size(struct drbd_device *device, struct disk_conf *dc)
 	return 0;
 }
 
+static unsigned int drbd_max_peer_bio_size(struct drbd_device *device)
+{
+	/*
+	 * We may ignore peer limits if the peer is modern enough. From 8.3.8
+	 * onwards the peer can use multiple BIOs for a single peer_request.
+	 */
+	if (device->state.conn < C_WF_REPORT_PARAMS)
+		return device->peer_max_bio_size;
+
+	if (first_peer_device(device)->connection->agreed_pro_version < 94)
+		return min(device->peer_max_bio_size, DRBD_MAX_SIZE_H80_PACKET);
+
+	/*
+	 * Correct old drbd (up to 8.3.7) if it believes it can do more than
+	 * 32KiB.
+	 */
+	if (first_peer_device(device)->connection->agreed_pro_version == 94)
+		return DRBD_MAX_SIZE_H80_PACKET;
+
+	/*
+	 * drbd 8.3.8 onwards, before 8.4.0
+	 */
+	if (first_peer_device(device)->connection->agreed_pro_version < 100)
+		return DRBD_MAX_BIO_SIZE_P95;
+	return DRBD_MAX_BIO_SIZE;
+}
+
 static void blk_queue_discard_granularity(struct request_queue *q, unsigned int granularity)
 {
 	q->limits.discard_granularity = granularity;
@@ -1303,48 +1330,35 @@ static void drbd_setup_queue_param(struct drbd_device *device, struct drbd_backi
 	fixup_discard_support(device, q);
 }
 
-void drbd_reconsider_queue_parameters(struct drbd_device *device, struct drbd_backing_dev *bdev, struct o_qlim *o)
+void drbd_reconsider_queue_parameters(struct drbd_device *device,
+		struct drbd_backing_dev *bdev, struct o_qlim *o)
 {
-	unsigned int now, new, local, peer;
-
-	now = queue_max_hw_sectors(device->rq_queue) << 9;
-	local = device->local_max_bio_size; /* Eventually last known value, from volatile memory */
-	peer = device->peer_max_bio_size; /* Eventually last known value, from meta data */
+	unsigned int now = queue_max_hw_sectors(device->rq_queue) <<
+		SECTOR_SHIFT;
+	unsigned int new;
 
 	if (bdev) {
-		local = queue_max_hw_sectors(bdev->backing_bdev->bd_disk->queue) << 9;
-		device->local_max_bio_size = local;
-	}
-	local = min(local, DRBD_MAX_BIO_SIZE);
-
-	/* We may ignore peer limits if the peer is modern enough.
-	   Because new from 8.3.8 onwards the peer can use multiple
-	   BIOs for a single peer_request */
-	if (device->state.conn >= C_WF_REPORT_PARAMS) {
-		if (first_peer_device(device)->connection->agreed_pro_version < 94)
-			peer = min(device->peer_max_bio_size, DRBD_MAX_SIZE_H80_PACKET);
-		/* Correct old drbd (up to 8.3.7) if it believes it can do more than 32KiB */
-		else if (first_peer_device(device)->connection->agreed_pro_version == 94)
-			peer = DRBD_MAX_SIZE_H80_PACKET;
-		else if (first_peer_device(device)->connection->agreed_pro_version < 100)
-			peer = DRBD_MAX_BIO_SIZE_P95; /* drbd 8.3.8 onwards, before 8.4.0 */
-		else
-			peer = DRBD_MAX_BIO_SIZE;
+		struct request_queue *b = bdev->backing_bdev->bd_disk->queue;
 
-		/* We may later detach and re-attach on a disconnected Primary.
-		 * Avoid this setting to jump back in that case.
-		 * We want to store what we know the peer DRBD can handle,
-		 * not what the peer IO backend can handle. */
-		if (peer > device->peer_max_bio_size)
-			device->peer_max_bio_size = peer;
+		device->local_max_bio_size =
+			queue_max_hw_sectors(b) << SECTOR_SHIFT;
 	}
-	new = min(local, peer);
 
-	if (device->state.role == R_PRIMARY && new < now)
-		drbd_err(device, "ASSERT FAILED new < now; (%u < %u)\n", new, now);
-
-	if (new != now)
+	/*
+	 * We may later detach and re-attach on a disconnected Primary. Avoid
+	 * decreasing the value in this case.
+	 *
+	 * We want to store what we know the peer DRBD can handle, not what the
+	 * peer IO backend can handle.
+	 */
+	new = min3(DRBD_MAX_BIO_SIZE, device->local_max_bio_size,
+		max(drbd_max_peer_bio_size(device), device->peer_max_bio_size));
+	if (new != now) {
+		if (device->state.role == R_PRIMARY && new < now)
+			drbd_err(device, "ASSERT FAILED new < now; (%u < %u)\n",
+				new, now);
 		drbd_info(device, "max BIO size = %u\n", new);
+	}
 
 	drbd_setup_queue_param(device, bdev, new, o);
 }
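
Not part of the patch: a self-contained sketch (plain C, made-up numbers) of
what the nested min3(..., max(...)) expression computes. The inner max() keeps
the best peer capability known so far from shrinking the result, while min3()
clamps everything to the hard DRBD limit and the local backing device.

	#include <stdio.h>

	static unsigned int min_u(unsigned int a, unsigned int b) { return a < b ? a : b; }
	static unsigned int max_u(unsigned int a, unsigned int b) { return a > b ? a : b; }

	int main(void)
	{
		unsigned int hard_limit  = 1 << 20;	/* stand-in for DRBD_MAX_BIO_SIZE */
		unsigned int local       = 256 << 10;	/* local backing device limit */
		unsigned int peer        = 128 << 10;	/* what the peer reports right now */
		unsigned int stored_peer = 512 << 10;	/* last known peer value */

		/* min3(a, b, c) is min(a, min(b, c)) */
		unsigned int new = min_u(hard_limit, min_u(local, max_u(peer, stored_peer)));

		printf("max BIO size = %u\n", new);	/* prints 262144 */
		return 0;
	}
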
From patchwork Tue Mar 5 13:40:37 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13582414
From: Christoph Hellwig
To: Philipp Reisner, Lars Ellenberg, Christoph Böhmwalder, Jens Axboe
Cc: drbd-dev@lists.linbit.com, linux-block@vger.kernel.org
Subject: [PATCH 3/7] drbd: refactor the backing dev max_segments calculation
Date: Tue, 5 Mar 2024 06:40:37 -0700
Message-Id: <20240305134041.137006-4-hch@lst.de>
In-Reply-To: <20240305134041.137006-1-hch@lst.de>

Factor out a drbd_backing_dev_max_segments helper that checks the backing
device limitation.

Signed-off-by: Christoph Hellwig
---
 drivers/block/drbd/drbd_nl.c | 25 +++++++++++++++++--------
 1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
index 9135001a8e572d..0326b7322ceb48 100644
--- a/drivers/block/drbd/drbd_nl.c
+++ b/drivers/block/drbd/drbd_nl.c
@@ -1295,30 +1295,39 @@ static void fixup_discard_support(struct drbd_device *device, struct request_que
 	}
 }
 
+/* This is the workaround for "bio would need to, but cannot, be split" */
+static unsigned int drbd_backing_dev_max_segments(struct drbd_device *device)
+{
+	unsigned int max_segments;
+
+	rcu_read_lock();
+	max_segments = rcu_dereference(device->ldev->disk_conf)->max_bio_bvecs;
+	rcu_read_unlock();
+
+	if (!max_segments)
+		return BLK_MAX_SEGMENTS;
+	return max_segments;
+}
+
 static void drbd_setup_queue_param(struct drbd_device *device, struct drbd_backing_dev *bdev,
 				   unsigned int max_bio_size, struct o_qlim *o)
 {
 	struct request_queue * const q = device->rq_queue;
 	unsigned int max_hw_sectors = max_bio_size >> 9;
-	unsigned int max_segments = 0;
+	unsigned int max_segments = BLK_MAX_SEGMENTS;
 	struct request_queue *b = NULL;
-	struct disk_conf *dc;
 
 	if (bdev) {
 		b = bdev->backing_bdev->bd_disk->queue;
 
 		max_hw_sectors = min(queue_max_hw_sectors(b), max_bio_size >> 9);
-		rcu_read_lock();
-		dc = rcu_dereference(device->ldev->disk_conf);
-		max_segments = dc->max_bio_bvecs;
-		rcu_read_unlock();
+		max_segments = drbd_backing_dev_max_segments(device);
 
 		blk_set_stacking_limits(&q->limits);
 	}
 
 	blk_queue_max_hw_sectors(q, max_hw_sectors);
-	/* This is the workaround for "bio would need to, but cannot, be split" */
-	blk_queue_max_segments(q, max_segments ?
-			       max_segments : BLK_MAX_SEGMENTS);
+	blk_queue_max_segments(q, max_segments);
 	blk_queue_segment_boundary(q, PAGE_SIZE-1);
 	decide_on_discard_support(device, bdev);
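
Not part of the patch: the caller-side shape this enables (it appears in the
next patch of the series), with a zero max_bio_bvecs in the disk_conf meaning
"not configured" and BLK_MAX_SEGMENTS used as the fallback:

	if (bdev)
		max_segments = drbd_backing_dev_max_segments(device);
	else
		max_segments = BLK_MAX_SEGMENTS;
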
From patchwork Tue Mar 5 13:40:38 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13582415
From: Christoph Hellwig
To: Philipp Reisner, Lars Ellenberg, Christoph Böhmwalder, Jens Axboe
Cc: drbd-dev@lists.linbit.com, linux-block@vger.kernel.org
Subject: [PATCH 4/7] drbd: merge drbd_setup_queue_param into drbd_reconsider_queue_parameters
Date: Tue, 5 Mar 2024 06:40:38 -0700
Message-Id: <20240305134041.137006-5-hch@lst.de>
In-Reply-To: <20240305134041.137006-1-hch@lst.de>

drbd_setup_queue_param is only called by drbd_reconsider_queue_parameters, and
there is no clear boundary of responsibilities between the two, so merge them.

Signed-off-by: Christoph Hellwig
---
 drivers/block/drbd/drbd_nl.c | 56 ++++++++++++++----------------------
 1 file changed, 22 insertions(+), 34 deletions(-)

diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
index 0326b7322ceb48..0f40fdee089971 100644
--- a/drivers/block/drbd/drbd_nl.c
+++ b/drivers/block/drbd/drbd_nl.c
@@ -1309,45 +1309,16 @@ static unsigned int drbd_backing_dev_max_segments(struct drbd_device *device)
 	return max_segments;
 }
 
-static void drbd_setup_queue_param(struct drbd_device *device, struct drbd_backing_dev *bdev,
-				   unsigned int max_bio_size, struct o_qlim *o)
-{
-	struct request_queue * const q = device->rq_queue;
-	unsigned int max_hw_sectors = max_bio_size >> 9;
-	unsigned int max_segments = BLK_MAX_SEGMENTS;
-	struct request_queue *b = NULL;
-
-	if (bdev) {
-		b = bdev->backing_bdev->bd_disk->queue;
-
-		max_hw_sectors = min(queue_max_hw_sectors(b), max_bio_size >> 9);
-		max_segments = drbd_backing_dev_max_segments(device);
-
-		blk_set_stacking_limits(&q->limits);
-	}
-
-	blk_queue_max_hw_sectors(q, max_hw_sectors);
-	blk_queue_max_segments(q, max_segments);
-	blk_queue_segment_boundary(q, PAGE_SIZE-1);
-	decide_on_discard_support(device, bdev);
-
-	if (b) {
-		blk_stack_limits(&q->limits, &b->limits, 0);
-		disk_update_readahead(device->vdisk);
-	}
-	fixup_write_zeroes(device, q);
-	fixup_discard_support(device, q);
-}
-
 void drbd_reconsider_queue_parameters(struct drbd_device *device,
 		struct drbd_backing_dev *bdev, struct o_qlim *o)
 {
-	unsigned int now = queue_max_hw_sectors(device->rq_queue) <<
-		SECTOR_SHIFT;
+	struct request_queue * const q = device->rq_queue;
+	unsigned int now = queue_max_hw_sectors(q) << 9;
+	struct request_queue *b = NULL;
 	unsigned int new;
 
 	if (bdev) {
-		struct request_queue *b = bdev->backing_bdev->bd_disk->queue;
+		b = bdev->backing_bdev->bd_disk->queue;
 
 		device->local_max_bio_size =
 			queue_max_hw_sectors(b) << SECTOR_SHIFT;
@@ -1369,7 +1340,24 @@ void drbd_reconsider_queue_parameters(struct drbd_device *device,
 		drbd_info(device, "max BIO size = %u\n", new);
 	}
 
-	drbd_setup_queue_param(device, bdev, new, o);
+	if (bdev) {
+		blk_set_stacking_limits(&q->limits);
+		blk_queue_max_segments(q,
+				drbd_backing_dev_max_segments(device));
+	} else {
+		blk_queue_max_segments(q, BLK_MAX_SEGMENTS);
+	}
+
+	blk_queue_max_hw_sectors(q, new >> SECTOR_SHIFT);
+	blk_queue_segment_boundary(q, PAGE_SIZE - 1);
+	decide_on_discard_support(device, bdev);
+
+	if (bdev) {
+		blk_stack_limits(&q->limits, &b->limits, 0);
+		disk_update_readahead(device->vdisk);
+	}
+	fixup_write_zeroes(device, q);
+	fixup_discard_support(device, q);
 }
 
 /* Starts the worker thread */
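
Not part of the patch: the resulting order of operations inside
drbd_reconsider_queue_parameters after the merge, condensed from the hunk above
(bdev != NULL case) to make the sequence easier to follow:

	blk_set_stacking_limits(&q->limits);		/* start from stacking defaults */
	blk_queue_max_segments(q, drbd_backing_dev_max_segments(device));
	blk_queue_max_hw_sectors(q, new >> SECTOR_SHIFT);
	blk_queue_segment_boundary(q, PAGE_SIZE - 1);
	decide_on_discard_support(device, bdev);	/* discard limits */
	blk_stack_limits(&q->limits, &b->limits, 0);	/* combine with the backing queue */
	disk_update_readahead(device->vdisk);
	fixup_write_zeroes(device, q);			/* protocol-level overrides come last */
	fixup_discard_support(device, q);
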
From patchwork Tue Mar 5 13:40:39 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13582416
From: Christoph Hellwig
To: Philipp Reisner, Lars Ellenberg, Christoph Böhmwalder, Jens Axboe
Cc: drbd-dev@lists.linbit.com, linux-block@vger.kernel.org
Subject: [PATCH 5/7] drbd: don't set max_write_zeroes_sectors in decide_on_discard_support
Date: Tue, 5 Mar 2024 06:40:39 -0700
Message-Id: <20240305134041.137006-6-hch@lst.de>
In-Reply-To: <20240305134041.137006-1-hch@lst.de>

fixup_write_zeroes always overrides the max_write_zeroes_sectors value a
little further down the callchain, so don't bother to set up a limit in
decide_on_discard_support.
Signed-off-by: Christoph Hellwig
---
 drivers/block/drbd/drbd_nl.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
index 0f40fdee089971..a79b7fe5335de4 100644
--- a/drivers/block/drbd/drbd_nl.c
+++ b/drivers/block/drbd/drbd_nl.c
@@ -1260,7 +1260,6 @@ static void decide_on_discard_support(struct drbd_device *device,
 	blk_queue_discard_granularity(q, 512);
 	max_discard_sectors = drbd_max_discard_sectors(connection);
 	blk_queue_max_discard_sectors(q, max_discard_sectors);
-	blk_queue_max_write_zeroes_sectors(q, max_discard_sectors);
 	return;
 
 not_supported:
From patchwork Tue Mar 5 13:40:40 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13582417
From: Christoph Hellwig
To: Philipp Reisner, Lars Ellenberg, Christoph Böhmwalder, Jens Axboe
Cc: drbd-dev@lists.linbit.com, linux-block@vger.kernel.org
Subject: [PATCH 6/7] drbd: split out a drbd_discard_supported helper
Date: Tue, 5 Mar 2024 06:40:40 -0700
Message-Id: <20240305134041.137006-7-hch@lst.de>
In-Reply-To: <20240305134041.137006-1-hch@lst.de>

Add a helper to check if discard is supported for a given connection /
backing device combination.

Signed-off-by: Christoph Hellwig
---
 drivers/block/drbd/drbd_nl.c | 25 +++++++++++++++++--------
 1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
index a79b7fe5335de4..94ed2b3ea6361d 100644
--- a/drivers/block/drbd/drbd_nl.c
+++ b/drivers/block/drbd/drbd_nl.c
@@ -1231,24 +1231,33 @@ static unsigned int drbd_max_discard_sectors(struct drbd_connection *connection)
 	return AL_EXTENT_SIZE >> 9;
 }
 
-static void decide_on_discard_support(struct drbd_device *device,
+static bool drbd_discard_supported(struct drbd_connection *connection,
 		struct drbd_backing_dev *bdev)
 {
-	struct drbd_connection *connection =
-		first_peer_device(device)->connection;
-	struct request_queue *q = device->rq_queue;
-	unsigned int max_discard_sectors;
-
 	if (bdev && !bdev_max_discard_sectors(bdev->backing_bdev))
-		goto not_supported;
+		return false;
 
 	if (connection->cstate >= C_CONNECTED &&
 	    !(connection->agreed_features & DRBD_FF_TRIM)) {
 		drbd_info(connection,
			"peer DRBD too old, does not support TRIM: disabling discards\n");
-		goto not_supported;
+		return false;
 	}
 
+	return true;
+}
+
+static void decide_on_discard_support(struct drbd_device *device,
+		struct drbd_backing_dev *bdev)
+{
+	struct drbd_connection *connection =
+		first_peer_device(device)->connection;
+	struct request_queue *q = device->rq_queue;
+	unsigned int max_discard_sectors;
+
+	if (!drbd_discard_supported(connection, bdev))
+		goto not_supported;
+
 	/*
 	 * We don't care for the granularity, really.
From patchwork Tue Mar 5 13:40:41 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13582418
From: Christoph Hellwig
To: Philipp Reisner, Lars Ellenberg, Christoph Böhmwalder, Jens Axboe
Cc: drbd-dev@lists.linbit.com, linux-block@vger.kernel.org
Subject: [PATCH 7/7] drbd: atomically update queue limits in drbd_reconsider_queue_parameters
Date: Tue, 5 Mar 2024 06:40:41 -0700
Message-Id: <20240305134041.137006-8-hch@lst.de>
In-Reply-To: <20240305134041.137006-1-hch@lst.de>
Switch drbd_reconsider_queue_parameters to set up the queue parameters in an
on-stack queue_limits structure and apply them atomically. Remove various
helpers that have become so trivial that they can be folded into
drbd_reconsider_queue_parameters.

Signed-off-by: Christoph Hellwig
---
 drivers/block/drbd/drbd_nl.c | 119 ++++++++++++++---------------------
 1 file changed, 46 insertions(+), 73 deletions(-)

diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
index 94ed2b3ea6361d..fbd92803dc1da4 100644
--- a/drivers/block/drbd/drbd_nl.c
+++ b/drivers/block/drbd/drbd_nl.c
@@ -1216,11 +1216,6 @@ static unsigned int drbd_max_peer_bio_size(struct drbd_device *device)
 	return DRBD_MAX_BIO_SIZE;
 }
 
-static void blk_queue_discard_granularity(struct request_queue *q, unsigned int granularity)
-{
-	q->limits.discard_granularity = granularity;
-}
-
 static unsigned int drbd_max_discard_sectors(struct drbd_connection *connection)
 {
 	/* when we introduced REQ_WRITE_SAME support, we also bumped
@@ -1247,62 +1242,6 @@ static bool drbd_discard_supported(struct drbd_connection *connection,
 	return true;
 }
 
-static void decide_on_discard_support(struct drbd_device *device,
-		struct drbd_backing_dev *bdev)
-{
-	struct drbd_connection *connection =
-		first_peer_device(device)->connection;
-	struct request_queue *q = device->rq_queue;
-	unsigned int max_discard_sectors;
-
-	if (!drbd_discard_supported(connection, bdev))
-		goto not_supported;
-
-	/*
-	 * We don't care for the granularity, really.
-	 *
-	 * Stacking limits below should fix it for the local device. Whether or
-	 * not it is a suitable granularity on the remote device is not our
-	 * problem, really. If you care, you need to use devices with similar
-	 * topology on all peers.
-	 */
-	blk_queue_discard_granularity(q, 512);
-	max_discard_sectors = drbd_max_discard_sectors(connection);
-	blk_queue_max_discard_sectors(q, max_discard_sectors);
-	return;
-
-not_supported:
-	blk_queue_discard_granularity(q, 0);
-	blk_queue_max_discard_sectors(q, 0);
-}
-
-static void fixup_write_zeroes(struct drbd_device *device, struct request_queue *q)
-{
-	/* Fixup max_write_zeroes_sectors after blk_stack_limits():
-	 * if we can handle "zeroes" efficiently on the protocol,
-	 * we want to do that, even if our backend does not announce
-	 * max_write_zeroes_sectors itself. */
-	struct drbd_connection *connection = first_peer_device(device)->connection;
-	/* If the peer announces WZEROES support, use it. Otherwise, rather
-	 * send explicit zeroes than rely on some discard-zeroes-data magic.
-	 */
-	if (connection->agreed_features & DRBD_FF_WZEROES)
-		q->limits.max_write_zeroes_sectors = DRBD_MAX_BBIO_SECTORS;
-	else
-		q->limits.max_write_zeroes_sectors = 0;
-}
-
-static void fixup_discard_support(struct drbd_device *device, struct request_queue *q)
-{
-	unsigned int max_discard = device->rq_queue->limits.max_discard_sectors;
-	unsigned int discard_granularity =
-		device->rq_queue->limits.discard_granularity >> SECTOR_SHIFT;
-
-	if (discard_granularity > max_discard) {
-		blk_queue_discard_granularity(q, 0);
-		blk_queue_max_discard_sectors(q, 0);
-	}
-}
-
 /* This is the workaround for "bio would need to, but cannot, be split" */
 static unsigned int drbd_backing_dev_max_segments(struct drbd_device *device)
 {
@@ -1320,8 +1259,11 @@ static unsigned int drbd_backing_dev_max_segments(struct drbd_device *device)
 void drbd_reconsider_queue_parameters(struct drbd_device *device,
 		struct drbd_backing_dev *bdev, struct o_qlim *o)
 {
+	struct drbd_connection *connection =
+		first_peer_device(device)->connection;
 	struct request_queue * const q = device->rq_queue;
 	unsigned int now = queue_max_hw_sectors(q) << 9;
+	struct queue_limits lim;
 	struct request_queue *b = NULL;
 	unsigned int new;
 
@@ -1348,24 +1290,55 @@ void drbd_reconsider_queue_parameters(struct drbd_device *device,
 		drbd_info(device, "max BIO size = %u\n", new);
 	}
 
+	lim = queue_limits_start_update(q);
 	if (bdev) {
-		blk_set_stacking_limits(&q->limits);
-		blk_queue_max_segments(q,
-				drbd_backing_dev_max_segments(device));
+		blk_set_stacking_limits(&lim);
+		lim.max_segments = drbd_backing_dev_max_segments(device);
 	} else {
-		blk_queue_max_segments(q, BLK_MAX_SEGMENTS);
+		lim.max_segments = BLK_MAX_SEGMENTS;
 	}
 
-	blk_queue_max_hw_sectors(q, new >> SECTOR_SHIFT);
-	blk_queue_segment_boundary(q, PAGE_SIZE - 1);
-	decide_on_discard_support(device, bdev);
+	lim.max_hw_sectors = new >> SECTOR_SHIFT;
+	lim.seg_boundary_mask = PAGE_SIZE - 1;
 
-	if (bdev) {
-		blk_stack_limits(&q->limits, &b->limits, 0);
-		disk_update_readahead(device->vdisk);
+	/*
+	 * We don't care for the granularity, really.
+	 *
+	 * Stacking limits below should fix it for the local device. Whether or
+	 * not it is a suitable granularity on the remote device is not our
+	 * problem, really. If you care, you need to use devices with similar
+	 * topology on all peers.
+	 */
+	if (drbd_discard_supported(connection, bdev)) {
+		lim.discard_granularity = 512;
+		lim.max_hw_discard_sectors =
+			drbd_max_discard_sectors(connection);
+	} else {
+		lim.discard_granularity = 0;
+		lim.max_hw_discard_sectors = 0;
 	}
-	fixup_write_zeroes(device, q);
-	fixup_discard_support(device, q);
+
+	if (bdev)
+		blk_stack_limits(&lim, &b->limits, 0);
+
+	/*
+	 * If we can handle "zeroes" efficiently on the protocol, we want to do
+	 * that, even if our backend does not announce max_write_zeroes_sectors
+	 * itself.
+	 */
+	if (connection->agreed_features & DRBD_FF_WZEROES)
+		lim.max_write_zeroes_sectors = DRBD_MAX_BBIO_SECTORS;
+	else
+		lim.max_write_zeroes_sectors = 0;
+
+	if ((lim.discard_granularity >> SECTOR_SHIFT) >
+	    lim.max_hw_discard_sectors) {
+		lim.discard_granularity = 0;
+		lim.max_hw_discard_sectors = 0;
+	}
+
+	if (queue_limits_commit_update(q, &lim))
+		drbd_err(device, "setting new queue limits failed\n");
 }
 
 /* Starts the worker thread */
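
Not part of the patch: the atomic-update pattern the final code uses, reduced
to its skeleton and reusing only identifiers from the hunks above. Everything
between queue_limits_start_update and queue_limits_commit_update only touches
the on-stack copy, so the queue never exposes a half-updated set of limits, and
the commit can fail, which is why the patch logs an error on a non-zero return.

	struct queue_limits lim;

	lim = queue_limits_start_update(q);	/* start from a copy of the current limits */
	lim.max_hw_sectors = new >> SECTOR_SHIFT;
	lim.max_segments = drbd_backing_dev_max_segments(device);
	/* ... discard and write-zeroes limits filled in as in the hunk above ... */
	if (queue_limits_commit_update(q, &lim))
		drbd_err(device, "setting new queue limits failed\n");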