From patchwork Thu Sep 10 14:48:25 2020
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11767973
From: Christoph Hellwig
To: Jens Axboe
Cc: Song Liu, Hans de Goede, Richard Weinberger, Minchan Kim,
    linux-mtd@lists.infradead.org, dm-devel@redhat.com,
    linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
    drbd-dev@lists.linbit.com, linux-raid@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    cgroups@vger.kernel.org
Subject: [PATCH 05/12] md: update the optimal I/O size on reshape
Date: Thu, 10 Sep 2020 16:48:25 +0200
Message-Id: <20200910144833.742260-6-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20200910144833.742260-1-hch@lst.de>
References: <20200910144833.742260-1-hch@lst.de>

The raid5 and raid10 drivers currently update the read-ahead size, but
not the optimal I/O size, on reshape. To prepare for deriving the
read-ahead size from the optimal I/O size, make sure it is updated as
well.
Signed-off-by: Christoph Hellwig
Acked-by: Song Liu
---
 drivers/md/raid10.c | 22 ++++++++++++++--------
 drivers/md/raid5.c  | 10 ++++++++--
 2 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index e8fa327339171c..9956a04ac13bd6 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -3703,10 +3703,20 @@ static struct r10conf *setup_conf(struct mddev *mddev)
 	return ERR_PTR(err);
 }
 
+static void raid10_set_io_opt(struct r10conf *conf)
+{
+	int raid_disks = conf->geo.raid_disks;
+
+	if (!(conf->geo.raid_disks % conf->geo.near_copies))
+		raid_disks /= conf->geo.near_copies;
+	blk_queue_io_opt(conf->mddev->queue, (conf->mddev->chunk_sectors << 9) *
+			 raid_disks);
+}
+
 static int raid10_run(struct mddev *mddev)
 {
 	struct r10conf *conf;
-	int i, disk_idx, chunk_size;
+	int i, disk_idx;
 	struct raid10_info *disk;
 	struct md_rdev *rdev;
 	sector_t size;
@@ -3742,18 +3752,13 @@ static int raid10_run(struct mddev *mddev)
 	mddev->thread = conf->thread;
 	conf->thread = NULL;
 
-	chunk_size = mddev->chunk_sectors << 9;
 	if (mddev->queue) {
 		blk_queue_max_discard_sectors(mddev->queue,
 					      mddev->chunk_sectors);
 		blk_queue_max_write_same_sectors(mddev->queue, 0);
 		blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
-		blk_queue_io_min(mddev->queue, chunk_size);
-		if (conf->geo.raid_disks % conf->geo.near_copies)
-			blk_queue_io_opt(mddev->queue, chunk_size * conf->geo.raid_disks);
-		else
-			blk_queue_io_opt(mddev->queue, chunk_size *
-					 (conf->geo.raid_disks / conf->geo.near_copies));
+		blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);
+		raid10_set_io_opt(conf);
 	}
 
 	rdev_for_each(rdev, mddev) {
@@ -4727,6 +4732,7 @@ static void end_reshape(struct r10conf *conf)
 		stripe /= conf->geo.near_copies;
 		if (conf->mddev->queue->backing_dev_info->ra_pages < 2 * stripe)
 			conf->mddev->queue->backing_dev_info->ra_pages = 2 * stripe;
+		raid10_set_io_opt(conf);
 	}
 	conf->fullsync = 0;
 }
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 225380efd1e24f..9a7d1250894ef1 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -7232,6 +7232,12 @@ static int only_parity(int raid_disk, int algo, int raid_disks, int max_degraded
 	return 0;
 }
 
+static void raid5_set_io_opt(struct r5conf *conf)
+{
+	blk_queue_io_opt(conf->mddev->queue, (conf->chunk_sectors << 9) *
+			 (conf->raid_disks - conf->max_degraded));
+}
+
 static int raid5_run(struct mddev *mddev)
 {
 	struct r5conf *conf;
@@ -7521,8 +7527,7 @@ static int raid5_run(struct mddev *mddev)
 
 		chunk_size = mddev->chunk_sectors << 9;
 		blk_queue_io_min(mddev->queue, chunk_size);
-		blk_queue_io_opt(mddev->queue, chunk_size *
-				 (conf->raid_disks - conf->max_degraded));
+		raid5_set_io_opt(conf);
 		mddev->queue->limits.raid_partial_stripes_expensive = 1;
 		/*
 		 * We can only discard a whole stripe. It doesn't make sense to
@@ -8115,6 +8120,7 @@ static void end_reshape(struct r5conf *conf)
 						/ PAGE_SIZE);
 			if (conf->mddev->queue->backing_dev_info->ra_pages < 2 * stripe)
 				conf->mddev->queue->backing_dev_info->ra_pages = 2 * stripe;
+			raid5_set_io_opt(conf);
 		}
 	}
 }
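
For readers skimming the thread, a rough standalone sketch of the io_opt
arithmetic the two new helpers encode follows. The function names, the
userspace main() and the example geometries are illustrative only; they are
not part of the patch or of the kernel API. The optimal I/O size is one full
stripe, i.e. the chunk size in bytes times the number of data disks.

/*
 * Illustrative userspace sketch only -- not kernel code.  It mirrors the
 * arithmetic of the raid5_set_io_opt()/raid10_set_io_opt() helpers added
 * above.
 */
#include <stdio.h>

/* raid5/6: data disks = raid_disks - max_degraded (1 for raid5, 2 for raid6). */
static unsigned long raid5_io_opt(unsigned int chunk_sectors, int raid_disks,
				  int max_degraded)
{
	return (unsigned long)(chunk_sectors << 9) * (raid_disks - max_degraded);
}

/* raid10: divide by near_copies only when it evenly divides the disk count. */
static unsigned long raid10_io_opt(unsigned int chunk_sectors, int raid_disks,
				   int near_copies)
{
	if (!(raid_disks % near_copies))
		raid_disks /= near_copies;
	return (unsigned long)(chunk_sectors << 9) * raid_disks;
}

int main(void)
{
	/* Hypothetical geometries with 512 KiB chunks (1024 sectors). */
	printf("raid5, 4 disks:      io_opt = %lu bytes\n",
	       raid5_io_opt(1024, 4, 1));
	printf("raid10 -n2, 4 disks: io_opt = %lu bytes\n",
	       raid10_io_opt(1024, 4, 2));
	return 0;
}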