Message ID | 20250412073202.3085138-1-yukuai1@huaweicloud.com (mailing list archive) |
---|---|
Headers | show |
From: Yu Kuai <yukuai1@huaweicloud.com>
To: axboe@kernel.dk, song@kernel.org, yukuai3@huawei.com, xni@redhat.com
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, yukuai1@huaweicloud.com, yi.zhang@huawei.com, yangerkun@huawei.com
Subject: [PATCH 0/4] md: fix is_mddev_idle()
Date: Sat, 12 Apr 2025 15:31:58 +0800
Message-Id: <20250412073202.3085138-1-yukuai1@huaweicloud.com>
Series | md: fix is_mddev_idle() | expand |
From: Yu Kuai <yukuai3@huawei.com>

If sync_speed is above speed_min, is_mddev_idle() is called for each sync IO
to check whether the array is idle, and inflight sync_io is limited if the
array is not idle.

However, when running mkfs.ext4 on a large raid5 array while recovery is in
progress, it turns out that sync_speed is already above speed_min while lots
of stripes are used for sync IO, causing a long delay for mkfs.ext4.

The root cause is the following check in is_mddev_idle():

 t1: submit sync IO:      events1 = completed IO - issued sync IO
 t2: submit next sync IO: events2 = completed IO - issued sync IO
 if (events2 - events1 > 64)

As a consequence, the more sync IO is issued, the less likely this condition
becomes true. And once completed normal IO exceeds issued sync IO, the
condition finally becomes true and is_mddev_idle() returns false; however,
last_events is updated at that point, hence is_mddev_idle() can only return
false once in a while.

Fix this problem by changing the check so that the array is considered idle
only if:

1) mddev has no normal IO completed;
2) mddev has no normal IO inflight;
3) if any member disk is a partition, none of the other partitions on that
   disk have IO completed.

A sketch contrasting the old and new checks follows the diffstat below.

Yu Kuai (4):
  block: export part_in_flight()
  md: add a new api sync_io_depth
  md: fix is_mddev_idle()
  md: cleanup accounting for issued sync IO

 block/blk.h               |   1 -
 block/genhd.c             |   1 +
 drivers/md/md.c           | 181 ++++++++++++++++++++++++++------------
 drivers/md/md.h           |  15 +---
 drivers/md/raid1.c        |   3 -
 drivers/md/raid10.c       |   9 --
 drivers/md/raid5.c        |   8 --
 include/linux/blkdev.h    |   1 -
 include/linux/part_stat.h |   1 +
 9 files changed, 130 insertions(+), 90 deletions(-)
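For illustration only, below is a minimal userspace sketch of the arithmetic
described in the cover letter; it is not the actual patch. The type and
function names (sync_check_state, old_is_idle, new_is_idle) are invented for
this example, the numbers are arbitrary, and criterion 3) (the partition
case) is omitted.

/* Sketch of the old vs. new idle heuristics; not md code. */
#include <stdbool.h>
#include <stdio.h>

struct sync_check_state {
	long long total_completed;	/* all IO completed on the disk */
	long long sync_issued;		/* IO issued by md as sync IO */
	long long normal_completed;	/* completed normal (non-sync) IO */
	long long normal_inflight;	/* normal IO currently in flight */
	long long last_events;		/* snapshot for the old heuristic */
	long long last_normal;		/* snapshot for the new criteria */
};

/* Old heuristic: every issued sync IO shrinks 'events', so the delta
 * rarely exceeds 64 and the array keeps looking idle. */
static bool old_is_idle(struct sync_check_state *s)
{
	long long events = s->total_completed - s->sync_issued;
	bool idle = true;

	if (events - s->last_events > 64) {
		idle = false;
		/* last_events is only updated here, so even sustained
		 * normal IO makes this return false just once in a while */
		s->last_events = events;
	}
	return idle;
}

/* New criteria 1) and 2) from the cover letter: no normal IO completed
 * since the last check, and no normal IO in flight. */
static bool new_is_idle(struct sync_check_state *s)
{
	bool idle = s->normal_completed == s->last_normal &&
		    s->normal_inflight == 0;

	s->last_normal = s->normal_completed;
	return idle;
}

int main(void)
{
	/* Heavy recovery plus a little normal IO (e.g. mkfs writes) */
	struct sync_check_state s = {
		.total_completed = 10020,
		.sync_issued = 10000,
		.normal_completed = 30,
		.normal_inflight = 8,
	};

	printf("old check: idle=%d\n", old_is_idle(&s)); /* idle=1, wrong */
	printf("new check: idle=%d\n", new_is_idle(&s)); /* idle=0 */
	return 0;
}

In the kernel, the completed/inflight counts above would come from the
per-disk statistics (e.g. part_stat_read_accum() and part_in_flight()),
which is presumably why the first patch in the series exports
part_in_flight().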