From patchwork Mon Nov 18 11:41:54 2024
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 13878455
From: Yu Kuai
To: song@kernel.org, xni@redhat.com
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
	yukuai3@huawei.com, yukuai1@huaweicloud.com, yi.zhang@huawei.com,
	yangerkun@huawei.com
Subject: [PATCH md-6.13 2/5] md/md-bitmap: remove the last parameter for
 bitmap_ops->endwrite()
Date: Mon, 18 Nov 2024 19:41:54 +0800
Message-Id: <20241118114157.355749-3-yukuai1@huaweicloud.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20241118114157.355749-1-yukuai1@huaweicloud.com>
References: <20241118114157.355749-1-yukuai1@huaweicloud.com>
X-Mailing-List: linux-raid@vger.kernel.org

From: Yu Kuai

The last parameter is useless, because when IO fails for one rdev:

- If badblocks are set and the rdev is not faulty, there is no need to
  mark the bit as NEEDED;
- If the rdev is faulty, then mddev->degraded will be set, and it can be
  used to mark the bit as NEEDED.

Signed-off-by: Yu Kuai
---
 drivers/md/md-bitmap.c   | 19 ++++++++++---------
 drivers/md/md-bitmap.h   |  2 +-
 drivers/md/raid1.c       |  3 +--
 drivers/md/raid10.c      |  3 +--
 drivers/md/raid5-cache.c |  3 +--
 drivers/md/raid5.c       |  9 +++------
 6 files changed, 17 insertions(+), 22 deletions(-)

diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
index 84fb4cc67d5e..b40a84b01085 100644
--- a/drivers/md/md-bitmap.c
+++ b/drivers/md/md-bitmap.c
@@ -1726,7 +1726,7 @@ static int bitmap_startwrite(struct mddev *mddev, sector_t offset,
 }
 
 static void bitmap_endwrite(struct mddev *mddev, sector_t offset,
-			    unsigned long sectors, bool success)
+			    unsigned long sectors)
 {
 	struct bitmap *bitmap = mddev->bitmap;
 
@@ -1745,15 +1745,16 @@ static void bitmap_endwrite(struct mddev *mddev, sector_t offset,
 			return;
 		}
 
-		if (success && !bitmap->mddev->degraded &&
-		    bitmap->events_cleared < bitmap->mddev->events) {
-			bitmap->events_cleared = bitmap->mddev->events;
-			bitmap->need_sync = 1;
-			sysfs_notify_dirent_safe(bitmap->sysfs_can_clear);
-		}
-
-		if (!success && !NEEDED(*bmc))
+		if (!bitmap->mddev->degraded) {
+			if (bitmap->events_cleared < bitmap->mddev->events) {
+				bitmap->events_cleared = bitmap->mddev->events;
+				bitmap->need_sync = 1;
+				sysfs_notify_dirent_safe(
+						bitmap->sysfs_can_clear);
+			}
+		} else if (!NEEDED(*bmc)) {
 			*bmc |= NEEDED_MASK;
+		}
 
 		if (COUNTER(*bmc) == COUNTER_MAX)
 			wake_up(&bitmap->overflow_wait);
diff --git a/drivers/md/md-bitmap.h b/drivers/md/md-bitmap.h
index e87a1f493d3c..31c93019c76b 100644
--- a/drivers/md/md-bitmap.h
+++ b/drivers/md/md-bitmap.h
@@ -92,7 +92,7 @@ struct bitmap_operations {
 	int (*startwrite)(struct mddev *mddev, sector_t offset,
 			  unsigned long sectors);
 	void (*endwrite)(struct mddev *mddev, sector_t offset,
-			 unsigned long sectors, bool success);
+			 unsigned long sectors);
 	bool (*start_sync)(struct mddev *mddev, sector_t offset,
 			   sector_t *blocks, bool degraded);
 	void (*end_sync)(struct mddev *mddev, sector_t offset, sector_t *blocks);
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index e6e1228d9b3a..b9819f9c8ed2 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -423,8 +423,7 @@ static void close_write(struct r1bio *r1_bio)
 	if (test_bit(R1BIO_BehindIO, &r1_bio->state))
 		mddev->bitmap_ops->end_behind_write(mddev);
 	/* clear the bitmap if all writes complete successfully */
-	mddev->bitmap_ops->endwrite(mddev, r1_bio->sector, r1_bio->sectors,
-				    !test_bit(R1BIO_Degraded, &r1_bio->state));
+	mddev->bitmap_ops->endwrite(mddev, r1_bio->sector, r1_bio->sectors);
 	md_write_end(mddev);
 }
 
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 61ac074e821f..efbc0bd92479 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -429,8 +429,7 @@ static void close_write(struct r10bio *r10_bio)
 	struct mddev *mddev = r10_bio->mddev;
 
 	/* clear the bitmap if all writes complete successfully */
-	mddev->bitmap_ops->endwrite(mddev, r10_bio->sector, r10_bio->sectors,
-				    !test_bit(R10BIO_Degraded, &r10_bio->state));
+	mddev->bitmap_ops->endwrite(mddev, r10_bio->sector, r10_bio->sectors);
 	md_write_end(mddev);
 }
 
diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
index 4c7ecdd5c1f3..ba4f9577c737 100644
--- a/drivers/md/raid5-cache.c
+++ b/drivers/md/raid5-cache.c
@@ -314,8 +314,7 @@ void r5c_handle_cached_data_endio(struct r5conf *conf,
 			set_bit(R5_UPTODATE, &sh->dev[i].flags);
 			r5c_return_dev_pending_writes(conf, &sh->dev[i]);
 			conf->mddev->bitmap_ops->endwrite(conf->mddev,
-					sh->sector, RAID5_STRIPE_SECTORS(conf),
-					!test_bit(STRIPE_DEGRADED, &sh->state));
+					sh->sector, RAID5_STRIPE_SECTORS(conf));
 		}
 	}
 }
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 84f3d27e0fa4..2f10cca4b7b1 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -3664,8 +3664,7 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
 		}
 		if (bitmap_end)
 			conf->mddev->bitmap_ops->endwrite(conf->mddev,
-					sh->sector, RAID5_STRIPE_SECTORS(conf),
-					false);
+					sh->sector, RAID5_STRIPE_SECTORS(conf));
 		bitmap_end = 0;
 		/* and fail all 'written' */
 		bi = sh->dev[i].written;
@@ -3711,8 +3710,7 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
 		}
 		if (bitmap_end)
 			conf->mddev->bitmap_ops->endwrite(conf->mddev,
-					sh->sector, RAID5_STRIPE_SECTORS(conf),
-					false);
+					sh->sector, RAID5_STRIPE_SECTORS(conf));
 		/* If we were in the middle of a write the parity block might
 		 * still be locked - so just clear all R5_LOCKED flags
 		 */
@@ -4062,8 +4060,7 @@ static void handle_stripe_clean_event(struct r5conf *conf,
 				wbi = wbi2;
 			}
 			conf->mddev->bitmap_ops->endwrite(conf->mddev,
-					sh->sector, RAID5_STRIPE_SECTORS(conf),
-					!test_bit(STRIPE_DEGRADED, &sh->state));
+					sh->sector, RAID5_STRIPE_SECTORS(conf));
 			if (head_sh->batch_head) {
 				sh = list_first_entry(&sh->batch_list,
 						      struct stripe_head,