From patchwork Mon Jun 26 08:09:10 2023
X-Patchwork-Submitter: Li Nan
X-Patchwork-Id: 13292449
From: linan666@huaweicloud.com
To: axboe@kernel.dk, linan122@huawei.com, vishal.l.verma@intel.com,
    dan.j.williams@intel.com, ashok_raj@linux.intel.com
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
    yukuai3@huawei.com, yi.zhang@huawei.com, houtao1@huawei.com,
    yangerkun@huawei.com
Subject: [PATCH v4 1/4] block/badblocks: change some members of badblocks to bool
Date: Mon, 26 Jun 2023 16:09:10 +0800
Message-Id: <20230626080913.3493135-2-linan666@huaweicloud.com>
In-Reply-To: <20230626080913.3493135-1-linan666@huaweicloud.com>
References: <20230626080913.3493135-1-linan666@huaweicloud.com>

From: Li Nan

"changed" and "unacked_exist" are used as boolean values. Change their
type to bool. No functional change intended.
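Background for the rest of the series (a user-space sketch, not part of the patch): the members touched here live in struct badblocks alongside bb->page, where each bad range is one u64 packing a start sector, a length capped at BB_MAX_LEN, and an "acknowledged" bit via the BB_MAKE()/BB_OFFSET()/BB_LEN()/BB_ACK() helpers that appear in the diffs below. The masks used here are an assumed re-creation of that layout, not copied from include/linux/badblocks.h:

#include <stdint.h>
#include <stdio.h>

/* Assumed layout: low 9 bits hold (length - 1), the next 54 bits hold the
 * start sector, and the top bit is the "acknowledged" flag. */
#define BB_MAX_LEN	512
#define BB_LEN(x)	(((x) & 0x1ffULL) + 1)
#define BB_OFFSET(x)	(((x) >> 9) & 0x3fffffffffffffULL)
#define BB_ACK(x)	((int)((x) >> 63))
#define BB_MAKE(a, l, ack) \
	(((uint64_t)(a) << 9) | ((uint64_t)(l) - 1) | ((uint64_t)(!!(ack)) << 63))

int main(void)
{
	uint64_t e = BB_MAKE(1536, 512, 1);	/* sectors 1536..2047, acknowledged */

	printf("start=%llu len=%llu ack=%d\n",
	       (unsigned long long)BB_OFFSET(e),
	       (unsigned long long)BB_LEN(e), BB_ACK(e));
	return 0;
}

Each entry fits in eight bytes, which matches the "* 8" used by the memmove() calls in the diffs below.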
Signed-off-by: Li Nan
---
 include/linux/badblocks.h |  9 +++++----
 block/badblocks.c         | 14 +++++++-------
 2 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/include/linux/badblocks.h b/include/linux/badblocks.h
index 2426276b9bd3..2feb143a1856 100644
--- a/include/linux/badblocks.h
+++ b/include/linux/badblocks.h
@@ -27,15 +27,16 @@ struct badblocks {
 	struct device *dev;	/* set by devm_init_badblocks */
 	int count;		/* count of bad blocks */
-	int unacked_exist;	/* there probably are unacknowledged
-				 * bad blocks. This is only cleared
-				 * when a read discovers none
+	bool unacked_exist;	/* there probably are unacknowledged
+				 * bad blocks. This is only cleared
+				 * when a read of unacknowledged bad
+				 * blocks discovers none
 				 */
 	int shift;		/* shift from sectors to block size
 				 * a -ve shift means badblocks are
 				 * disabled.*/
 	u64 *page;		/* badblock list */
-	int changed;
+	bool changed;
 	seqlock_t lock;
 	sector_t sector;
 	sector_t size;		/* in sectors */
 };
diff --git a/block/badblocks.c b/block/badblocks.c
index 3afb550c0f7b..f28e971a4b94 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -141,7 +141,7 @@ static void badblocks_update_acked(struct badblocks *bb)
 	}
 
 	if (!unacked)
-		bb->unacked_exist = 0;
+		bb->unacked_exist = false;
 }
 
 /**
@@ -302,9 +302,9 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 		}
 	}
 
-	bb->changed = 1;
+	bb->changed = true;
 	if (!acknowledged)
-		bb->unacked_exist = 1;
+		bb->unacked_exist = true;
 	else
 		badblocks_update_acked(bb);
 	write_sequnlock_irqrestore(&bb->lock, flags);
@@ -414,7 +414,7 @@ int badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
 	}
 
 	badblocks_update_acked(bb);
-	bb->changed = 1;
+	bb->changed = true;
 out:
 	write_sequnlock_irq(&bb->lock);
 	return rv;
@@ -435,7 +435,7 @@ void ack_all_badblocks(struct badblocks *bb)
 		return;
 	write_seqlock_irq(&bb->lock);
 
-	if (bb->changed == 0 && bb->unacked_exist) {
+	if (!bb->changed && bb->unacked_exist) {
 		u64 *p = bb->page;
 		int i;
 
@@ -447,7 +447,7 @@ void ack_all_badblocks(struct badblocks *bb)
 				p[i] = BB_MAKE(start, len, 1);
 			}
 		}
-		bb->unacked_exist = 0;
+		bb->unacked_exist = false;
 	}
 	write_sequnlock_irq(&bb->lock);
 }
@@ -493,7 +493,7 @@ ssize_t badblocks_show(struct badblocks *bb, char *page, int unack)
 			       length << bb->shift);
 	}
 	if (unack && len == 0)
-		bb->unacked_exist = 0;
+		bb->unacked_exist = false;
 
 	if (read_seqretry(&bb->lock, seq))
 		goto retry;

From patchwork Mon Jun 26 08:09:11 2023
X-Patchwork-Submitter: Li Nan
X-Patchwork-Id: 13292451
From: linan666@huaweicloud.com
To: axboe@kernel.dk, linan122@huawei.com, vishal.l.verma@intel.com,
    dan.j.williams@intel.com, ashok_raj@linux.intel.com
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
    yukuai3@huawei.com, yi.zhang@huawei.com, houtao1@huawei.com,
    yangerkun@huawei.com
Subject: [PATCH v4 2/4] block/badblocks: only set bb->changed/unacked_exist when badblocks changes
Date: Mon, 26 Jun 2023 16:09:11 +0800
Message-Id: <20230626080913.3493135-3-linan666@huaweicloud.com>
In-Reply-To: <20230626080913.3493135-1-linan666@huaweicloud.com>
References: <20230626080913.3493135-1-linan666@huaweicloud.com>

From: Li Nan

In badblocks_set(), bb->changed and bb->unacked_exist are set even when
no badblocks actually change. Only set them when the badblocks table is
modified.
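To make the new behaviour concrete, here is a stand-alone sketch (illustrative names, not the kernel code) of the pattern the patch switches to: the merge/insert paths accumulate a local flag, and the shared flags are only dirtied when the table was really modified, for example they now stay clean when the table is full and the new range cannot be stored at all:

#include <stdbool.h>

struct bb_flags {
	bool changed;		/* table needs to be written back */
	bool unacked_exist;	/* unacknowledged entries may exist */
};

/* Tail of a set operation. 'table_modified' plays the role of the local
 * 'changed' variable added by this patch: it is only true when one of
 * the merge/insert steps actually touched the table. */
static void finish_set(struct bb_flags *bb, bool table_modified,
		       bool acknowledged)
{
	if (!table_modified)
		return;			/* nothing changed: leave flags alone */

	bb->changed = true;
	if (!acknowledged)
		bb->unacked_exist = true;
	/* the real code re-checks acked state here via badblocks_update_acked() */
}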
Fixes: 9e0e252a048b ("badblocks: Add core badblock management code")
Signed-off-by: Li Nan
---
 block/badblocks.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/block/badblocks.c b/block/badblocks.c
index f28e971a4b94..7b1ad364e85c 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -166,6 +166,7 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 	int lo, hi;
 	int rv = 0;
 	unsigned long flags;
+	bool changed = false;
 
 	if (bb->shift < 0)
 		/* badblocks are disabled */
@@ -229,6 +230,7 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 				s = a + BB_MAX_LEN;
 			}
 			sectors = e - s;
+			changed = true;
 		}
 	}
 	if (sectors && hi < bb->count) {
@@ -259,6 +261,7 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 			sectors = e - s;
 			lo = hi;
 			hi++;
+			changed = true;
 		}
 	}
 	if (sectors == 0 && hi < bb->count) {
@@ -277,6 +280,7 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 			memmove(p + hi, p + hi + 1,
 				(bb->count - hi - 1) * 8);
 			bb->count--;
+			changed = true;
 		}
 	}
 	while (sectors) {
@@ -299,14 +303,17 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 			p[hi] = BB_MAKE(s, this_sectors, acknowledged);
 			sectors -= this_sectors;
 			s += this_sectors;
+			changed = true;
 		}
 	}
 
-	bb->changed = true;
-	if (!acknowledged)
-		bb->unacked_exist = true;
-	else
-		badblocks_update_acked(bb);
+	if (changed) {
+		bb->changed = changed;
+		if (!acknowledged)
+			bb->unacked_exist = true;
+		else
+			badblocks_update_acked(bb);
+	}
 	write_sequnlock_irqrestore(&bb->lock, flags);
 
 	return rv;

From patchwork Mon Jun 26 08:09:12 2023
X-Patchwork-Submitter: Li Nan
X-Patchwork-Id: 13292450
From: linan666@huaweicloud.com
To: axboe@kernel.dk, linan122@huawei.com, vishal.l.verma@intel.com,
    dan.j.williams@intel.com, ashok_raj@linux.intel.com
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
    yukuai3@huawei.com, yi.zhang@huawei.com, houtao1@huawei.com,
    yangerkun@huawei.com
Subject: [PATCH v4 3/4] block/badblocks: fix badblocks loss when badblocks combine
Date: Mon, 26 Jun 2023 16:09:12 +0800
Message-Id: <20230626080913.3493135-4-linan666@huaweicloud.com>
In-Reply-To: <20230626080913.3493135-1-linan666@huaweicloud.com>
References: <20230626080913.3493135-1-linan666@huaweicloud.com>
From: Li Nan

Badblocks are lost when they are set as below:

$ echo 1 1 > bad_blocks
$ echo 3 1 > bad_blocks
$ echo 1 4 > bad_blocks
$ cat bad_blocks
1 3

After the fix, in the same scenario, it will be:

$ cat bad_blocks
1 4

In badblocks_set(), when a new range is set, first find the two existing
ranges adjacent to it, named 'lo' and 'hi'. Then try to merge the new
range with 'lo'. If the merge succeeds and the result intersects 'hi',
try to combine 'lo' and 'hi'.

set 1 4

binary-search:
  lo: 1 1      |____|
  hi: 3 1                |____|

merge with lo:
  lo: 1 4      |___________________|
  hi: 3 1                |____|

combine lo and hi:
  result: 1 3  |______________|
                              | -> hi's end
                              |____| -> lost

Currently, the end of the combined range is always hi's end, but it
should be the larger of lo's end and hi's end. Fix it.
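A worked example of the two length computations, using the sectors from the scenario above (user-space arithmetic only; the variable names mirror the kernel code but this is not the kernel code):

#include <stdio.h>

int main(void)
{
	/* state right before the combine step: the new range "1 4" has
	 * already been merged into lo, and 's' points at lo's end */
	unsigned long long lo_start = 1, lo_len = 4;	/* lo: 1 4 */
	unsigned long long hi_start = 3, hi_len = 1;	/* hi: 3 1 */
	unsigned long long s = lo_start + lo_len;	/* == 5 */

	/* old formula: anchored at hi's start, so the combined range can
	 * never extend past hi's end */
	unsigned long long old_len = lo_len + hi_len - (s - hi_start);

	/* fixed formula: anchored at lo's start, ending at whichever of
	 * lo and hi ends later */
	unsigned long long hi_end = hi_start + hi_len;
	unsigned long long new_len = (s > hi_end ? s : hi_end) - lo_start;

	printf("old: %llu %llu\n", lo_start, old_len);	/* 1 3 - sector 4 lost */
	printf("new: %llu %llu\n", lo_start, new_len);	/* 1 4 */
	return 0;
}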
Fixes: 9e0e252a048b ("badblocks: Add core badblock management code")
Signed-off-by: Li Nan
---
 block/badblocks.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/block/badblocks.c b/block/badblocks.c
index 7b1ad364e85c..c1745b76d8f1 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -267,16 +267,14 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 	if (sectors == 0 && hi < bb->count) {
 		/* we might be able to combine lo and hi */
 		/* Note: 's' is at the end of 'lo' */
-		sector_t a = BB_OFFSET(p[hi]);
-		int lolen = BB_LEN(p[lo]);
-		int hilen = BB_LEN(p[hi]);
-		int newlen = lolen + hilen - (s - a);
+		sector_t a = BB_OFFSET(p[lo]);
+		int newlen = max(s, BB_OFFSET(p[hi]) + BB_LEN(p[hi])) - a;
 
-		if (s >= a && newlen < BB_MAX_LEN) {
+		if (s >= BB_OFFSET(p[hi]) && newlen < BB_MAX_LEN) {
 			/* yes, we can combine them */
 			int ack = BB_ACK(p[lo]) && BB_ACK(p[hi]);
 
-			p[lo] = BB_MAKE(BB_OFFSET(p[lo]), newlen, ack);
+			p[lo] = BB_MAKE(a, newlen, ack);
 			memmove(p + hi, p + hi + 1,
 				(bb->count - hi - 1) * 8);
 			bb->count--;

From patchwork Mon Jun 26 08:09:13 2023
X-Patchwork-Submitter: Li Nan
X-Patchwork-Id: 13292452
From: linan666@huaweicloud.com
To: axboe@kernel.dk, linan122@huawei.com, vishal.l.verma@intel.com,
    dan.j.williams@intel.com, ashok_raj@linux.intel.com
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
    yukuai3@huawei.com, yi.zhang@huawei.com, houtao1@huawei.com,
    yangerkun@huawei.com
Subject: [PATCH v4 4/4] block/badblocks: fix the bug of reverse order
Date: Mon, 26 Jun 2023 16:09:13 +0800
Message-Id: <20230626080913.3493135-5-linan666@huaweicloud.com>
In-Reply-To: <20230626080913.3493135-1-linan666@huaweicloud.com>
References: <20230626080913.3493135-1-linan666@huaweicloud.com>
From: Li Nan

Badblocks are kept sorted from small to large, but the order is reversed
if a large area is set at once, as below:

$ echo 0 2048 > bad_blocks
$ cat bad_blocks
1536 512
1024 512
512 512
0 512

Actually, it should be:

$ echo 0 2048 > bad_blocks
$ cat bad_blocks
0 512
512 512
1024 512
1536 512

Keeping 'hi' unchanged while inserting consecutive badblocks chunks is
wrong: each next chunk is greater than p[hi] and should be stored at
p[hi + 1]. Increment 'hi' on each iteration.

(0 512)
 0        512
 |_________|                           |_________|
 p[hi]                                 p[hi]

(512 512) (0 512)               fix
 0        512      1024
 |_________|_________|          ===>   |_________|_________|
 p[hi]                                           p[hi+1]

(1024 512)(512 512) (0 512)
 0        512      1024     1536
 |_________|_________|_________|       |_________|_________|_________|
 p[hi]                                                      p[hi+2]
 ...

Fixes: 9e0e252a048b ("badblocks: Add core badblock management code")
Signed-off-by: Li Nan
---
 block/badblocks.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/block/badblocks.c b/block/badblocks.c
index c1745b76d8f1..b79d37a4bf0e 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -301,6 +301,7 @@ int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 			p[hi] = BB_MAKE(s, this_sectors, acknowledged);
 			sectors -= this_sectors;
 			s += this_sectors;
+			hi++;
 			changed = true;
 		}
 	}
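A small user-space simulation of the insertion loop (not the kernel code; the table handling is heavily simplified) shows the effect of the missing hi++ for "echo 0 2048 > bad_blocks" with BB_MAX_LEN == 512: without the increment every chunk is written to the same slot and the memmove() pushes earlier chunks upward, producing the reversed listing shown above.

#include <stdio.h>
#include <string.h>

#define BB_MAX_LEN 512ULL

struct entry { unsigned long long start, len; };

/* Insert [s, s + sectors) as BB_MAX_LEN-sized chunks at slot 'hi',
 * shifting existing entries up first, loosely mirroring the insertion
 * loop in badblocks_set(). */
static int insert_range(struct entry *p, unsigned long long s,
			unsigned long long sectors, int bump_hi)
{
	int hi = 0, count = 0;	/* empty table: chunks start at slot 0 */

	while (sectors) {
		unsigned long long this_sectors =
			sectors > BB_MAX_LEN ? BB_MAX_LEN : sectors;

		memmove(p + hi + 1, p + hi, (count - hi) * sizeof(*p));
		p[hi].start = s;
		p[hi].len = this_sectors;
		count++;
		s += this_sectors;
		sectors -= this_sectors;
		if (bump_hi)
			hi++;	/* the one-line fix from this patch */
	}
	return count;
}

int main(void)
{
	struct entry p[8];
	int i, j, count;

	for (i = 0; i < 2; i++) {
		count = insert_range(p, 0, 2048, i);
		printf(i ? "with hi++:\n" : "without hi++:\n");
		for (j = 0; j < count; j++)
			printf("%llu %llu\n", p[j].start, p[j].len);
	}
	return 0;
}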