From patchwork Fri Feb 21 08:10:58 2025
X-Patchwork-Submitter: Zheng Qixing
X-Patchwork-Id: 13984988
From: Zheng Qixing
Subject: [PATCH 01/12] badblocks: Fix error shift ops
Date: Fri, 21 Feb 2025 16:10:58 +0800
Message-Id: <20250221081109.734170-2-zhengqixing@huaweicloud.com>
In-Reply-To: <20250221081109.734170-1-zhengqixing@huaweicloud.com>
From: Li Nan

rounddown() and roundup() expect the alignment value itself, but
'bb->shift' is a log2 exponent. Passing 'bb->shift' directly makes
badblocks round to multiples of the exponent rather than to the
intended '1 << bb->shift' boundary, so pass the alignment instead.

Fixes: 3ea3354cb9f0 ("badblocks: improve badblocks_check() for multiple ranges handling")
Signed-off-by: Li Nan
Reviewed-by: Yu Kuai
Acked-by: Coly Li
---
 block/badblocks.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/block/badblocks.c b/block/badblocks.c
index db4ec8b9b2a8..bcee057efc47 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -880,8 +880,8 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 		/* round the start down, and the end up */
 		sector_t next = s + sectors;
 
-		rounddown(s, bb->shift);
-		roundup(next, bb->shift);
+		rounddown(s, 1 << bb->shift);
+		roundup(next, 1 << bb->shift);
 		sectors = next - s;
 	}
 
@@ -1157,8 +1157,8 @@ static int _badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
 		 * isn't than to think a block is not bad when it is.
 		 */
 		target = s + sectors;
-		roundup(s, bb->shift);
-		rounddown(target, bb->shift);
+		roundup(s, 1 << bb->shift);
+		rounddown(target, 1 << bb->shift);
 		sectors = target - s;
 	}
 
@@ -1288,8 +1288,8 @@ static int _badblocks_check(struct badblocks *bb, sector_t s, int sectors,
 		/* round the start down, and the end up */
 		target = s + sectors;
-		rounddown(s, bb->shift);
-		roundup(target, bb->shift);
+		rounddown(s, 1 << bb->shift);
+		roundup(target, 1 << bb->shift);
 		sectors = target - s;
 	}
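Aside, not part of the patch: a minimal userspace sketch of the bug,
with rounddown()/roundup() re-implemented to match the kernel macros
and a made-up shift of 3 (8-sector alignment), shows how the raw
exponent rounds to the wrong boundary:

  #include <stdio.h>

  /* userspace copies of the kernel's rounddown()/roundup() macros */
  #define rounddown(x, y) ((x) - ((x) % (y)))
  #define roundup(x, y)   ((((x) + (y) - 1) / (y)) * (y))

  int main(void)
  {
          unsigned long long s = 13, next = 29;
          int shift = 3;  /* made-up bb->shift: 8-sector alignment */

          /* buggy: rounds to multiples of the exponent 3 */
          printf("buggy: [%llu, %llu)\n",
                 rounddown(s, shift), roundup(next, shift));
          /* fixed: rounds to multiples of 1 << shift == 8 */
          printf("fixed: [%llu, %llu)\n",
                 rounddown(s, 1ULL << shift), roundup(next, 1ULL << shift));
          return 0;
  }

With the fix, the range [13, 29) grows to the full 8-sector-aligned
window [8, 32) instead of the meaningless 3-aligned window [12, 30).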
From patchwork Fri Feb 21 08:10:59 2025
X-Patchwork-Submitter: Zheng Qixing
X-Patchwork-Id: 13984990
From: Zheng Qixing
Subject: [PATCH 02/12] badblocks: factor out a helper try_adjacent_combine
Date: Fri, 21 Feb 2025 16:10:59 +0800
Message-Id: <20250221081109.734170-3-zhengqixing@huaweicloud.com>
In-Reply-To: <20250221081109.734170-1-zhengqixing@huaweicloud.com>

From: Li Nan

Factor out a helper try_adjacent_combine(); it will be used in a later
patch.
Signed-off-by: Li Nan
Reviewed-by: Yu Kuai
---
 block/badblocks.c | 40 ++++++++++++++++++++++++++--------------
 1 file changed, 26 insertions(+), 14 deletions(-)

diff --git a/block/badblocks.c b/block/badblocks.c
index bcee057efc47..f069c93e986d 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -855,6 +855,31 @@ static void badblocks_update_acked(struct badblocks *bb)
 		bb->unacked_exist = 0;
 }
 
+/*
+ * Combine the bad table entry indexed by 'prev' with the following entry
+ * when the two ranges are adjacent, fit in a single entry and share the
+ * same acknowledge state. Return 'true' if a combine happened.
+ */
+static bool try_adjacent_combine(struct badblocks *bb, int prev)
+{
+	u64 *p = bb->page;
+
+	if (prev >= 0 && (prev + 1) < bb->count &&
+	    BB_END(p[prev]) == BB_OFFSET(p[prev + 1]) &&
+	    (BB_LEN(p[prev]) + BB_LEN(p[prev + 1])) <= BB_MAX_LEN &&
+	    BB_ACK(p[prev]) == BB_ACK(p[prev + 1])) {
+		p[prev] = BB_MAKE(BB_OFFSET(p[prev]),
+				  BB_LEN(p[prev]) + BB_LEN(p[prev + 1]),
+				  BB_ACK(p[prev]));
+
+		if ((prev + 2) < bb->count)
+			memmove(p + prev + 1, p + prev + 2,
+				(bb->count - (prev + 2)) * 8);
+		bb->count--;
+		return true;
+	}
+	return false;
+}
+
 /* Do exact work to set bad block range into the bad block table */
 static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 			  int acknowledged)
@@ -1022,20 +1047,7 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 	 * merged. (prev < 0) condition is not handled here,
 	 * because it's already complicated enough.
 	 */
-	if (prev >= 0 &&
-	    (prev + 1) < bb->count &&
-	    BB_END(p[prev]) == BB_OFFSET(p[prev + 1]) &&
-	    (BB_LEN(p[prev]) + BB_LEN(p[prev + 1])) <= BB_MAX_LEN &&
-	    BB_ACK(p[prev]) == BB_ACK(p[prev + 1])) {
-		p[prev] = BB_MAKE(BB_OFFSET(p[prev]),
-				  BB_LEN(p[prev]) + BB_LEN(p[prev + 1]),
-				  BB_ACK(p[prev]));
-
-		if ((prev + 2) < bb->count)
-			memmove(p + prev + 1, p + prev + 2,
-				(bb->count - (prev + 2)) * 8);
-		bb->count--;
-	}
+	try_adjacent_combine(bb, prev);
 
 	if (space_desired && !badblocks_full(bb)) {
 		s = orig_start;
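Aside, not part of the patch: a rough userspace model of the merge
condition the helper checks. The BB_* encoding below mirrors the masks
in include/linux/badblocks.h; the two adjacent acked ranges are made
up for the demonstration:

  #include <stdio.h>
  #include <stdint.h>

  typedef uint64_t u64;

  /* 64-bit entry encoding, as in include/linux/badblocks.h */
  #define BB_OFFSET(x)  (((x) & 0x7FFFFFFFFFFFFE00ULL) >> 9)
  #define BB_LEN(x)     (((x) & 0x1FFULL) + 1)
  #define BB_ACK(x)     (!!((x) & 0x8000000000000000ULL))
  #define BB_END(x)     (BB_OFFSET(x) + BB_LEN(x))
  #define BB_MAKE(a, l, ack) \
          (((u64)(a) << 9) | ((u64)(l) - 1) | ((u64)(!!(ack)) << 63))
  #define BB_MAX_LEN    512

  int main(void)
  {
          u64 a = BB_MAKE(0, 10, 1);   /* acked range [0, 10)  */
          u64 b = BB_MAKE(10, 10, 1);  /* acked range [10, 20) */

          /* the three conditions try_adjacent_combine() tests */
          if (BB_END(a) == BB_OFFSET(b) &&
              BB_LEN(a) + BB_LEN(b) <= BB_MAX_LEN &&
              BB_ACK(a) == BB_ACK(b)) {
                  u64 m = BB_MAKE(BB_OFFSET(a), BB_LEN(a) + BB_LEN(b),
                                  BB_ACK(a));

                  printf("merged: start %llu, len %llu\n",
                         (unsigned long long)BB_OFFSET(m),
                         (unsigned long long)BB_LEN(m));  /* 0, 20 */
          }
          return 0;
  }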
From patchwork Fri Feb 21 08:11:00 2025
X-Patchwork-Submitter: Zheng Qixing
X-Patchwork-Id: 13984991
From: Zheng Qixing
Subject: [PATCH 03/12] badblocks: attempt to merge adjacent badblocks during ack_all_badblocks
Date: Fri, 21 Feb 2025 16:11:00 +0800
Message-Id: <20250221081109.734170-4-zhengqixing@huaweicloud.com>
In-Reply-To: <20250221081109.734170-1-zhengqixing@huaweicloud.com>

From: Li Nan

If an acked and an unacked badblocks range are adjacent, they are not
merged and remain two separate entries. Even after the bad blocks are
written to disk and both become acked, they still stay two independent
entries. This is not ideal, as it wastes the limited space of the
badblocks table. Therefore, during ack_all_badblocks(), attempt to
merge badblocks if they are adjacent.
Fixes: aa511ff8218b ("badblocks: switch to the improved badblock handling code")
Signed-off-by: Li Nan
Reviewed-by: Yu Kuai
Acked-by: Coly Li
---
 block/badblocks.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/block/badblocks.c b/block/badblocks.c
index f069c93e986d..ad8652fbe1c8 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -1491,6 +1491,11 @@ void ack_all_badblocks(struct badblocks *bb)
 				p[i] = BB_MAKE(start, len, 1);
 			}
 		}
+
+		for (i = 0; i < bb->count ; i++)
+			while (try_adjacent_combine(bb, i))
+				;
+
 		bb->unacked_exist = 0;
 	}
 	write_sequnlock_irq(&bb->lock);
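Aside, not part of the patch: a toy model (plain start/len pairs
instead of the packed u64 entries) of why the pass above needs the
inner while loop: each combine can expose a new adjacency, so a whole
run of entries collapses into one. The three ranges are made up:

  #include <stdio.h>

  struct range { long long start; int len; };

  int main(void)
  {
          /* three made-up ranges that become mergeable once all acked */
          struct range t[3] = { {0, 10}, {10, 10}, {20, 10} };
          int count = 3;

          for (int i = 0; i < count; i++) {
                  /* keep combining: a merge may expose a new adjacency */
                  while (i + 1 < count &&
                         t[i].start + t[i].len == t[i + 1].start) {
                          t[i].len += t[i + 1].len;
                          for (int j = i + 1; j + 1 < count; j++)
                                  t[j] = t[j + 1];  /* shift tail left */
                          count--;
                  }
          }
          printf("count=%d, range=[%lld, +%d)\n",
                 count, t[0].start, t[0].len);  /* count=1, [0, +30) */
          return 0;
  }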
From patchwork Fri Feb 21 08:11:01 2025
X-Patchwork-Submitter: Zheng Qixing
X-Patchwork-Id: 13984992
From: Zheng Qixing
Subject: [PATCH 04/12] badblocks: return error directly when setting badblocks exceeds 512
Date: Fri, 21 Feb 2025 16:11:01 +0800
Message-Id: <20250221081109.734170-5-zhengqixing@huaweicloud.com>
In-Reply-To: <20250221081109.734170-1-zhengqixing@huaweicloud.com>

From: Li Nan

The current handling of badblocks settings does a lot of work for
scenarios where the number of badblocks exceeds 512. This makes the
code quite complex and also introduces some issues. Fixing those
issues would not be too complicated, but it would not simplify the
code either. In fact, a disk should not have too many badblocks, and
for a disk that already has 512 of them, attempting to set more makes
little sense; the more appropriate action is to replace the disk.
Therefore, to resolve these issues and simplify the code somewhat,
return an error directly when a set request would push the badblocks
count past 512.

Fixes: aa511ff8218b ("badblocks: switch to the improved badblock handling code")
Signed-off-by: Li Nan
Reviewed-by: Yu Kuai
---
 block/badblocks.c | 121 ++++++++--------------------------------------
 1 file changed, 19 insertions(+), 102 deletions(-)

diff --git a/block/badblocks.c b/block/badblocks.c
index ad8652fbe1c8..1c8b8f65f6df 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -527,51 +527,6 @@ static int prev_badblocks(struct badblocks *bb, struct badblocks_context *bad,
 	return ret;
 }
 
-/*
- * Return 'true' if the range indicated by 'bad' can be backward merged
- * with the bad range (from the bad table) index by 'behind'.
- */
-static bool can_merge_behind(struct badblocks *bb,
-			     struct badblocks_context *bad, int behind)
-{
-	sector_t sectors = bad->len;
-	sector_t s = bad->start;
-	u64 *p = bb->page;
-
-	if ((s < BB_OFFSET(p[behind])) &&
-	    ((s + sectors) >= BB_OFFSET(p[behind])) &&
-	    ((BB_END(p[behind]) - s) <= BB_MAX_LEN) &&
-	    BB_ACK(p[behind]) == bad->ack)
-		return true;
-	return false;
-}
-
-/*
- * Do backward merge for range indicated by 'bad' and the bad range
- * (from the bad table) indexed by 'behind'. The return value is merged
- * sectors from bad->len.
- */
-static int behind_merge(struct badblocks *bb, struct badblocks_context *bad,
-			int behind)
-{
-	sector_t sectors = bad->len;
-	sector_t s = bad->start;
-	u64 *p = bb->page;
-	int merged = 0;
-
-	WARN_ON(s >= BB_OFFSET(p[behind]));
-	WARN_ON((s + sectors) < BB_OFFSET(p[behind]));
-
-	if (s < BB_OFFSET(p[behind])) {
-		merged = BB_OFFSET(p[behind]) - s;
-		p[behind] = BB_MAKE(s, BB_LEN(p[behind]) + merged, bad->ack);
-
-		WARN_ON((BB_LEN(p[behind]) + merged) >= BB_MAX_LEN);
-	}
-
-	return merged;
-}
-
 /*
  * Return 'true' if the range indicated by 'bad' can be forward
  * merged with the bad range (from the bad table) indexed by 'prev'.
@@ -884,11 +839,9 @@ static bool try_adjacent_combine(struct badblocks *bb, int prev)
 static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 			  int acknowledged)
 {
-	int retried = 0, space_desired = 0;
-	int orig_len, len = 0, added = 0;
+	int len = 0, added = 0;
 	struct badblocks_context bad;
 	int prev = -1, hint = -1;
-	sector_t orig_start;
 	unsigned long flags;
 	int rv = 0;
 	u64 *p;
@@ -912,8 +865,6 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 
 	write_seqlock_irqsave(&bb->lock, flags);
 
-	orig_start = s;
-	orig_len = sectors;
 	bad.ack = acknowledged;
 	p = bb->page;
 
@@ -922,6 +873,11 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 	bad.len = sectors;
 	len = 0;
 
+	if (badblocks_full(bb)) {
+		rv = 1;
+		goto out;
+	}
+
 	if (badblocks_empty(bb)) {
 		len = insert_at(bb, 0, &bad);
 		bb->count++;
@@ -933,32 +889,14 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 
 	/* start before all badblocks */
 	if (prev < 0) {
-		if (!badblocks_full(bb)) {
-			/* insert on the first */
-			if (bad.len > (BB_OFFSET(p[0]) - bad.start))
-				bad.len = BB_OFFSET(p[0]) - bad.start;
-			len = insert_at(bb, 0, &bad);
-			bb->count++;
-			added++;
-			hint = 0;
-			goto update_sectors;
-		}
-
-		/* No sapce, try to merge */
-		if (overlap_behind(bb, &bad, 0)) {
-			if (can_merge_behind(bb, &bad, 0)) {
-				len = behind_merge(bb, &bad, 0);
-				added++;
-			} else {
-				len = BB_OFFSET(p[0]) - s;
-				space_desired = 1;
-			}
-			hint = 0;
-			goto update_sectors;
-		}
-
-		/* no table space and give up */
-		goto out;
+		/* insert on the first */
+		if (bad.len > (BB_OFFSET(p[0]) - bad.start))
+			bad.len = BB_OFFSET(p[0]) - bad.start;
+		len = insert_at(bb, 0, &bad);
+		bb->count++;
+		added++;
+		hint = 0;
+		goto update_sectors;
 	}
 
 	/* in case p[prev-1] can be merged with p[prev] */
@@ -978,6 +916,11 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 			int extra = 0;
 
 			if (!can_front_overwrite(bb, prev, &bad, &extra)) {
+				if (extra > 0) {
+					rv = 1;
+					goto out;
+				}
+
 				len = min_t(sector_t,
 					    BB_END(p[prev]) - s, sectors);
 				hint = prev;
@@ -1004,24 +947,6 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 		goto update_sectors;
 	}
 
-	/* if no space in table, still try to merge in the covered range */
-	if (badblocks_full(bb)) {
-		/* skip the cannot-merge range */
-		if (((prev + 1) < bb->count) &&
-		    overlap_behind(bb, &bad, prev + 1) &&
-		    ((s + sectors) >= BB_END(p[prev + 1]))) {
-			len = BB_END(p[prev + 1]) - s;
-			hint = prev + 1;
-			goto update_sectors;
-		}
-
-		/* no retry any more */
-		len = sectors;
-		space_desired = 1;
-		hint = -1;
-		goto update_sectors;
-	}
-
 	/* cannot merge and there is space in bad table */
 	if ((prev + 1) < bb->count &&
 	    overlap_behind(bb, &bad, prev + 1))
@@ -1049,14 +974,6 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 	 */
 	try_adjacent_combine(bb, prev);
 
-	if (space_desired && !badblocks_full(bb)) {
-		s = orig_start;
-		sectors = orig_len;
-		space_desired = 0;
-		if (retried++ < 3)
-			goto re_insert;
-	}
-
 out:
 	if (added) {
 		set_changed(bb);

From patchwork Fri Feb 21 08:11:02 2025
X-Patchwork-Submitter: Zheng Qixing
X-Patchwork-Id: 13984993
From: Zheng Qixing
Subject: [PATCH 05/12] badblocks: return error if any badblock set fails
Date: Fri, 21 Feb 2025 16:11:02 +0800
Message-Id: <20250221081109.734170-6-zhengqixing@huaweicloud.com>
In-Reply-To: <20250221081109.734170-1-zhengqixing@huaweicloud.com>
From: Li Nan

_badblocks_set() returns success if at least one badblock is set
successfully, even if others fail. This can lead to data
inconsistencies in RAID, where a failed badblock set should trigger
the disk to be kicked out to prevent future reads from failed write
areas. _badblocks_set() should return an error if any badblock set
fails.

Instead of relying on 'rv', directly return 'sectors' for clearer
logic: if all badblocks are successfully set, 'sectors' is 0;
otherwise it indicates the number of sectors that have not been set
yet, thus signaling failure.

By the way, this also fixes an issue: when a newly set unacknowledged
badblock is included in an existing acknowledged badblock, the setting
used to return an error:

  echo "0 100" > /sys/block/md0/md/dev-loop1/bad_blocks
  echo "0 100" > /sys/block/md0/md/dev-loop1/unacknowledged_bad_blocks
  -bash: echo: write error: No space left on device

After the fix, it returns success.
Fixes: aa511ff8218b ("badblocks: switch to the improved badblock handling code")
Signed-off-by: Li Nan
Acked-by: Coly Li
---
 block/badblocks.c | 16 ++++------------
 1 file changed, 4 insertions(+), 12 deletions(-)

diff --git a/block/badblocks.c b/block/badblocks.c
index 1c8b8f65f6df..a953d2e9417f 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -843,7 +843,6 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 	struct badblocks_context bad;
 	int prev = -1, hint = -1;
 	unsigned long flags;
-	int rv = 0;
 	u64 *p;
 
 	if (bb->shift < 0)
@@ -873,10 +872,8 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 	bad.len = sectors;
 	len = 0;
 
-	if (badblocks_full(bb)) {
-		rv = 1;
+	if (badblocks_full(bb))
 		goto out;
-	}
 
 	if (badblocks_empty(bb)) {
 		len = insert_at(bb, 0, &bad);
@@ -916,10 +913,8 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 		int extra = 0;
 
 		if (!can_front_overwrite(bb, prev, &bad, &extra)) {
-			if (extra > 0) {
-				rv = 1;
+			if (extra > 0)
 				goto out;
-			}
 
 			len = min_t(sector_t,
 				    BB_END(p[prev]) - s, sectors);
@@ -986,10 +981,7 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 
 	write_sequnlock_irqrestore(&bb->lock, flags);
 
-	if (!added)
-		rv = 1;
-
-	return rv;
+	return sectors;
 }
 
 /*
@@ -1353,7 +1345,7 @@
  *
  * Return:
  *  0: success
- *  1: failed to set badblocks (out of space)
+ *  other: failed to set badblocks (out of space)
  */
 int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 		  int acknowledged)
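Aside, not part of the patch: a small userspace sketch of the new
return contract. The stub below is an assumption standing in for
_badblocks_set(), with a made-up table capacity; the point is that a
nonzero return is the number of sectors left unrecorded, which callers
can treat as failure:

  #include <stdio.h>

  /* stand-in for _badblocks_set(): records what fits, returns leftover */
  static int badblocks_set_stub(int sectors, int table_space)
  {
          int recorded = sectors < table_space ? sectors : table_space;

          return sectors - recorded;  /* 0 == everything was set */
  }

  int main(void)
  {
          int left = badblocks_set_stub(100, 60);  /* made-up numbers */

          if (left)  /* a partial set is now visible as an error */
                  printf("failed to record %d of 100 sectors\n", left);
          return 0;
  }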
From patchwork Fri Feb 21 08:11:03 2025
X-Patchwork-Submitter: Zheng Qixing
X-Patchwork-Id: 13984994
From: Zheng Qixing
Subject: [PATCH 06/12] badblocks: fix the use of MAX_BADBLOCKS
Date: Fri, 21 Feb 2025 16:11:03 +0800
Message-Id: <20250221081109.734170-7-zhengqixing@huaweicloud.com>
In-Reply-To: <20250221081109.734170-1-zhengqixing@huaweicloud.com>

From: Li Nan

The number of badblocks cannot exceed MAX_BADBLOCKS, but it should be
allowed to equal MAX_BADBLOCKS.
Fixes: aa511ff8218b ("badblocks: switch to the improved badblock handling code")
Signed-off-by: Li Nan
Reviewed-by: Zhu Yanjun
Acked-by: Coly Li
Reviewed-by: Yu Kuai
---
 block/badblocks.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/badblocks.c b/block/badblocks.c
index a953d2e9417f..87267bae6836 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -700,7 +700,7 @@ static bool can_front_overwrite(struct badblocks *bb, int prev,
 			*extra = 2;
 	}
 
-	if ((bb->count + (*extra)) >= MAX_BADBLOCKS)
+	if ((bb->count + (*extra)) > MAX_BADBLOCKS)
 		return false;
 
 	return true;
@@ -1135,7 +1135,7 @@ static int _badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
 	if ((BB_OFFSET(p[prev]) < bad.start) &&
 	    (BB_END(p[prev]) > (bad.start + bad.len))) {
 		/* Splitting */
-		if ((bb->count + 1) < MAX_BADBLOCKS) {
+		if ((bb->count + 1) <= MAX_BADBLOCKS) {
 			len = front_splitting_clear(bb, prev, &bad);
 			bb->count += 1;
 			cleared++;
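Aside, not part of the patch: a worked example of the off-by-one.
MAX_BADBLOCKS is 512 with 4 KiB pages (PAGE_SIZE / 8); the count and
extra values are made up. Filling the table to exactly 512 entries
should be allowed, but the old comparison rejected it:

  #include <stdio.h>

  #define MAX_BADBLOCKS 512  /* PAGE_SIZE / 8 with 4 KiB pages */

  int main(void)
  {
          int count = 510, extra = 2;  /* an overwrite adding 2 entries */

          /* old check: rejects a table that would become exactly full */
          printf("old: %s\n",
                 count + extra >= MAX_BADBLOCKS ? "reject" : "allow");
          /* new check: rejects only when capacity would be exceeded */
          printf("new: %s\n",
                 count + extra > MAX_BADBLOCKS ? "reject" : "allow");
          return 0;
  }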
From patchwork Fri Feb 21 08:11:04 2025
X-Patchwork-Submitter: Zheng Qixing
X-Patchwork-Id: 13984995
From: Zheng Qixing
Subject: [PATCH 07/12] badblocks: try can_merge_front before overlap_front
Date: Fri, 21 Feb 2025 16:11:04 +0800
Message-Id: <20250221081109.734170-8-zhengqixing@huaweicloud.com>
In-Reply-To: <20250221081109.734170-1-zhengqixing@huaweicloud.com>

From: Li Nan

Regardless of whether overlap_front() returns true or false,
can_merge_front() is tried first in both branches. Therefore, move the
can_merge_front() check in front of overlap_front() to simplify the
code.
Signed-off-by: Li Nan
Reviewed-by: Yu Kuai
---
 block/badblocks.c | 48 ++++++++++++++++++++++------------------------
 1 file changed, 22 insertions(+), 26 deletions(-)

diff --git a/block/badblocks.c b/block/badblocks.c
index 87267bae6836..bb46bab7e99f 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -905,39 +905,35 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 		goto update_sectors;
 	}
 
+	if (can_merge_front(bb, prev, &bad)) {
+		len = front_merge(bb, prev, &bad);
+		added++;
+		hint = prev;
+		goto update_sectors;
+	}
+
 	if (overlap_front(bb, prev, &bad)) {
-		if (can_merge_front(bb, prev, &bad)) {
-			len = front_merge(bb, prev, &bad);
-			added++;
-		} else {
-			int extra = 0;
+		int extra = 0;
 
-			if (!can_front_overwrite(bb, prev, &bad, &extra)) {
-				if (extra > 0)
-					goto out;
+		if (!can_front_overwrite(bb, prev, &bad, &extra)) {
+			if (extra > 0)
+				goto out;
 
-				len = min_t(sector_t,
-					    BB_END(p[prev]) - s, sectors);
-				hint = prev;
-				goto update_sectors;
-			}
+			len = min_t(sector_t,
+				    BB_END(p[prev]) - s, sectors);
+			hint = prev;
+			goto update_sectors;
+		}
 
-			len = front_overwrite(bb, prev, &bad, extra);
-			added++;
-			bb->count += extra;
+		len = front_overwrite(bb, prev, &bad, extra);
+		added++;
+		bb->count += extra;
 
-			if (can_combine_front(bb, prev, &bad)) {
-				front_combine(bb, prev);
-				bb->count--;
-			}
+		if (can_combine_front(bb, prev, &bad)) {
+			front_combine(bb, prev);
+			bb->count--;
 		}
-		hint = prev;
-		goto update_sectors;
-	}
 
-	if (can_merge_front(bb, prev, &bad)) {
-		len = front_merge(bb, prev, &bad);
-		added++;
 		hint = prev;
 		goto update_sectors;
 	}
From patchwork Fri Feb 21 08:11:05 2025
X-Patchwork-Submitter: Zheng Qixing
X-Patchwork-Id: 13984996
From: Zheng Qixing
Subject: [PATCH 08/12] badblocks: fix merge issue when new badblocks align with prev+1
Date: Fri, 21 Feb 2025 16:11:05 +0800
Message-Id: <20250221081109.734170-9-zhengqixing@huaweicloud.com>
In-Reply-To: <20250221081109.734170-1-zhengqixing@huaweicloud.com>

From: Li Nan

There is a merge issue when adding badblocks as follows:

  echo 0 10 > bad_blocks
  echo 30 10 > bad_blocks
  echo 20 10 > bad_blocks
  cat bad_blocks
  0 10
  20 10    //should be merged with (30 10)
  30 10

In this case, if the new badblocks do not intersect with prev, they
are added by insert_at(). If there is an intersection with prev+1, the
merge is handled in the next re_insert loop. However, when the end of
the new badblocks is exactly equal to the offset of prev+1, no further
re_insert loop occurs, so the two badblocks are never merged.

Fix it by incrementing prev, so the merge happens in the subsequent
code.
Fixes: aa511ff8218b ("badblocks: switch to the improved badblock handling code")
Signed-off-by: Li Nan
Reviewed-by: Yu Kuai
---
 block/badblocks.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/badblocks.c b/block/badblocks.c
index bb46bab7e99f..381f9db423d6 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -892,7 +892,7 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 		len = insert_at(bb, 0, &bad);
 		bb->count++;
 		added++;
-		hint = 0;
+		hint = ++prev;
 		goto update_sectors;
 	}
 
@@ -947,7 +947,7 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 	len = insert_at(bb, prev + 1, &bad);
 	bb->count++;
 	added++;
-	hint = prev + 1;
+	hint = ++prev;
 
 update_sectors:
 	s += len;
From patchwork Fri Feb 21 08:11:06 2025
X-Patchwork-Submitter: Zheng Qixing
X-Patchwork-Id: 13984997
From: Zheng Qixing
Subject: [PATCH 09/12] badblocks: fix missing bad blocks on retry in _badblocks_check()
Date: Fri, 21 Feb 2025 16:11:06 +0800
Message-Id: <20250221081109.734170-10-zhengqixing@huaweicloud.com>
In-Reply-To: <20250221081109.734170-1-zhengqixing@huaweicloud.com>

From: Zheng Qixing

The bad blocks check would miss bad blocks when retrying under
contention, as the checking parameters are not reset. These stale
values from the previous attempt could lead to incorrect scanning in
the subsequent retry.

Move the seqlock out to the outer function and reinitialize the
checking state for each retry. This ensures a clean state for each
check attempt, preventing any missed bad blocks.
Fixes: 3ea3354cb9f0 ("badblocks: improve badblocks_check() for multiple ranges handling")
Signed-off-by: Zheng Qixing
Acked-by: Coly Li
Reviewed-by: Yu Kuai
---
 block/badblocks.c | 50 +++++++++++++++++++++++------------------------
 1 file changed, 24 insertions(+), 26 deletions(-)

diff --git a/block/badblocks.c b/block/badblocks.c
index 381f9db423d6..79d91be468c4 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -1191,31 +1191,12 @@ static int _badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
 static int _badblocks_check(struct badblocks *bb, sector_t s, int sectors,
 			    sector_t *first_bad, int *bad_sectors)
 {
-	int unacked_badblocks, acked_badblocks;
 	int prev = -1, hint = -1, set = 0;
 	struct badblocks_context bad;
-	unsigned int seq;
+	int unacked_badblocks = 0;
+	int acked_badblocks = 0;
+	u64 *p = bb->page;
 	int len, rv;
-	u64 *p;
-
-	WARN_ON(bb->shift < 0 || sectors == 0);
-
-	if (bb->shift > 0) {
-		sector_t target;
-
-		/* round the start down, and the end up */
-		target = s + sectors;
-		rounddown(s, 1 << bb->shift);
-		roundup(target, 1 << bb->shift);
-		sectors = target - s;
-	}
-
-retry:
-	seq = read_seqbegin(&bb->lock);
-
-	p = bb->page;
-	unacked_badblocks = 0;
-	acked_badblocks = 0;
 
 re_check:
 	bad.start = s;
@@ -1281,9 +1262,6 @@ static int _badblocks_check(struct badblocks *bb, sector_t s, int sectors,
 	else
 		rv = 0;
 
-	if (read_seqretry(&bb->lock, seq))
-		goto retry;
-
 	return rv;
 }
 
@@ -1324,7 +1302,27 @@ static int _badblocks_check(struct badblocks *bb, sector_t s, int sectors,
 int badblocks_check(struct badblocks *bb, sector_t s, int sectors,
 		    sector_t *first_bad, int *bad_sectors)
 {
-	return _badblocks_check(bb, s, sectors, first_bad, bad_sectors);
+	unsigned int seq;
+	int rv;
+
+	WARN_ON(bb->shift < 0 || sectors == 0);
+
+	if (bb->shift > 0) {
+		/* round the start down, and the end up */
+		sector_t target = s + sectors;
+
+		rounddown(s, 1 << bb->shift);
+		roundup(target, 1 << bb->shift);
+		sectors = target - s;
+	}
+
+retry:
+	seq = read_seqbegin(&bb->lock);
+	rv = _badblocks_check(bb, s, sectors, first_bad, bad_sectors);
+	if (read_seqretry(&bb->lock, seq))
+		goto retry;
+
+	return rv;
+}
 EXPORT_SYMBOL_GPL(badblocks_check);
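Aside, not part of the patch: the pitfall in generic form, as a
userspace sketch (not kernel code; the scan function and values are
made up). Any state a seqlock read section mutates must be re-derived
on retry; the old code kept the retry label after s, sectors and the
counters had already been advanced, while the fix retries the whole
helper with pristine inputs:

  #include <stdio.h>

  /* stand-in scan that consumes its inputs, like _badblocks_check() */
  static void scan(long long *s, int *sectors)
  {
          *s += *sectors;  /* the walk advances the start... */
          *sectors = 0;    /* ...and consumes the length */
  }

  int main(void)
  {
          const long long s = 100;  /* pristine inputs, made-up values */
          const int sectors = 8;
          int retries = 0;

          do {  /* models the read_seqbegin()/read_seqretry() loop */
                  long long cur_s = s;        /* re-derive the state on */
                  int cur_sectors = sectors;  /* every retry, never reuse */

                  scan(&cur_s, &cur_sectors);
          } while (retries++ < 1);  /* pretend one concurrent writer */

          printf("inputs intact for retry: s=%lld sectors=%d\n",
                 s, sectors);
          return 0;
  }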
From: Zheng Qixing
Subject: [PATCH 10/12] badblocks: return boolean from badblocks_set() and badblocks_clear()
Date: Fri, 21 Feb 2025 16:11:07 +0800
Message-Id: <20250221081109.734170-11-zhengqixing@huaweicloud.com>
In-Reply-To: <20250221081109.734170-1-zhengqixing@huaweicloud.com>
References: <20250221081109.734170-1-zhengqixing@huaweicloud.com>

From: Zheng Qixing

Change the return type of badblocks_set() and badblocks_clear() from
int to bool, indicating success or failure. Specifically:

- _badblocks_set() and _badblocks_clear() now return true on success
  and false on failure.
- All callers of these functions have been updated to handle the new
  boolean return type.
- This improves code clarity and makes the handling of success and
  failure states more consistent.
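
At the call sites the polarity flips from "zero means success" to "true
means success"; a before/after sketch based on the badblocks_store()
hunk below:

	/* before: int return, nonzero meant failure */
	if (badblocks_set(bb, sector, length, !unack))
		return -ENOSPC;

	/* after: bool return, false means failure */
	if (!badblocks_set(bb, sector, length, !unack))
		return -ENOSPC;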
Signed-off-by: Zheng Qixing
Reviewed-by: Yu Kuai
Acked-by: Coly Li
---
 block/badblocks.c             | 37 +++++++++++++++++------------------
 drivers/block/null_blk/main.c | 17 ++++++++--------
 drivers/md/md.c               | 35 +++++++++++++++++----------------
 drivers/nvdimm/badrange.c     |  2 +-
 include/linux/badblocks.h     |  6 +++---
 5 files changed, 49 insertions(+), 48 deletions(-)

diff --git a/block/badblocks.c b/block/badblocks.c
index 79d91be468c4..8f057563488a 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -836,8 +836,8 @@ static bool try_adjacent_combine(struct badblocks *bb, int prev)
 }
 
 /* Do exact work to set bad block range into the bad block table */
-static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
-			  int acknowledged)
+static bool _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
+			   int acknowledged)
 {
 	int len = 0, added = 0;
 	struct badblocks_context bad;
@@ -847,11 +847,11 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 
 	if (bb->shift < 0)
 		/* badblocks are disabled */
-		return 1;
+		return false;
 
 	if (sectors == 0)
 		/* Invalid sectors number */
-		return 1;
+		return false;
 
 	if (bb->shift) {
 		/* round the start down, and the end up */
@@ -977,7 +977,7 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 
 	write_sequnlock_irqrestore(&bb->lock, flags);
 
-	return sectors;
+	return sectors == 0;
 }
 
 /*
@@ -1048,21 +1048,20 @@ static int front_splitting_clear(struct badblocks *bb, int prev,
 }
 
 /* Do the exact work to clear bad block range from the bad block table */
-static int _badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
+static bool _badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
 {
 	struct badblocks_context bad;
 	int prev = -1, hint = -1;
 	int len = 0, cleared = 0;
-	int rv = 0;
 	u64 *p;
 
 	if (bb->shift < 0)
 		/* badblocks are disabled */
-		return 1;
+		return false;
 
 	if (sectors == 0)
 		/* Invalid sectors number */
-		return 1;
+		return false;
 
 	if (bb->shift) {
 		sector_t target;
@@ -1182,9 +1181,9 @@ static int _badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
 	write_sequnlock_irq(&bb->lock);
 
 	if (!cleared)
-		rv = 1;
+		return false;
 
-	return rv;
+	return true;
 }
 
 /* Do the exact work to check bad blocks range from the bad block table */
@@ -1338,11 +1337,11 @@ EXPORT_SYMBOL_GPL(badblocks_check);
 * decide how best to handle it.
 *
 * Return:
- *  0: success
- *  other: failed to set badblocks (out of space)
+ *  true: success
+ *  false: failed to set badblocks (out of space)
 */
-int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
-		  int acknowledged)
+bool badblocks_set(struct badblocks *bb, sector_t s, int sectors,
+		   int acknowledged)
 {
 	return _badblocks_set(bb, s, sectors, acknowledged);
 }
@@ -1359,10 +1358,10 @@ EXPORT_SYMBOL_GPL(badblocks_set);
 * drop the remove request.
 *
 * Return:
- *  0: success
- *  1: failed to clear badblocks
+ *  true: success
+ *  false: failed to clear badblocks
 */
-int badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
+bool badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
 {
 	return _badblocks_clear(bb, s, sectors);
 }
@@ -1484,7 +1483,7 @@ ssize_t badblocks_store(struct badblocks *bb, const char *page, size_t len,
 		return -EINVAL;
 	}
 
-	if (badblocks_set(bb, sector, length, !unack))
+	if (!badblocks_set(bb, sector, length, !unack))
 		return -ENOSPC;
 	else
 		return len;
diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index d94ef37480bd..623db72ad66b 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -559,14 +559,15 @@ static ssize_t nullb_device_badblocks_store(struct config_item *item,
 		goto out;
 	/* enable badblocks */
 	cmpxchg(&t_dev->badblocks.shift, -1, 0);
-	if (buf[0] == '+')
-		ret = badblocks_set(&t_dev->badblocks, start,
-				    end - start + 1, 1);
-	else
-		ret = badblocks_clear(&t_dev->badblocks, start,
-				      end - start + 1);
-	if (ret == 0)
-		ret = count;
+	if (buf[0] == '+') {
+		if (badblocks_set(&t_dev->badblocks, start,
+				  end - start + 1, 1))
+			ret = count;
+	} else {
+		if (badblocks_clear(&t_dev->badblocks, start,
+				    end - start + 1))
+			ret = count;
+	}
 out:
 	kfree(orig);
 	return ret;
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 30b3dbbce2d2..49d826e475cb 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -1748,7 +1748,7 @@ static int super_1_load(struct md_rdev *rdev, struct md_rdev *refdev, int minor_
 			count <<= sb->bblog_shift;
 			if (bb + 1 == 0)
 				break;
-			if (badblocks_set(&rdev->badblocks, sector, count, 1))
+			if (!badblocks_set(&rdev->badblocks, sector, count, 1))
 				return -EINVAL;
 		}
 	} else if (sb->bblog_offset != 0)
@@ -9846,7 +9846,6 @@ int rdev_set_badblocks(struct md_rdev *rdev, sector_t s, int sectors,
 		       int is_new)
 {
 	struct mddev *mddev = rdev->mddev;
-	int rv;
 
 	/*
 	 * Recording new badblocks for faulty rdev will force unnecessary
@@ -9862,33 +9861,35 @@ int rdev_set_badblocks(struct md_rdev *rdev, sector_t s, int sectors,
 		s += rdev->new_data_offset;
 	else
 		s += rdev->data_offset;
-	rv = badblocks_set(&rdev->badblocks, s, sectors, 0);
-	if (rv == 0) {
-		/* Make sure they get written out promptly */
-		if (test_bit(ExternalBbl, &rdev->flags))
-			sysfs_notify_dirent_safe(rdev->sysfs_unack_badblocks);
-		sysfs_notify_dirent_safe(rdev->sysfs_state);
-		set_mask_bits(&mddev->sb_flags, 0,
-			      BIT(MD_SB_CHANGE_CLEAN) | BIT(MD_SB_CHANGE_PENDING));
-		md_wakeup_thread(rdev->mddev->thread);
-		return 1;
-	} else
+
+	if (!badblocks_set(&rdev->badblocks, s, sectors, 0))
 		return 0;
+
+	/* Make sure they get written out promptly */
+	if (test_bit(ExternalBbl, &rdev->flags))
+		sysfs_notify_dirent_safe(rdev->sysfs_unack_badblocks);
+	sysfs_notify_dirent_safe(rdev->sysfs_state);
+	set_mask_bits(&mddev->sb_flags, 0,
+		      BIT(MD_SB_CHANGE_CLEAN) | BIT(MD_SB_CHANGE_PENDING));
+	md_wakeup_thread(rdev->mddev->thread);
+	return 1;
 }
 EXPORT_SYMBOL_GPL(rdev_set_badblocks);
 
 int rdev_clear_badblocks(struct md_rdev *rdev, sector_t s, int sectors,
 			 int is_new)
 {
-	int rv;
 	if (is_new)
 		s += rdev->new_data_offset;
 	else
 		s += rdev->data_offset;
-	rv = badblocks_clear(&rdev->badblocks, s, sectors);
-	if ((rv == 0) && test_bit(ExternalBbl, &rdev->flags))
+
+	if (!badblocks_clear(&rdev->badblocks, s, sectors))
+		return 0;
+
+	if (test_bit(ExternalBbl, &rdev->flags))
 		sysfs_notify_dirent_safe(rdev->sysfs_badblocks);
-	return rv;
+	return 1;
 }
 EXPORT_SYMBOL_GPL(rdev_clear_badblocks);
diff --git a/drivers/nvdimm/badrange.c b/drivers/nvdimm/badrange.c
index a002ea6fdd84..ee478ccde7c6 100644
--- a/drivers/nvdimm/badrange.c
+++ b/drivers/nvdimm/badrange.c
@@ -167,7 +167,7 @@ static void set_badblock(struct badblocks *bb, sector_t s, int num)
 	dev_dbg(bb->dev, "Found a bad range (0x%llx, 0x%llx)\n",
 			(u64) s * 512, (u64) num * 512);
 	/* this isn't an error as the hardware will still throw an exception */
-	if (badblocks_set(bb, s, num, 1))
+	if (!badblocks_set(bb, s, num, 1))
 		dev_info_once(bb->dev, "%s: failed for sector %llx\n",
 				__func__, (u64) s);
 }
diff --git a/include/linux/badblocks.h b/include/linux/badblocks.h
index 670f2dae692f..8764bed9ff16 100644
--- a/include/linux/badblocks.h
+++ b/include/linux/badblocks.h
@@ -50,9 +50,9 @@ struct badblocks_context {
 
 int badblocks_check(struct badblocks *bb, sector_t s, int sectors,
 		    sector_t *first_bad, int *bad_sectors);
-int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
-		  int acknowledged);
-int badblocks_clear(struct badblocks *bb, sector_t s, int sectors);
+bool badblocks_set(struct badblocks *bb, sector_t s, int sectors,
+		   int acknowledged);
+bool badblocks_clear(struct badblocks *bb, sector_t s, int sectors);
 void ack_all_badblocks(struct badblocks *bb);
 ssize_t badblocks_show(struct badblocks *bb, char *page, int unack);
 ssize_t badblocks_store(struct badblocks *bb, const char *page, size_t len,

From patchwork Fri Feb 21 08:11:08 2025
X-Patchwork-Submitter: Zheng Qixing
X-Patchwork-Id: 13984999
From: Zheng Qixing
Subject: [PATCH 11/12] md: improve return types of badblocks handling functions
Date: Fri, 21 Feb 2025 16:11:08 +0800
Message-Id: <20250221081109.734170-12-zhengqixing@huaweicloud.com>
In-Reply-To: <20250221081109.734170-1-zhengqixing@huaweicloud.com>
References: <20250221081109.734170-1-zhengqixing@huaweicloud.com>

From: Zheng Qixing

rdev_set_badblocks() only indicates success or failure, so convert its
return type from int to bool for better semantic clarity.

The return value of rdev_clear_badblocks() is never used by any caller,
so convert it to void and drop the unnecessary return value.

Also update narrow_write_error() in both raid1 and raid10 to use a
boolean return type, matching rdev_set_badblocks().

Signed-off-by: Zheng Qixing
---
 drivers/md/md.c     | 20 +++++++++-----------
 drivers/md/md.h     |  8 ++++----
 drivers/md/raid1.c  |  6 +++---
 drivers/md/raid10.c |  6 +++---
 4 files changed, 19 insertions(+), 21 deletions(-)
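
The resulting caller-side contract, as a sketch (these call sites are
illustrative, not lifted verbatim from the diff):

	/* bool: act only when the range could not be recorded */
	if (!rdev_set_badblocks(rdev, sector, sectors, 0))
		md_error(mddev, rdev);

	/* void: no status left to check; no caller ever used it */
	rdev_clear_badblocks(rdev, sector, sectors, 0);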
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 49d826e475cb..76c437376542 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -9841,9 +9841,9 @@ EXPORT_SYMBOL(md_finish_reshape);
 
 /* Bad block management */
 
-/* Returns 1 on success, 0 on failure */
-int rdev_set_badblocks(struct md_rdev *rdev, sector_t s, int sectors,
-		       int is_new)
+/* Returns true on success, false on failure */
+bool rdev_set_badblocks(struct md_rdev *rdev, sector_t s, int sectors,
+			int is_new)
 {
 	struct mddev *mddev = rdev->mddev;
 
@@ -9855,7 +9855,7 @@ int rdev_set_badblocks(struct md_rdev *rdev, sector_t s, int sectors,
 	 * avoid it.
 	 */
 	if (test_bit(Faulty, &rdev->flags))
-		return 1;
+		return true;
 
 	if (is_new)
 		s += rdev->new_data_offset;
@@ -9863,7 +9863,7 @@ int rdev_set_badblocks(struct md_rdev *rdev, sector_t s, int sectors,
 		s += rdev->data_offset;
 
 	if (!badblocks_set(&rdev->badblocks, s, sectors, 0))
-		return 0;
+		return false;
 
 	/* Make sure they get written out promptly */
 	if (test_bit(ExternalBbl, &rdev->flags))
@@ -9872,24 +9872,22 @@ int rdev_set_badblocks(struct md_rdev *rdev, sector_t s, int sectors,
 	set_mask_bits(&mddev->sb_flags, 0,
 		      BIT(MD_SB_CHANGE_CLEAN) | BIT(MD_SB_CHANGE_PENDING));
 	md_wakeup_thread(rdev->mddev->thread);
-	return 1;
+	return true;
 }
 EXPORT_SYMBOL_GPL(rdev_set_badblocks);
 
-int rdev_clear_badblocks(struct md_rdev *rdev, sector_t s, int sectors,
-			 int is_new)
+void rdev_clear_badblocks(struct md_rdev *rdev, sector_t s, int sectors,
+			  int is_new)
 {
 	if (is_new)
 		s += rdev->new_data_offset;
 	else
 		s += rdev->data_offset;
 
-	if (!badblocks_clear(&rdev->badblocks, s, sectors))
-		return 0;
+	badblocks_clear(&rdev->badblocks, s, sectors);
 
 	if (test_bit(ExternalBbl, &rdev->flags))
 		sysfs_notify_dirent_safe(rdev->sysfs_badblocks);
-	return 1;
 }
 EXPORT_SYMBOL_GPL(rdev_clear_badblocks);
diff --git a/drivers/md/md.h b/drivers/md/md.h
index def808064ad8..923a0ef51efe 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -289,10 +289,10 @@ static inline int rdev_has_badblock(struct md_rdev *rdev, sector_t s,
 	return is_badblock(rdev, s, sectors, &first_bad, &bad_sectors);
 }
 
-extern int rdev_set_badblocks(struct md_rdev *rdev, sector_t s, int sectors,
-			      int is_new);
-extern int rdev_clear_badblocks(struct md_rdev *rdev, sector_t s, int sectors,
-				int is_new);
+extern bool rdev_set_badblocks(struct md_rdev *rdev, sector_t s, int sectors,
+			       int is_new);
+extern void rdev_clear_badblocks(struct md_rdev *rdev, sector_t s, int sectors,
+				 int is_new);
 struct md_cluster_info;
 
 /**
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 9d57a88dbd26..8beb8cccc6af 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -2486,7 +2486,7 @@ static void fix_read_error(struct r1conf *conf, struct r1bio *r1_bio)
 	}
 }
 
-static int narrow_write_error(struct r1bio *r1_bio, int i)
+static bool narrow_write_error(struct r1bio *r1_bio, int i)
 {
 	struct mddev *mddev = r1_bio->mddev;
 	struct r1conf *conf = mddev->private;
@@ -2507,10 +2507,10 @@ static int narrow_write_error(struct r1bio *r1_bio, int i)
 	sector_t sector;
 	int sectors;
 	int sect_to_write = r1_bio->sectors;
-	int ok = 1;
+	bool ok = true;
 
 	if (rdev->badblocks.shift < 0)
-		return 0;
+		return false;
 
 	block_sectors = roundup(1 << rdev->badblocks.shift,
 				bdev_logical_block_size(rdev->bdev) >> 9);
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index efe93b979167..7ed933181712 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -2786,7 +2786,7 @@ static void fix_read_error(struct r10conf *conf, struct mddev *mddev, struct r10
 	}
 }
 
-static int narrow_write_error(struct r10bio *r10_bio, int i)
+static bool narrow_write_error(struct r10bio *r10_bio, int i)
 {
 	struct bio *bio = r10_bio->master_bio;
 	struct mddev *mddev = r10_bio->mddev;
@@ -2807,10 +2807,10 @@ static int narrow_write_error(struct r10bio *r10_bio, int i)
 	sector_t sector;
 	int sectors;
 	int sect_to_write = r10_bio->sectors;
-	int ok = 1;
+	bool ok = true;
 
 	if (rdev->badblocks.shift < 0)
-		return 0;
+		return false;
 
 	block_sectors = roundup(1 << rdev->badblocks.shift,
 				bdev_logical_block_size(rdev->bdev) >> 9);
From patchwork Fri Feb 21 08:11:09 2025
X-Patchwork-Submitter: Zheng Qixing
X-Patchwork-Id: 13985000
From: Zheng Qixing
Subject: [PATCH 12/12] badblocks: use sector_t instead of int to avoid truncation of badblocks length
Date: Fri, 21 Feb 2025 16:11:09 +0800
Message-Id: <20250221081109.734170-13-zhengqixing@huaweicloud.com>
In-Reply-To: <20250221081109.734170-1-zhengqixing@huaweicloud.com>
References: <20250221081109.734170-1-zhengqixing@huaweicloud.com>
From: Zheng Qixing

There is a badblocks-length truncation issue when setting badblocks,
as follows:

  echo "2055 4294967299" > bad_blocks
  cat bad_blocks
  2055 3

Change the 'sectors' argument type from 'int' to 'sector_t'. Replacing
'int' with 'sector_t' (u64) avoids truncation of the badblocks length
for large requests, enables proper handling of larger disk sizes, and
ensures compatibility with 64-bit sector addressing.

Fixes: 9e0e252a048b ("badblocks: Add core badblock management code")
Signed-off-by: Zheng Qixing
Acked-by: Coly Li
Reviewed-by: Yu Kuai
---
 block/badblocks.c             | 20 ++++++++------------
 drivers/block/null_blk/main.c |  2 +-
 drivers/md/md.h               |  6 +++---
 drivers/md/raid1-10.c         |  2 +-
 drivers/md/raid1.c            |  4 ++--
 drivers/md/raid10.c           |  8 ++++----
 drivers/nvdimm/nd.h           |  2 +-
 drivers/nvdimm/pfn_devs.c     |  7 ++++---
 drivers/nvdimm/pmem.c         |  2 +-
 include/linux/badblocks.h     |  8 ++++----
 10 files changed, 29 insertions(+), 32 deletions(-)
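
The reported numbers line up with 32-bit truncation: 4294967299 is
2^32 + 3, so an 'int' parameter keeps only the low 32 bits, i.e. 3. A
small user-space sketch of the effect (illustrative only, not kernel
code; behaviour shown for common 64-bit targets):

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint64_t sectors = 4294967299ULL; /* 2^32 + 3, as echoed above */
		int truncated = (int)sectors;     /* what an 'int' argument keeps */

		/* prints: requested 4294967299, recorded 3 */
		printf("requested %llu, recorded %d\n",
		       (unsigned long long)sectors, truncated);
		return 0;
	}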
diff --git a/block/badblocks.c b/block/badblocks.c
index 8f057563488a..14e3be47d22d 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -836,7 +836,7 @@ static bool try_adjacent_combine(struct badblocks *bb, int prev)
 }
 
 /* Do exact work to set bad block range into the bad block table */
-static bool _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
+static bool _badblocks_set(struct badblocks *bb, sector_t s, sector_t sectors,
 			   int acknowledged)
 {
 	int len = 0, added = 0;
@@ -956,8 +956,6 @@ static bool _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
 	if (sectors > 0)
 		goto re_insert;
 
-	WARN_ON(sectors < 0);
-
 	/*
 	 * Check whether the following already set range can be
 	 * merged. (prev < 0) condition is not handled here,
@@ -1048,7 +1046,7 @@ static int front_splitting_clear(struct badblocks *bb, int prev,
 }
 
 /* Do the exact work to clear bad block range from the bad block table */
-static bool _badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
+static bool _badblocks_clear(struct badblocks *bb, sector_t s, sector_t sectors)
 {
 	struct badblocks_context bad;
 	int prev = -1, hint = -1;
@@ -1171,8 +1169,6 @@ static bool _badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
 	if (sectors > 0)
 		goto re_clear;
 
-	WARN_ON(sectors < 0);
-
 	if (cleared) {
 		badblocks_update_acked(bb);
 		set_changed(bb);
@@ -1187,8 +1183,8 @@ static bool _badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
 }
 
 /* Do the exact work to check bad blocks range from the bad block table */
-static int _badblocks_check(struct badblocks *bb, sector_t s, int sectors,
-			    sector_t *first_bad, int *bad_sectors)
+static int _badblocks_check(struct badblocks *bb, sector_t s, sector_t sectors,
+			    sector_t *first_bad, sector_t *bad_sectors)
 {
 	int prev = -1, hint = -1, set = 0;
 	struct badblocks_context bad;
@@ -1298,8 +1294,8 @@ static int _badblocks_check(struct badblocks *bb, sector_t s, int sectors,
 * -1: there are bad blocks which have not yet been acknowledged in metadata.
 * plus the start/length of the first bad section we overlap.
 */
-int badblocks_check(struct badblocks *bb, sector_t s, int sectors,
-		    sector_t *first_bad, int *bad_sectors)
+int badblocks_check(struct badblocks *bb, sector_t s, sector_t sectors,
+		    sector_t *first_bad, sector_t *bad_sectors)
 {
 	unsigned int seq;
 	int rv;
@@ -1340,7 +1336,7 @@ EXPORT_SYMBOL_GPL(badblocks_check);
 *  true: success
 *  false: failed to set badblocks (out of space)
 */
-bool badblocks_set(struct badblocks *bb, sector_t s, int sectors,
+bool badblocks_set(struct badblocks *bb, sector_t s, sector_t sectors,
 		   int acknowledged)
 {
 	return _badblocks_set(bb, s, sectors, acknowledged);
@@ -1361,7 +1357,7 @@ EXPORT_SYMBOL_GPL(badblocks_set);
 *  true: success
 *  false: failed to clear badblocks
 */
-bool badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
+bool badblocks_clear(struct badblocks *bb, sector_t s, sector_t sectors)
 {
 	return _badblocks_clear(bb, s, sectors);
 }
diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 623db72ad66b..6e7d80b6e92b 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1302,7 +1302,7 @@ static inline blk_status_t null_handle_badblocks(struct nullb_cmd *cmd,
 {
 	struct badblocks *bb = &cmd->nq->dev->badblocks;
 	sector_t first_bad;
-	int bad_sectors;
+	sector_t bad_sectors;
 
 	if (badblocks_check(bb, sector, nr_sectors, &first_bad, &bad_sectors))
 		return BLK_STS_IOERR;
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 923a0ef51efe..6edc0f71b7d4 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -266,8 +266,8 @@ enum flag_bits {
 	Nonrot,			/* non-rotational device (SSD) */
 };
 
-static inline int is_badblock(struct md_rdev *rdev, sector_t s, int sectors,
-			      sector_t *first_bad, int *bad_sectors)
+static inline int is_badblock(struct md_rdev *rdev, sector_t s, sector_t sectors,
+			      sector_t *first_bad, sector_t *bad_sectors)
 {
 	if (unlikely(rdev->badblocks.count)) {
 		int rv = badblocks_check(&rdev->badblocks, rdev->data_offset + s,
@@ -284,7 +284,7 @@ static inline int rdev_has_badblock(struct md_rdev *rdev, sector_t s,
 				    int sectors)
 {
 	sector_t first_bad;
-	int bad_sectors;
+	sector_t bad_sectors;
 
 	return is_badblock(rdev, s, sectors, &first_bad, &bad_sectors);
 }
diff --git a/drivers/md/raid1-10.c b/drivers/md/raid1-10.c
index 4378d3250bd7..62b980b12f93 100644
--- a/drivers/md/raid1-10.c
+++ b/drivers/md/raid1-10.c
@@ -247,7 +247,7 @@ static inline int raid1_check_read_range(struct md_rdev *rdev,
 				sector_t this_sector, int *len)
 {
 	sector_t first_bad;
-	int bad_sectors;
+	sector_t bad_sectors;
 
 	/* no bad block overlap */
 	if (!is_badblock(rdev, this_sector, *len, &first_bad, &bad_sectors))
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 8beb8cccc6af..0b2839105857 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1537,7 +1537,7 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
 			atomic_inc(&rdev->nr_pending);
 		if (test_bit(WriteErrorSeen, &rdev->flags)) {
 			sector_t first_bad;
-			int bad_sectors;
+			sector_t bad_sectors;
 			int is_bad;
 
 			is_bad = is_badblock(rdev, r1_bio->sector, max_sectors,
@@ -2886,7 +2886,7 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
 			} else {
 				/* may need to read from here */
 				sector_t first_bad = MaxSector;
-				int bad_sectors;
+				sector_t bad_sectors;
 
 				if (is_badblock(rdev, sector_nr, good_sectors,
 						&first_bad, &bad_sectors)) {
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 7ed933181712..a8664e29aada 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -747,7 +747,7 @@ static struct md_rdev *read_balance(struct r10conf *conf,
 
 	for (slot = 0; slot < conf->copies ; slot++) {
 		sector_t first_bad;
-		int bad_sectors;
+		sector_t bad_sectors;
 		sector_t dev_sector;
 		unsigned int pending;
 		bool nonrot;
@@ -1438,7 +1438,7 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 		if (rdev && test_bit(WriteErrorSeen, &rdev->flags)) {
 			sector_t first_bad;
 			sector_t dev_sector = r10_bio->devs[i].addr;
-			int bad_sectors;
+			sector_t bad_sectors;
 			int is_bad;
 
 			is_bad = is_badblock(rdev, dev_sector, max_sectors,
@@ -3413,7 +3413,7 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
 				sector_t from_addr, to_addr;
 				struct md_rdev *rdev = conf->mirrors[d].rdev;
 				sector_t sector, first_bad;
-				int bad_sectors;
+				sector_t bad_sectors;
 
 				if (!rdev ||
 				    !test_bit(In_sync, &rdev->flags))
 					continue;
@@ -3609,7 +3609,7 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
 		for (i = 0; i < conf->copies; i++) {
 			int d = r10_bio->devs[i].devnum;
 			sector_t first_bad, sector;
-			int bad_sectors;
+			sector_t bad_sectors;
 			struct md_rdev *rdev;
 
 			if (r10_bio->devs[i].repl_bio)
diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
index 5ca06e9a2d29..cc5c8f3f81e8 100644
--- a/drivers/nvdimm/nd.h
+++ b/drivers/nvdimm/nd.h
@@ -673,7 +673,7 @@ static inline bool is_bad_pmem(struct badblocks *bb, sector_t sector,
 {
 	if (bb->count) {
 		sector_t first_bad;
-		int num_bad;
+		sector_t num_bad;
 
 		return !!badblocks_check(bb, sector, len / 512, &first_bad,
 					 &num_bad);
diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
index cfdfe0eaa512..8f3e816e805d 100644
--- a/drivers/nvdimm/pfn_devs.c
+++ b/drivers/nvdimm/pfn_devs.c
@@ -367,9 +367,10 @@ static int nd_pfn_clear_memmap_errors(struct nd_pfn *nd_pfn)
 	struct nd_namespace_common *ndns = nd_pfn->ndns;
 	void *zero_page = page_address(ZERO_PAGE(0));
 	struct nd_pfn_sb *pfn_sb = nd_pfn->pfn_sb;
-	int num_bad, meta_num, rc, bb_present;
+	int meta_num, rc, bb_present;
 	sector_t first_bad, meta_start;
 	struct nd_namespace_io *nsio;
+	sector_t num_bad;
 
 	if (nd_pfn->mode != PFN_MODE_PMEM)
 		return 0;
@@ -394,7 +395,7 @@ static int nd_pfn_clear_memmap_errors(struct nd_pfn *nd_pfn)
 		bb_present = badblocks_check(&nd_region->bb, meta_start,
 				meta_num, &first_bad, &num_bad);
 		if (bb_present) {
-			dev_dbg(&nd_pfn->dev, "meta: %x badblocks at %llx\n",
+			dev_dbg(&nd_pfn->dev, "meta: %llx badblocks at %llx\n",
 					num_bad, first_bad);
 			nsoff = ALIGN_DOWN((nd_region->ndr_start
 					+ (first_bad << 9)) - nsio->res.start,
@@ -413,7 +414,7 @@ static int nd_pfn_clear_memmap_errors(struct nd_pfn *nd_pfn)
 		}
 		if (rc) {
 			dev_err(&nd_pfn->dev,
-				"error clearing %x badblocks at %llx\n",
+				"error clearing %llx badblocks at %llx\n",
 				num_bad, first_bad);
 			return rc;
 		}
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index d81faa9d89c9..43156e1576c9 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -249,7 +249,7 @@ __weak long __pmem_direct_access(struct pmem_device *pmem, pgoff_t pgoff,
 	unsigned int num = PFN_PHYS(nr_pages) >> SECTOR_SHIFT;
 	struct badblocks *bb = &pmem->bb;
 	sector_t first_bad;
-	int num_bad;
+	sector_t num_bad;
 
 	if (kaddr)
 		*kaddr = pmem->virt_addr + offset;
diff --git a/include/linux/badblocks.h b/include/linux/badblocks.h
index 8764bed9ff16..996493917f36 100644
--- a/include/linux/badblocks.h
+++ b/include/linux/badblocks.h
@@ -48,11 +48,11 @@ struct badblocks_context {
 	int ack;
 };
 
-int badblocks_check(struct badblocks *bb, sector_t s, int sectors,
-		    sector_t *first_bad, int *bad_sectors);
-bool badblocks_set(struct badblocks *bb, sector_t s, int sectors,
+int badblocks_check(struct badblocks *bb, sector_t s, sector_t sectors,
+		    sector_t *first_bad, sector_t *bad_sectors);
+bool badblocks_set(struct badblocks *bb, sector_t s, sector_t sectors,
 		   int acknowledged);
-bool badblocks_clear(struct badblocks *bb, sector_t s, int sectors);
+bool badblocks_clear(struct badblocks *bb, sector_t s, sector_t sectors);
 void ack_all_badblocks(struct badblocks *bb);
 ssize_t badblocks_show(struct badblocks *bb, char *page, int unack);
 ssize_t badblocks_store(struct badblocks *bb, const char *page, size_t len,