From patchwork Wed Dec 18 07:49:12 2024
X-Patchwork-Submitter: Shinichiro Kawasaki
X-Patchwork-Id: 13913142
From: Shin'ichiro Kawasaki
To: linux-block@vger.kernel.org, Jens Axboe
Cc: Damien Le Moal
Subject: [PATCH for-next 1/3] null_blk: do partial IO for bad blocks
Date: Wed, 18 Dec 2024 16:49:12 +0900
Message-ID: <20241218074914.814913-2-shinichiro.kawasaki@wdc.com>
In-Reply-To: <20241218074914.814913-1-shinichiro.kawasaki@wdc.com>
References: <20241218074914.814913-1-shinichiro.kawasaki@wdc.com>

The current null_blk implementation checks whether any bad blocks exist in the target blocks of each IO. If so, the IO fails and no data is transferred for any of the IO target blocks.
However, when real storage devices have bad blocks, the devices may transfer data partially, up to the first bad block. In particular, when the IO is a write operation, such a partial IO leaves partially written data on the device.

To simulate such partial IOs with null_blk, perform the data transfer from the IO start block up to the block just before the first bad block. Introduce __null_handle_rq() to support partial data transfer. Modify null_handle_badblocks() to calculate the size of the partial data transfer and call __null_handle_rq().

Signed-off-by: Shin'ichiro Kawasaki
---
 drivers/block/null_blk/main.c | 33 +++++++++++++++++++++++++++++++--
 1 file changed, 31 insertions(+), 2 deletions(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 7b674187c096..018a1a54dfa1 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1249,31 +1249,50 @@ static int null_transfer(struct nullb *nullb, struct page *page,
 	return err;
 }
 
-static blk_status_t null_handle_rq(struct nullb_cmd *cmd)
+/*
+ * Transfer data for the given request. The transfer size is capped with the
+ * max_bytes argument. If max_bytes is zero, transfer all of the requested data.
+ */
+static blk_status_t __null_handle_rq(struct nullb_cmd *cmd,
+				     unsigned int max_bytes)
 {
 	struct request *rq = blk_mq_rq_from_pdu(cmd);
 	struct nullb *nullb = cmd->nq->dev->nullb;
 	int err = 0;
 	unsigned int len;
 	sector_t sector = blk_rq_pos(rq);
+	unsigned int transferred_bytes = 0;
 	struct req_iterator iter;
 	struct bio_vec bvec;
 
+	if (!max_bytes)
+		max_bytes = blk_rq_bytes(rq);
+
 	spin_lock_irq(&nullb->lock);
 	rq_for_each_segment(bvec, rq, iter) {
 		len = bvec.bv_len;
+		if (transferred_bytes + len > max_bytes)
+			len = max_bytes - transferred_bytes;
 		err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
 				     op_is_write(req_op(rq)), sector,
 				     rq->cmd_flags & REQ_FUA);
 		if (err)
 			break;
 		sector += len >> SECTOR_SHIFT;
+		transferred_bytes += len;
+		if (transferred_bytes >= max_bytes)
+			break;
 	}
 	spin_unlock_irq(&nullb->lock);
 
 	return errno_to_blk_status(err);
 }
 
+static blk_status_t null_handle_rq(struct nullb_cmd *cmd)
+{
+	return __null_handle_rq(cmd, 0);
+}
+
 static inline blk_status_t null_handle_throttled(struct nullb_cmd *cmd)
 {
 	struct nullb_device *dev = cmd->nq->dev;
@@ -1300,11 +1319,21 @@ static inline blk_status_t null_handle_badblocks(struct nullb_cmd *cmd,
 						 sector_t nr_sectors)
 {
 	struct badblocks *bb = &cmd->nq->dev->badblocks;
+	struct nullb_device *dev = cmd->nq->dev;
+	unsigned int block_sectors = dev->blocksize >> SECTOR_SHIFT;
+	unsigned int transfer_bytes;
 	sector_t first_bad;
 	int bad_sectors;
 
-	if (badblocks_check(bb, sector, nr_sectors, &first_bad, &bad_sectors))
+	if (badblocks_check(bb, sector, nr_sectors, &first_bad, &bad_sectors)) {
+		if (!IS_ALIGNED(first_bad, block_sectors))
+			first_bad = ALIGN_DOWN(first_bad, block_sectors);
+		if (dev->memory_backed && sector < first_bad) {
+			transfer_bytes = (first_bad - sector) << SECTOR_SHIFT;
+			__null_handle_rq(cmd, transfer_bytes);
+		}
 		return BLK_STS_IOERR;
+	}
 
 	return BLK_STS_OK;
 }
From patchwork Wed Dec 18 07:49:13 2024
X-Patchwork-Submitter: Shinichiro Kawasaki
X-Patchwork-Id: 13913143
From: Shin'ichiro Kawasaki
To: linux-block@vger.kernel.org, Jens Axboe
Cc: Damien Le Moal
Subject: [PATCH for-next 2/3] null_blk: move write pointers for partial writes
Date: Wed, 18 Dec 2024 16:49:13 +0900
Message-ID: <20241218074914.814913-3-shinichiro.kawasaki@wdc.com>
In-Reply-To: <20241218074914.814913-1-shinichiro.kawasaki@wdc.com>
References: <20241218074914.814913-1-shinichiro.kawasaki@wdc.com>

The previous commit modified bad blocks handling to perform partial IOs. When such a partial IO happens on a zoned null_blk device, the write pointer is expected to move partially as well. To allow testing and debugging of partial writes with userland tools for zoned block devices, move the write pointers by the partially written amount.
Signed-off-by: Shin'ichiro Kawasaki
---
 drivers/block/null_blk/main.c     |  5 ++++-
 drivers/block/null_blk/null_blk.h |  6 ++++++
 drivers/block/null_blk/zoned.c    | 10 ++++++++++
 3 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 018a1a54dfa1..0f02e763cd9e 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1315,6 +1315,7 @@ static inline blk_status_t null_handle_throttled(struct nullb_cmd *cmd)
 }
 
 static inline blk_status_t null_handle_badblocks(struct nullb_cmd *cmd,
+						 enum req_op op,
 						 sector_t sector,
 						 sector_t nr_sectors)
 {
@@ -1332,6 +1333,8 @@ static inline blk_status_t null_handle_badblocks(struct nullb_cmd *cmd,
 			transfer_bytes = (first_bad - sector) << SECTOR_SHIFT;
 			__null_handle_rq(cmd, transfer_bytes);
 		}
+		if (dev->zoned && op == REQ_OP_WRITE)
+			null_move_zone_wp(dev, sector, first_bad - sector);
 		return BLK_STS_IOERR;
 	}
 
@@ -1398,7 +1401,7 @@ blk_status_t null_process_cmd(struct nullb_cmd *cmd, enum req_op op,
 	blk_status_t ret;
 
 	if (dev->badblocks.shift != -1) {
-		ret = null_handle_badblocks(cmd, sector, nr_sectors);
+		ret = null_handle_badblocks(cmd, op, sector, nr_sectors);
 		if (ret != BLK_STS_OK)
 			return ret;
 	}
diff --git a/drivers/block/null_blk/null_blk.h b/drivers/block/null_blk/null_blk.h
index 6f9fe6171087..c6ceede691ba 100644
--- a/drivers/block/null_blk/null_blk.h
+++ b/drivers/block/null_blk/null_blk.h
@@ -144,6 +144,8 @@ size_t null_zone_valid_read_len(struct nullb *nullb,
 				sector_t sector, unsigned int len);
 ssize_t zone_cond_store(struct nullb_device *dev, const char *page,
 			size_t count, enum blk_zone_cond cond);
+void null_move_zone_wp(struct nullb_device *dev, sector_t zone_sector,
+		       sector_t nr_sectors);
 #else
 static inline int null_init_zoned_dev(struct nullb_device *dev,
 				      struct queue_limits *lim)
@@ -173,6 +175,10 @@ static inline ssize_t zone_cond_store(struct nullb_device *dev,
 {
 	return -EOPNOTSUPP;
 }
+static inline void null_move_zone_wp(struct nullb_device *dev,
+				     sector_t zone_sector, sector_t nr_sectors)
+{
+}
 #define null_report_zones	NULL
 #endif /* CONFIG_BLK_DEV_ZONED */
 #endif /* __NULL_BLK_H */
diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
index 0d5f9bf95229..e2b8396aa318 100644
--- a/drivers/block/null_blk/zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -347,6 +347,16 @@ static blk_status_t null_check_zone_resources(struct nullb_device *dev,
 	}
 }
 
+void null_move_zone_wp(struct nullb_device *dev, sector_t zone_sector,
+		       sector_t nr_sectors)
+{
+	unsigned int zno = null_zone_no(dev, zone_sector);
+	struct nullb_zone *zone = &dev->zones[zno];
+
+	if (zone->type != BLK_ZONE_TYPE_CONVENTIONAL)
+		zone->wp += nr_sectors;
+}
+
 static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
 				    unsigned int nr_sectors, bool append)
 {
From patchwork Wed Dec 18 07:49:14 2024
X-Patchwork-Submitter: Shinichiro Kawasaki
X-Patchwork-Id: 13913144
From: Shin'ichiro Kawasaki
To: linux-block@vger.kernel.org, Jens Axboe
Cc: Damien Le Moal
Subject: [PATCH for-next 3/3] null_blk: introduce badblocks_once parameter
Date: Wed, 18 Dec 2024 16:49:14 +0900
Message-ID: <20241218074914.814913-4-shinichiro.kawasaki@wdc.com>
In-Reply-To: <20241218074914.814913-1-shinichiro.kawasaki@wdc.com>
References: <20241218074914.814913-1-shinichiro.kawasaki@wdc.com>

When IO errors happen on real storage devices, IOs repeated to the same target range can succeed thanks to device recovery features, such as reserved block assignment. To simulate such IO errors and recoveries, introduce the new badblocks_once parameter. When this parameter is set to 1, the specified bad blocks are cleared after the first IO error, so that the next IO to the same blocks succeeds.

While at it, split the long string constant in memb_group_features_show() into multiple lines to make future changes easier to understand.
Signed-off-by: Shin'ichiro Kawasaki
---
 drivers/block/null_blk/main.c     | 30 +++++++++++++++++++++---------
 drivers/block/null_blk/null_blk.h |  1 +
 2 files changed, 22 insertions(+), 9 deletions(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 0f02e763cd9e..f5dd25fd1bbf 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -473,6 +473,7 @@ NULLB_DEVICE_ATTR(shared_tags, bool, NULL);
 NULLB_DEVICE_ATTR(shared_tag_bitmap, bool, NULL);
 NULLB_DEVICE_ATTR(fua, bool, NULL);
 NULLB_DEVICE_ATTR(rotational, bool, NULL);
+NULLB_DEVICE_ATTR(badblocks_once, bool, NULL);
 
 static ssize_t nullb_device_power_show(struct config_item *item, char *page)
 {
@@ -611,6 +612,7 @@ static struct configfs_attribute *nullb_device_attrs[] = {
 	&nullb_device_attr_mbps,
 	&nullb_device_attr_cache_size,
 	&nullb_device_attr_badblocks,
+	&nullb_device_attr_badblocks_once,
 	&nullb_device_attr_zoned,
 	&nullb_device_attr_zone_size,
 	&nullb_device_attr_zone_capacity,
@@ -705,15 +707,23 @@ nullb_group_drop_item(struct config_group *group, struct config_item *item)
 static ssize_t memb_group_features_show(struct config_item *item, char *page)
 {
 	return snprintf(page, PAGE_SIZE,
-			"badblocks,blocking,blocksize,cache_size,fua,"
-			"completion_nsec,discard,home_node,hw_queue_depth,"
-			"irqmode,max_sectors,mbps,memory_backed,no_sched,"
-			"poll_queues,power,queue_mode,shared_tag_bitmap,"
-			"shared_tags,size,submit_queues,use_per_node_hctx,"
-			"virt_boundary,zoned,zone_capacity,zone_max_active,"
-			"zone_max_open,zone_nr_conv,zone_offline,zone_readonly,"
-			"zone_size,zone_append_max_sectors,zone_full,"
-			"rotational\n");
+			"badblocks,badblocks_once,blocking,blocksize,"
+			"cache_size,completion_nsec,"
+			"discard,"
+			"fua,"
+			"home_node,hw_queue_depth,"
+			"irqmode,"
+			"max_sectors,mbps,memory_backed,"
+			"no_sched,"
+			"poll_queues,power,"
+			"queue_mode,"
+			"rotational,"
+			"shared_tag_bitmap,shared_tags,size,submit_queues,"
+			"use_per_node_hctx,"
+			"virt_boundary,"
+			"zoned,zone_capacity,zone_max_active,zone_max_open,"
+			"zone_nr_conv,zone_offline,zone_readonly,zone_size,"
+			"zone_append_max_sectors,zone_full\n");
 }
 CONFIGFS_ATTR_RO(memb_group_, features);
 
@@ -1327,6 +1337,8 @@ static inline blk_status_t null_handle_badblocks(struct nullb_cmd *cmd,
 	int bad_sectors;
 
 	if (badblocks_check(bb, sector, nr_sectors, &first_bad, &bad_sectors)) {
+		if (cmd->nq->dev->badblocks_once)
+			badblocks_clear(bb, first_bad, bad_sectors);
 		if (!IS_ALIGNED(first_bad, block_sectors))
 			first_bad = ALIGN_DOWN(first_bad, block_sectors);
 		if (dev->memory_backed && sector < first_bad) {
diff --git a/drivers/block/null_blk/null_blk.h b/drivers/block/null_blk/null_blk.h
index c6ceede691ba..b9cd85542498 100644
--- a/drivers/block/null_blk/null_blk.h
+++ b/drivers/block/null_blk/null_blk.h
@@ -63,6 +63,7 @@ struct nullb_device {
 	unsigned long flags; /* device flags */
 	unsigned int curr_cache;
 	struct badblocks badblocks;
+	bool badblocks_once;
 
 	unsigned int nr_zones;
 	unsigned int nr_zones_imp_open;