From patchwork Wed Dec 25 10:09:46 2024
X-Patchwork-Submitter: Shinichiro Kawasaki
X-Patchwork-Id: 13920715
From: Shin'ichiro Kawasaki
To: linux-block@vger.kernel.org, Jens Axboe
Cc: Damien Le Moal, Bart Van Assche
Subject: [PATCH for-next v2 1/4] null_blk: generate null_blk configfs features string
Date: Wed, 25 Dec 2024 19:09:46 +0900
Message-ID: <20241225100949.930897-2-shinichiro.kawasaki@wdc.com>
In-Reply-To: <20241225100949.930897-1-shinichiro.kawasaki@wdc.com>
References: <20241225100949.930897-1-shinichiro.kawasaki@wdc.com>

The null_blk configfs file 'features' provides a string that lists available
null_blk features for userspace programs to reference.
The string is defined as a long constant in the code, which tends to be
forgotten when features are updated. It also causes checkpatch.pl to report
"WARNING: quoted string split across lines". To avoid these drawbacks,
generate the feature string on the fly: refer to the ca_name field of each
element in the nullb_device_attrs table and concatenate them into the given
buffer. Also, sort the nullb_device_attrs table alphabetically so that the
order of the generated string is stable.

Of note is that the feature "index" was missing before this commit; this
commit adds it to the generated string.

Suggested-by: Bart Van Assche
Signed-off-by: Shin'ichiro Kawasaki
---
 drivers/block/null_blk/main.c | 93 ++++++++++++++++++++---------------
 1 file changed, 54 insertions(+), 39 deletions(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 178e62cd9a9f..f720707b7cfb 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -591,42 +591,46 @@ static ssize_t nullb_device_zone_offline_store(struct config_item *item,
 }
 CONFIGFS_ATTR_WO(nullb_device_, zone_offline);
 
+/*
+ * Place the elements in alphabetical order to keep the configfs
+ * 'features' string readable.
+ */
 static struct configfs_attribute *nullb_device_attrs[] = {
-	&nullb_device_attr_size,
-	&nullb_device_attr_completion_nsec,
-	&nullb_device_attr_submit_queues,
-	&nullb_device_attr_poll_queues,
-	&nullb_device_attr_home_node,
-	&nullb_device_attr_queue_mode,
+	&nullb_device_attr_badblocks,
+	&nullb_device_attr_blocking,
 	&nullb_device_attr_blocksize,
-	&nullb_device_attr_max_sectors,
-	&nullb_device_attr_irqmode,
+	&nullb_device_attr_cache_size,
+	&nullb_device_attr_completion_nsec,
+	&nullb_device_attr_discard,
+	&nullb_device_attr_fua,
+	&nullb_device_attr_home_node,
 	&nullb_device_attr_hw_queue_depth,
 	&nullb_device_attr_index,
-	&nullb_device_attr_blocking,
-	&nullb_device_attr_use_per_node_hctx,
-	&nullb_device_attr_power,
-	&nullb_device_attr_memory_backed,
-	&nullb_device_attr_discard,
+	&nullb_device_attr_irqmode,
+	&nullb_device_attr_max_sectors,
 	&nullb_device_attr_mbps,
-	&nullb_device_attr_cache_size,
-	&nullb_device_attr_badblocks,
-	&nullb_device_attr_zoned,
-	&nullb_device_attr_zone_size,
-	&nullb_device_attr_zone_capacity,
-	&nullb_device_attr_zone_nr_conv,
-	&nullb_device_attr_zone_max_open,
-	&nullb_device_attr_zone_max_active,
-	&nullb_device_attr_zone_append_max_sectors,
-	&nullb_device_attr_zone_readonly,
-	&nullb_device_attr_zone_offline,
-	&nullb_device_attr_zone_full,
-	&nullb_device_attr_virt_boundary,
+	&nullb_device_attr_memory_backed,
 	&nullb_device_attr_no_sched,
-	&nullb_device_attr_shared_tags,
-	&nullb_device_attr_shared_tag_bitmap,
-	&nullb_device_attr_fua,
+	&nullb_device_attr_poll_queues,
+	&nullb_device_attr_power,
+	&nullb_device_attr_queue_mode,
 	&nullb_device_attr_rotational,
+	&nullb_device_attr_shared_tag_bitmap,
+	&nullb_device_attr_shared_tags,
+	&nullb_device_attr_size,
+	&nullb_device_attr_submit_queues,
+	&nullb_device_attr_use_per_node_hctx,
+	&nullb_device_attr_virt_boundary,
+	&nullb_device_attr_zone_append_max_sectors,
+	&nullb_device_attr_zone_capacity,
+	&nullb_device_attr_zone_full,
+	&nullb_device_attr_zone_max_active,
+	&nullb_device_attr_zone_max_open,
+	&nullb_device_attr_zone_nr_conv,
+	&nullb_device_attr_zone_offline,
+	&nullb_device_attr_zone_readonly,
+	&nullb_device_attr_zone_size,
+	&nullb_device_attr_zoned,
 	NULL,
 };
 
@@ -704,16 +708,27 @@ nullb_group_drop_item(struct config_group *group, struct config_item *item)
 static ssize_t memb_group_features_show(struct config_item *item, char *page)
 {
-	return snprintf(page, PAGE_SIZE,
-			"badblocks,blocking,blocksize,cache_size,fua,"
-			"completion_nsec,discard,home_node,hw_queue_depth,"
-			"irqmode,max_sectors,mbps,memory_backed,no_sched,"
-			"poll_queues,power,queue_mode,shared_tag_bitmap,"
-			"shared_tags,size,submit_queues,use_per_node_hctx,"
-			"virt_boundary,zoned,zone_capacity,zone_max_active,"
-			"zone_max_open,zone_nr_conv,zone_offline,zone_readonly,"
-			"zone_size,zone_append_max_sectors,zone_full,"
-			"rotational\n");
+
+	struct configfs_attribute **entry;
+	const char *fmt = "%s,";
+	size_t left = PAGE_SIZE;
+	size_t written = 0;
+	int ret;
+
+	for (entry = &nullb_device_attrs[0]; *entry && left > 0; entry++) {
+		if (!*(entry + 1))
+			fmt = "%s\n";
+		ret = snprintf(page + written, left, fmt, (*entry)->ca_name);
+		if (ret >= left) {
+			WARN_ONCE(1, "Too many null_blk features to print\n");
+			memzero_explicit(page, PAGE_SIZE);
+			return 0;
+		}
+		left -= ret;
+		written += ret;
+	}
+
+	return written;
 }
 CONFIGFS_ATTR_RO(memb_group_, features);
From patchwork Wed Dec 25 10:09:47 2024
X-Patchwork-Submitter: Shinichiro Kawasaki
X-Patchwork-Id: 13920716
From: Shin'ichiro Kawasaki
To: linux-block@vger.kernel.org, Jens Axboe
Cc: Damien Le Moal, Bart Van Assche
Subject: [PATCH for-next v2 2/4] null_blk: do partial IO for bad blocks
Date: Wed, 25 Dec 2024 19:09:47 +0900
Message-ID: <20241225100949.930897-3-shinichiro.kawasaki@wdc.com>
In-Reply-To: <20241225100949.930897-1-shinichiro.kawasaki@wdc.com>
References: <20241225100949.930897-1-shinichiro.kawasaki@wdc.com>

The current null_blk implementation checks if any bad blocks exist in the
target blocks of each IO. If so, the IO fails and no data is transferred for
any of the IO target blocks. However, when real storage devices have bad
blocks, they may transfer data partially, up to the first bad block. In
particular, when the IO is a write operation, such a partial IO leaves
partially written data on the device.

To simulate such partial IOs using null_blk, perform the data transfer from
the IO start block up to the block just before the first bad block. Introduce
__null_handle_rq() to support partial data transfer. Modify
null_handle_badblocks() to calculate the size of the partial data transfer
and call __null_handle_rq().
Signed-off-by: Shin'ichiro Kawasaki
---
 drivers/block/null_blk/main.c | 33 +++++++++++++++++++++++++++++++--
 1 file changed, 31 insertions(+), 2 deletions(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index f720707b7cfb..d155eb040077 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1264,31 +1264,50 @@ static int null_transfer(struct nullb *nullb, struct page *page,
 	return err;
 }
 
-static blk_status_t null_handle_rq(struct nullb_cmd *cmd)
+/*
+ * Transfer data for the given request. The transfer size is capped with the
+ * max_bytes argument. If max_bytes is zero, transfer all of the requested data.
+ */
+static blk_status_t __null_handle_rq(struct nullb_cmd *cmd,
+				     unsigned int max_bytes)
 {
 	struct request *rq = blk_mq_rq_from_pdu(cmd);
 	struct nullb *nullb = cmd->nq->dev->nullb;
 	int err = 0;
 	unsigned int len;
 	sector_t sector = blk_rq_pos(rq);
+	unsigned int transferred_bytes = 0;
 	struct req_iterator iter;
 	struct bio_vec bvec;
 
+	if (!max_bytes)
+		max_bytes = blk_rq_bytes(rq);
+
 	spin_lock_irq(&nullb->lock);
 	rq_for_each_segment(bvec, rq, iter) {
 		len = bvec.bv_len;
+		if (transferred_bytes + len > max_bytes)
+			len = max_bytes - transferred_bytes;
 		err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
 				    op_is_write(req_op(rq)), sector,
 				    rq->cmd_flags & REQ_FUA);
 		if (err)
 			break;
 		sector += len >> SECTOR_SHIFT;
+		transferred_bytes += len;
+		if (transferred_bytes >= max_bytes)
+			break;
 	}
 	spin_unlock_irq(&nullb->lock);
 
 	return errno_to_blk_status(err);
 }
 
+static blk_status_t null_handle_rq(struct nullb_cmd *cmd)
+{
+	return __null_handle_rq(cmd, 0);
+}
+
 static inline blk_status_t null_handle_throttled(struct nullb_cmd *cmd)
 {
 	struct nullb_device *dev = cmd->nq->dev;
@@ -1315,11 +1334,21 @@ static inline blk_status_t null_handle_badblocks(struct nullb_cmd *cmd,
 					  sector_t nr_sectors)
 {
 	struct badblocks *bb = &cmd->nq->dev->badblocks;
+	struct nullb_device *dev = cmd->nq->dev;
+	unsigned int block_sectors = dev->blocksize >> SECTOR_SHIFT;
+	unsigned int transfer_bytes;
 	sector_t first_bad;
 	int bad_sectors;
 
-	if (badblocks_check(bb, sector, nr_sectors, &first_bad, &bad_sectors))
+	if (badblocks_check(bb, sector, nr_sectors, &first_bad, &bad_sectors)) {
+		if (!IS_ALIGNED(first_bad, block_sectors))
+			first_bad = ALIGN_DOWN(first_bad, block_sectors);
+		if (dev->memory_backed && sector < first_bad) {
+			transfer_bytes = (first_bad - sector) << SECTOR_SHIFT;
+			__null_handle_rq(cmd, transfer_bytes);
+		}
 		return BLK_STS_IOERR;
+	}
 
 	return BLK_STS_OK;
 }
From patchwork Wed Dec 25 10:09:48 2024
X-Patchwork-Submitter: Shinichiro Kawasaki
X-Patchwork-Id: 13920717
From: Shin'ichiro Kawasaki
To: linux-block@vger.kernel.org, Jens Axboe
Cc: Damien Le Moal, Bart Van Assche
Subject: [PATCH for-next v2 3/4] null_blk: move write pointers for partial writes
Date: Wed, 25 Dec 2024 19:09:48 +0900
Message-ID: <20241225100949.930897-4-shinichiro.kawasaki@wdc.com>
In-Reply-To: <20241225100949.930897-1-shinichiro.kawasaki@wdc.com>
References: <20241225100949.930897-1-shinichiro.kawasaki@wdc.com>

The previous commit modified bad block handling to do partial IOs. When such
partial IOs happen on zoned null_blk devices, the write pointers are expected
to move partially as well. To allow userland tools to test and debug partial
writes on zoned block devices, move the write pointers partially.

Signed-off-by: Shin'ichiro Kawasaki
---
 drivers/block/null_blk/main.c     |  5 ++++-
 drivers/block/null_blk/null_blk.h |  6 ++++++
 drivers/block/null_blk/zoned.c    | 10 ++++++++++
 3 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index d155eb040077..1675dec0b0e6 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1330,6 +1330,7 @@ static inline blk_status_t null_handle_throttled(struct nullb_cmd *cmd)
 }
 
 static inline blk_status_t null_handle_badblocks(struct nullb_cmd *cmd,
+					  enum req_op op,
 					  sector_t sector,
 					  sector_t nr_sectors)
 {
@@ -1347,6 +1348,8 @@ static inline blk_status_t null_handle_badblocks(struct nullb_cmd *cmd,
 			transfer_bytes = (first_bad - sector) << SECTOR_SHIFT;
 			__null_handle_rq(cmd, transfer_bytes);
 		}
+		if (dev->zoned && op == REQ_OP_WRITE)
+			null_move_zone_wp(dev, sector, first_bad - sector);
 		return BLK_STS_IOERR;
 	}
 
@@ -1413,7 +1416,7 @@ blk_status_t null_process_cmd(struct nullb_cmd *cmd, enum req_op op,
 	blk_status_t ret;
 
 	if (dev->badblocks.shift != -1) {
-		ret = null_handle_badblocks(cmd, sector, nr_sectors);
+		ret = null_handle_badblocks(cmd, op, sector, nr_sectors);
 		if (ret != BLK_STS_OK)
 			return ret;
 	}
diff --git a/drivers/block/null_blk/null_blk.h b/drivers/block/null_blk/null_blk.h
index 6f9fe6171087..c6ceede691ba 100644
--- a/drivers/block/null_blk/null_blk.h
+++ b/drivers/block/null_blk/null_blk.h
@@ -144,6 +144,8 @@ size_t null_zone_valid_read_len(struct nullb *nullb,
 				sector_t sector, unsigned int len);
 ssize_t zone_cond_store(struct nullb_device *dev, const char *page,
 			size_t count, enum blk_zone_cond cond);
+void null_move_zone_wp(struct nullb_device *dev, sector_t zone_sector,
+		       sector_t nr_sectors);
 #else
 static inline int null_init_zoned_dev(struct nullb_device *dev,
 				      struct queue_limits *lim)
@@ -173,6 +175,10 @@ static inline ssize_t zone_cond_store(struct nullb_device *dev,
 {
 	return -EOPNOTSUPP;
 }
+static inline void null_move_zone_wp(struct nullb_device *dev,
+				     sector_t zone_sector, sector_t nr_sectors)
+{
+}
 #define null_report_zones	NULL
 #endif /* CONFIG_BLK_DEV_ZONED */
 #endif /* __NULL_BLK_H */
diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
index 0d5f9bf95229..e2b8396aa318 100644
--- a/drivers/block/null_blk/zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -347,6 +347,16 @@ static blk_status_t null_check_zone_resources(struct nullb_device *dev,
 	}
 }
 
+void null_move_zone_wp(struct nullb_device *dev, sector_t zone_sector,
+		       sector_t nr_sectors)
+{
+	unsigned int zno = null_zone_no(dev, zone_sector);
+	struct nullb_zone *zone = &dev->zones[zno];
+
+	if (zone->type != BLK_ZONE_TYPE_CONVENTIONAL)
+		zone->wp += nr_sectors;
+}
+
 static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
 				    unsigned int nr_sectors, bool append)
 {
From patchwork Wed Dec 25 10:09:49 2024
X-Patchwork-Submitter: Shinichiro Kawasaki
X-Patchwork-Id: 13920718
From: Shin'ichiro Kawasaki
To: linux-block@vger.kernel.org, Jens Axboe
Cc: Damien Le Moal, Bart Van Assche
Subject: [PATCH for-next v2 4/4] null_blk: introduce badblocks_once parameter
Date: Wed, 25 Dec 2024 19:09:49 +0900
Message-ID: <20241225100949.930897-5-shinichiro.kawasaki@wdc.com>
In-Reply-To: <20241225100949.930897-1-shinichiro.kawasaki@wdc.com>
References: <20241225100949.930897-1-shinichiro.kawasaki@wdc.com>

When IO errors happen on real storage devices, IOs repeated to the same
target range can succeed thanks to recovery features of the devices, such as
reserved block assignment. To simulate such IO errors and recoveries,
introduce the new badblocks_once parameter. When this parameter is set to 1,
the specified badblocks are cleared after the first IO error, so that the
next IO to the same blocks succeeds.
Signed-off-by: Shin'ichiro Kawasaki
---
 drivers/block/null_blk/main.c     | 4 ++++
 drivers/block/null_blk/null_blk.h | 1 +
 2 files changed, 5 insertions(+)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 1675dec0b0e6..09d85b71b7f9 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -473,6 +473,7 @@ NULLB_DEVICE_ATTR(shared_tags, bool, NULL);
 NULLB_DEVICE_ATTR(shared_tag_bitmap, bool, NULL);
 NULLB_DEVICE_ATTR(fua, bool, NULL);
 NULLB_DEVICE_ATTR(rotational, bool, NULL);
+NULLB_DEVICE_ATTR(badblocks_once, bool, NULL);
 
 static ssize_t nullb_device_power_show(struct config_item *item, char *page)
 {
@@ -597,6 +598,7 @@ CONFIGFS_ATTR_WO(nullb_device_, zone_offline);
  */
 static struct configfs_attribute *nullb_device_attrs[] = {
 	&nullb_device_attr_badblocks,
+	&nullb_device_attr_badblocks_once,
 	&nullb_device_attr_blocking,
 	&nullb_device_attr_blocksize,
 	&nullb_device_attr_cache_size,
@@ -1342,6 +1344,8 @@ static inline blk_status_t null_handle_badblocks(struct nullb_cmd *cmd,
 	int bad_sectors;
 
 	if (badblocks_check(bb, sector, nr_sectors, &first_bad, &bad_sectors)) {
+		if (cmd->nq->dev->badblocks_once)
+			badblocks_clear(bb, first_bad, bad_sectors);
 		if (!IS_ALIGNED(first_bad, block_sectors))
 			first_bad = ALIGN_DOWN(first_bad, block_sectors);
 		if (dev->memory_backed && sector < first_bad) {
diff --git a/drivers/block/null_blk/null_blk.h b/drivers/block/null_blk/null_blk.h
index c6ceede691ba..b9cd85542498 100644
--- a/drivers/block/null_blk/null_blk.h
+++ b/drivers/block/null_blk/null_blk.h
@@ -63,6 +63,7 @@ struct nullb_device {
 	unsigned long flags; /* device flags */
 	unsigned int curr_cache;
 	struct badblocks badblocks;
+	bool badblocks_once;
 
 	unsigned int nr_zones;
 	unsigned int nr_zones_imp_open