From patchwork Thu Jun 30 09:14:01 2022
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 12901485
From: Chaitanya Kulkarni
Subject: [PATCH 1/6] block: add support for REQ_OP_VERIFY
Date: Thu, 30 Jun 2022 02:14:01 -0700
Message-ID: <20220630091406.19624-2-kch@nvidia.com>
In-Reply-To: <20220630091406.19624-1-kch@nvidia.com>
References: <20220630091406.19624-1-kch@nvidia.com>
X-Mailing-List: linux-raid@vger.kernel.org

Add a new block layer operation, REQ_OP_VERIFY, to offload verifying a
range of LBAs to the device. This lets file systems and fabrics kernel
components offload LBA verification when the hardware controller
supports it; the prominent examples are the SCSI and NVMe Verify
commands. When hardware offload is not available, an API is provided
that emulates the operation with reads. The emulation is still useful
when the block device is remotely attached, e.g. over NVMeOF.

Signed-off-by: Chaitanya Kulkarni
---
 Documentation/ABI/stable/sysfs-block |  12 +++
 block/blk-core.c                     |   5 +
 block/blk-lib.c                      | 155 +++++++++++++++++++++++++++
 block/blk-merge.c                    |  18 ++++
 block/blk-settings.c                 |  17 +++
 block/blk-sysfs.c                    |   8 ++
 block/blk.h                          |   4 +
 block/ioctl.c                        |  35 ++++++
 include/linux/bio.h                  |   9 +-
 include/linux/blk_types.h            |   2 +
 include/linux/blkdev.h               |  22 ++++
 include/uapi/linux/fs.h              |   1 +
 12 files changed, 285 insertions(+), 3 deletions(-)

diff --git a/Documentation/ABI/stable/sysfs-block b/Documentation/ABI/stable/sysfs-block
index e8797cd09aff..a71d9c41cf8b 100644
--- a/Documentation/ABI/stable/sysfs-block
+++ b/Documentation/ABI/stable/sysfs-block
@@ -657,6 +657,18 @@ Description:
 		in a single write zeroes command. If write_zeroes_max_bytes is
 		0, write zeroes is not supported by the device.
 
+What:		/sys/block/<disk>/queue/verify_max_bytes
+Date:		April 2022
+Contact:	Chaitanya Kulkarni
+Description:
+		Devices that support the verify operation, in which a single
+		request can be issued to verify a range of contiguous blocks
+		on the storage without any payload in the request. This can
+		be used to offload verifying LBAs on the device without
+		reading them on the host. verify_max_bytes indicates how many
+		bytes can be verified in a single verify command. If
+		verify_max_bytes is 0, the verify operation is not supported
+		by the device.
 
 What:		/sys/block/<disk>/queue/zone_append_max_bytes
 Date:		May 2020
diff --git a/block/blk-core.c b/block/blk-core.c
index 06ff5bbfe8f6..9ad52247dcdf 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -123,6 +123,7 @@ static const char *const blk_op_name[] = {
 	REQ_OP_NAME(ZONE_FINISH),
 	REQ_OP_NAME(ZONE_APPEND),
 	REQ_OP_NAME(WRITE_ZEROES),
+	REQ_OP_NAME(VERIFY),
 	REQ_OP_NAME(DRV_IN),
 	REQ_OP_NAME(DRV_OUT),
 };
@@ -842,6 +843,10 @@ void submit_bio_noacct(struct bio *bio)
 		if (!q->limits.max_write_zeroes_sectors)
 			goto not_supported;
 		break;
+	case REQ_OP_VERIFY:
+		if (!q->limits.max_verify_sectors)
+			goto not_supported;
+		break;
 	default:
 		break;
 	}
diff --git a/block/blk-lib.c b/block/blk-lib.c
index 09b7e1200c0f..4624d68bb3cb 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -340,3 +340,158 @@ int blkdev_issue_secure_erase(struct block_device *bdev, sector_t sector,
 	return ret;
 }
 EXPORT_SYMBOL(blkdev_issue_secure_erase);
+
+/**
+ * __blkdev_emulate_verify - emulate a number of verify operations
+ *			     asynchronously
+ * @bdev:	blockdev to issue
+ * @sector:	start sector
+ * @nr_sects:	number of sectors to verify
+ * @gfp_mask:	memory allocation flags (for bio_alloc)
+ * @biop:	pointer to anchor bio
+ * @buf:	data buffer to be mapped into the bios
+ *
+ * Description:
+ *  Verify a block range by emulating REQ_OP_VERIFY with REQ_OP_READ;
+ *  use this when H/W offloading of verify is not supported. The caller
+ *  is responsible for handling the anchored bio.
+ */
+static int __blkdev_emulate_verify(struct block_device *bdev, sector_t sector,
+		sector_t nr_sects, gfp_t gfp_mask, struct bio **biop, char *buf)
+{
+	struct bio *bio = *biop;
+	unsigned int sz;
+	int bi_size;
+
+	while (nr_sects != 0) {
+		bio = blk_next_bio(bio, bdev,
+				__blkdev_sectors_to_bio_pages(nr_sects),
+				REQ_OP_READ, gfp_mask);
+		bio->bi_iter.bi_sector = sector;
+
+		while (nr_sects != 0) {
+			bool is_vaddr = is_vmalloc_addr(buf);
+			struct page *p;
+
+			p = is_vaddr ? vmalloc_to_page(buf) : virt_to_page(buf);
+			sz = min((sector_t) PAGE_SIZE, nr_sects << 9);
+
+			bi_size = bio_add_page(bio, p, sz, offset_in_page(buf));
+			if (bi_size < sz)
+				return -EIO;
+
+			nr_sects -= bi_size >> 9;
+			sector += bi_size >> 9;
+			buf += bi_size;
+		}
+		cond_resched();
+	}
+
+	*biop = bio;
+	return 0;
+}
+
+/**
+ * __blkdev_issue_verify - generate a number of verify operations
+ * @bdev:	blockdev to issue
+ * @sector:	start sector
+ * @nr_sects:	number of sectors to verify
+ * @gfp_mask:	memory allocation flags (for bio_alloc())
+ * @biop:	pointer to anchor bio
+ *
+ * Description:
+ *  Verify a block range using hardware offload.
+ *
+ *  The function will emulate the verify operation if no explicit hardware
+ *  offloading for verifying is provided.
+ */
+int __blkdev_issue_verify(struct block_device *bdev, sector_t sector,
+		sector_t nr_sects, gfp_t gfp_mask, struct bio **biop)
+{
+	unsigned int max_verify_sectors = bdev_verify_sectors(bdev);
+	sector_t min_io_sect = (BIO_MAX_VECS << PAGE_SHIFT) >> 9;
+	struct bio *bio = *biop;
+	sector_t curr_sects;
+	char *buf;
+
+	if (!max_verify_sectors) {
+		int ret = 0;
+
+		buf = kzalloc(min_io_sect << 9, GFP_KERNEL);
+		if (!buf)
+			return -ENOMEM;
+
+		while (nr_sects > 0) {
+			curr_sects = min_t(sector_t, nr_sects, min_io_sect);
+			ret = __blkdev_emulate_verify(bdev, sector, curr_sects,
+					gfp_mask, &bio, buf);
+			if (ret)
+				break;
+
+			if (bio) {
+				ret = submit_bio_wait(bio);
+				bio_put(bio);
+				bio = NULL;
+			}
+
+			nr_sects -= curr_sects;
+			sector += curr_sects;
+		}
+		/* set the biop to NULL since we have already completed above */
+		*biop = NULL;
+		kfree(buf);
+		return ret;
+	}
+
+	while (nr_sects) {
+		bio = blk_next_bio(bio, bdev, 0, REQ_OP_VERIFY, gfp_mask);
+		bio->bi_iter.bi_sector = sector;
+
+		if (nr_sects > max_verify_sectors) {
+			bio->bi_iter.bi_size = max_verify_sectors << 9;
+			nr_sects -= max_verify_sectors;
+			sector += max_verify_sectors;
+		} else {
+			bio->bi_iter.bi_size = nr_sects << 9;
+			nr_sects = 0;
+		}
+		cond_resched();
+	}
+	*biop = bio;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(__blkdev_issue_verify);
+
+/**
+ * blkdev_issue_verify - verify a block range
+ * @bdev:	blockdev to verify
+ * @sector:	start sector
+ * @nr_sects:	number of sectors to verify
+ * @gfp_mask:	memory allocation flags (for bio_alloc)
+ *
+ * Description:
+ *  Verify a block range using hardware offload.
+ */
+int blkdev_issue_verify(struct block_device *bdev, sector_t sector,
+		sector_t nr_sects, gfp_t gfp_mask)
+{
+	sector_t bs_mask = (bdev_logical_block_size(bdev) >> 9) - 1;
+	struct bio *bio = NULL;
+	struct blk_plug plug;
+	int ret = 0;
+
+	if ((sector | nr_sects) & bs_mask)
+		return -EINVAL;
+
+	blk_start_plug(&plug);
+	ret = __blkdev_issue_verify(bdev, sector, nr_sects, gfp_mask, &bio);
+	if (ret == 0 && bio) {
+		ret = submit_bio_wait(bio);
+		bio_put(bio);
+	}
+	blk_finish_plug(&plug);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(blkdev_issue_verify);
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 7771dacc99cb..8ff305377b5a 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -153,6 +153,20 @@ static struct bio *blk_bio_write_zeroes_split(struct request_queue *q,
 	return bio_split(bio, q->limits.max_write_zeroes_sectors, GFP_NOIO, bs);
 }
 
+static struct bio *blk_bio_verify_split(struct request_queue *q,
+		struct bio *bio, struct bio_set *bs, unsigned *nsegs)
+{
+	*nsegs = 0;
+
+	if (!q->limits.max_verify_sectors)
+		return NULL;
+
+	if (bio_sectors(bio) <= q->limits.max_verify_sectors)
+		return NULL;
+
+	return bio_split(bio, q->limits.max_verify_sectors, GFP_NOIO, bs);
+}
+
 /*
  * Return the maximum number of sectors from the start of a bio that may be
  * submitted as a single request to a block device.
  * If enough sectors remain,
@@ -336,6 +350,10 @@ void __blk_queue_split(struct request_queue *q, struct bio **bio,
 		split = blk_bio_write_zeroes_split(q, *bio, &q->bio_split,
 				nr_segs);
 		break;
+	case REQ_OP_VERIFY:
+		split = blk_bio_verify_split(q, *bio, &q->bio_split,
+				nr_segs);
+		break;
 	default:
 		split = blk_bio_segment_split(q, *bio, &q->bio_split, nr_segs);
 		break;
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 6ccceb421ed2..c77697290bc5 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -43,6 +43,7 @@ void blk_set_default_limits(struct queue_limits *lim)
 	lim->max_dev_sectors = 0;
 	lim->chunk_sectors = 0;
 	lim->max_write_zeroes_sectors = 0;
+	lim->max_verify_sectors = 0;
 	lim->max_zone_append_sectors = 0;
 	lim->max_discard_sectors = 0;
 	lim->max_hw_discard_sectors = 0;
@@ -80,6 +81,7 @@ void blk_set_stacking_limits(struct queue_limits *lim)
 	lim->max_sectors = UINT_MAX;
 	lim->max_dev_sectors = UINT_MAX;
 	lim->max_write_zeroes_sectors = UINT_MAX;
+	lim->max_verify_sectors = UINT_MAX;
 	lim->max_zone_append_sectors = UINT_MAX;
 }
 EXPORT_SYMBOL(blk_set_stacking_limits);
@@ -202,6 +204,19 @@ void blk_queue_max_write_zeroes_sectors(struct request_queue *q,
 }
 EXPORT_SYMBOL(blk_queue_max_write_zeroes_sectors);
 
+/**
+ * blk_queue_max_verify_sectors - set max sectors for a single verify
+ *
+ * @q:  the request queue for the device
+ * @max_verify_sectors: maximum number of sectors to verify per command
+ **/
+void blk_queue_max_verify_sectors(struct request_queue *q,
+		unsigned int max_verify_sectors)
+{
+	q->limits.max_verify_sectors = max_verify_sectors;
+}
+EXPORT_SYMBOL(blk_queue_max_verify_sectors);
+
 /**
  * blk_queue_max_zone_append_sectors - set max sectors for a single zone append
  * @q:  the request queue for the device
@@ -554,6 +569,8 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 	t->max_dev_sectors = min_not_zero(t->max_dev_sectors, b->max_dev_sectors);
 	t->max_write_zeroes_sectors = min(t->max_write_zeroes_sectors,
 					b->max_write_zeroes_sectors);
+	t->max_verify_sectors = min(t->max_verify_sectors,
+					b->max_verify_sectors);
 	t->max_zone_append_sectors = min(t->max_zone_append_sectors,
 					b->max_zone_append_sectors);
 	t->bounce = max(t->bounce, b->bounce);
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 88bd41d4cb59..4fb6a731acad 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -113,6 +113,12 @@ queue_ra_store(struct request_queue *q, const char *page, size_t count)
 	return ret;
 }
 
+static ssize_t queue_verify_max_show(struct request_queue *q, char *page)
+{
+	return sprintf(page, "%llu\n",
+		(unsigned long long)q->limits.max_verify_sectors << 9);
+}
+
 static ssize_t queue_max_sectors_show(struct request_queue *q, char *page)
 {
 	int max_sectors_kb = queue_max_sectors(q) >> 1;
@@ -588,6 +594,7 @@ QUEUE_RO_ENTRY(queue_discard_zeroes_data, "discard_zeroes_data");
 
 QUEUE_RO_ENTRY(queue_write_same_max, "write_same_max_bytes");
 QUEUE_RO_ENTRY(queue_write_zeroes_max, "write_zeroes_max_bytes");
+QUEUE_RO_ENTRY(queue_verify_max, "verify_max_bytes");
 QUEUE_RO_ENTRY(queue_zone_append_max, "zone_append_max_bytes");
 QUEUE_RO_ENTRY(queue_zone_write_granularity, "zone_write_granularity");
 
@@ -644,6 +651,7 @@ static struct attribute *queue_attrs[] = {
 	&queue_discard_zeroes_data_entry.attr,
 	&queue_write_same_max_entry.attr,
 	&queue_write_zeroes_max_entry.attr,
+	&queue_verify_max_entry.attr,
 	&queue_zone_append_max_entry.attr,
 	&queue_zone_write_granularity_entry.attr,
 	&queue_nonrot_entry.attr,
diff --git a/block/blk.h b/block/blk.h
index 434017701403..63a0e3aca7e0 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -132,6 +132,9 @@ static inline bool rq_mergeable(struct request *rq)
 	if (req_op(rq) == REQ_OP_WRITE_ZEROES)
 		return false;
 
+	if (req_op(rq) == REQ_OP_VERIFY)
+		return false;
+
 	if (req_op(rq) == REQ_OP_ZONE_APPEND)
 		return false;
 
@@ -286,6 +289,7 @@ static inline bool blk_may_split(struct request_queue *q, struct bio *bio)
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
 	case REQ_OP_WRITE_ZEROES:
+	case REQ_OP_VERIFY:
 		return true; /* non-trivial splitting decisions */
 	default:
 		break;
diff --git a/block/ioctl.c b/block/ioctl.c
index 46949f1b0dba..60a48e24b82d 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -192,6 +192,39 @@ static int blk_ioctl_zeroout(struct block_device *bdev, fmode_t mode,
 	return err;
 }
 
+static int blk_ioctl_verify(struct block_device *bdev, fmode_t mode,
+		unsigned long arg)
+{
+	uint64_t range[2];
+	struct address_space *mapping;
+	uint64_t start, end, len;
+
+	if (!(mode & FMODE_READ))
+		return -EBADF;
+
+	if (copy_from_user(range, (void __user *)arg, sizeof(range)))
+		return -EFAULT;
+
+	start = range[0];
+	len = range[1];
+	end = start + len - 1;
+
+	if (start & 511)
+		return -EINVAL;
+	if (len & 511)
+		return -EINVAL;
+	if (end >= (uint64_t)i_size_read(bdev->bd_inode))
+		return -EINVAL;
+	if (end < start)
+		return -EINVAL;
+
+	/* Invalidate the page cache, including dirty pages */
+	mapping = bdev->bd_inode->i_mapping;
+	truncate_inode_pages_range(mapping, start, end);
+
+	return blkdev_issue_verify(bdev, start >> 9, len >> 9, GFP_KERNEL);
+}
+
 static int put_ushort(unsigned short __user *argp, unsigned short val)
 {
 	return put_user(val, argp);
@@ -483,6 +516,8 @@ static int blkdev_common_ioctl(struct block_device *bdev, fmode_t mode,
 		return blk_ioctl_secure_erase(bdev, mode, argp);
 	case BLKZEROOUT:
 		return blk_ioctl_zeroout(bdev, mode, arg);
+	case BLKVERIFY:
+		return blk_ioctl_verify(bdev, mode, arg);
 	case BLKGETDISKSEQ:
 		return put_u64(argp, bdev->bd_disk->diskseq);
 	case BLKREPORTZONE:
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 1cf3738ef1ea..3dfafe1da098 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -55,7 +55,8 @@ static inline bool bio_has_data(struct bio *bio)
 	    bio->bi_iter.bi_size &&
 	    bio_op(bio) != REQ_OP_DISCARD &&
 	    bio_op(bio) != REQ_OP_SECURE_ERASE &&
-	    bio_op(bio) != REQ_OP_WRITE_ZEROES)
+	    bio_op(bio) != REQ_OP_WRITE_ZEROES &&
+	    bio_op(bio) != REQ_OP_VERIFY)
 		return true;
 
 	return false;
@@ -65,7 +66,8 @@ static inline bool bio_no_advance_iter(const struct bio *bio)
 {
 	return bio_op(bio) == REQ_OP_DISCARD ||
 	       bio_op(bio) == REQ_OP_SECURE_ERASE ||
-	       bio_op(bio) == REQ_OP_WRITE_ZEROES;
+	       bio_op(bio) == REQ_OP_WRITE_ZEROES ||
+	       bio_op(bio) == REQ_OP_VERIFY;
 }
 
 static inline void *bio_data(struct bio *bio)
@@ -176,7 +178,7 @@ static inline unsigned bio_segments(struct bio *bio)
 	struct bvec_iter iter;
 
 	/*
-	 * We special case discard/write same/write zeroes, because they
+	 * We special case discard/write same/write zeroes/verify, because they
 	 * interpret bi_size differently:
 	 */
 
@@ -184,6 +186,7 @@ static inline unsigned bio_segments(struct bio *bio)
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
 	case REQ_OP_WRITE_ZEROES:
+	case REQ_OP_VERIFY:
 		return 0;
 	default:
 		break;
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index a24d4078fb21..0d5383fc84ed 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -363,6 +363,8 @@ enum req_opf {
 	REQ_OP_FLUSH		= 2,
 	/* discard sectors */
 	REQ_OP_DISCARD		= 3,
+	/* Verify the sectors */
+	REQ_OP_VERIFY		= 6,
 	/* securely erase sectors */
 	REQ_OP_SECURE_ERASE	= 5,
 	/* write the zero filled sector many times */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 608d577734c2..78fd6c5530d7 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -266,6 +266,7 @@ struct queue_limits {
 	unsigned int		max_hw_discard_sectors;
 	unsigned int		max_secure_erase_sectors;
 	unsigned int		max_write_zeroes_sectors;
+	unsigned int		max_verify_sectors;
 	unsigned int		max_zone_append_sectors;
 	unsigned int		discard_granularity;
 	unsigned int		discard_alignment;
@@ -925,6 +926,9 @@ static inline unsigned int blk_queue_get_max_sectors(struct request_queue *q,
 	if (unlikely(op == REQ_OP_WRITE_ZEROES))
 		return q->limits.max_write_zeroes_sectors;
 
+	if (unlikely(op == REQ_OP_VERIFY))
+		return q->limits.max_verify_sectors;
+
 	return q->limits.max_sectors;
 }
 
@@ -968,6 +972,8 @@ extern void blk_queue_max_discard_sectors(struct request_queue *q,
 		unsigned int max_discard_sectors);
 extern void blk_queue_max_write_zeroes_sectors(struct request_queue *q,
 		unsigned int max_write_same_sectors);
+extern void blk_queue_max_verify_sectors(struct request_queue *q,
+		unsigned int max_verify_sectors);
 extern void blk_queue_logical_block_size(struct request_queue *, unsigned int);
 extern void blk_queue_max_zone_append_sectors(struct request_queue *q,
 		unsigned int max_zone_append_sectors);
@@ -1119,6 +1125,12 @@ extern int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
 extern int blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
 		sector_t nr_sects, gfp_t gfp_mask, unsigned flags);
 
+extern int __blkdev_issue_verify(struct block_device *bdev,
+		sector_t sector, sector_t nr_sects, gfp_t gfp_mask,
+		struct bio **biop);
+extern int blkdev_issue_verify(struct block_device *bdev, sector_t sector,
+		sector_t nr_sects, gfp_t gfp_mask);
+
 static inline int sb_issue_discard(struct super_block *sb, sector_t block,
 		sector_t nr_blocks, gfp_t gfp_mask, unsigned long flags)
 {
@@ -1293,6 +1305,16 @@ static inline unsigned int bdev_write_zeroes_sectors(struct block_device *bdev)
 	return 0;
 }
 
+static inline unsigned int bdev_verify_sectors(struct block_device *bdev)
+{
+	struct request_queue *q = bdev_get_queue(bdev);
+
+	if (q)
+		return q->limits.max_verify_sectors;
+
+	return 0;
+}
+
 static inline bool bdev_nonrot(struct block_device *bdev)
 {
 	return blk_queue_nonrot(bdev_get_queue(bdev));
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index bdf7b404b3e7..ad0e5cb5cac4 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -185,6 +185,7 @@ struct fsxattr {
 #define BLKROTATIONAL _IO(0x12,126)
 #define BLKZEROOUT _IO(0x12,127)
 #define BLKGETDISKSEQ _IOR(0x12,128,__u64)
+#define BLKVERIFY _IO(0x12,129)
 /*
  * A jump here: 130-136 are reserved for zoned block devices
  * (see uapi/linux/blkzoned.h)
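
For illustration only (not part of the patch): a minimal user-space sketch
of how the new BLKVERIFY ioctl could be exercised. The device path is a
placeholder, and both the start offset and the length must be 512-byte
aligned, matching the checks in blk_ioctl_verify() above.

/* Hypothetical test program; BLKVERIFY only exists with this series applied. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

#ifndef BLKVERIFY
#define BLKVERIFY _IO(0x12, 129)	/* mirrors the uapi addition above */
#endif

int main(void)
{
	uint64_t range[2] = { 0, 1ULL << 20 };	/* verify the first 1 MiB */
	int fd = open("/dev/nvme0n1", O_RDONLY);	/* placeholder device */

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ioctl(fd, BLKVERIFY, range) < 0)
		perror("BLKVERIFY");
	close(fd);
	return 0;
}
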
From patchwork Thu Jun 30 09:14:02 2022
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 12901486
From: Chaitanya Kulkarni
Subject: [PATCH 2/6] nvme: add support for the Verify command
Date: Thu, 30 Jun 2022 02:14:02 -0700
Message-ID: <20220630091406.19624-3-kch@nvidia.com>
In-Reply-To: <20220630091406.19624-1-kch@nvidia.com>
References: <20220630091406.19624-1-kch@nvidia.com>
X-Mailing-List: linux-raid@vger.kernel.org

Allow verify operations
(REQ_OP_VERIFY) on the block device if the controller supports the
optional Verify command (ONCS bit). Add support for setting up the
Verify command, and set the maximum number of sectors allowed in one
verify command according to the maximum hardware sectors supported by
the controller.

Signed-off-by: Chaitanya Kulkarni
---
 drivers/nvme/host/core.c | 33 +++++++++++++++++++++++++++++++++
 include/linux/nvme.h     | 19 +++++++++++++++++++
 2 files changed, 52 insertions(+)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 24165daee3c8..ef27580886b1 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -838,6 +838,19 @@ static inline blk_status_t nvme_setup_write_zeroes(struct nvme_ns *ns,
 	return BLK_STS_OK;
 }
 
+static inline blk_status_t nvme_setup_verify(struct nvme_ns *ns,
+		struct request *req, struct nvme_command *cmnd)
+{
+	cmnd->verify.opcode = nvme_cmd_verify;
+	cmnd->verify.nsid = cpu_to_le32(ns->head->ns_id);
+	cmnd->verify.slba =
+		cpu_to_le64(nvme_sect_to_lba(ns, blk_rq_pos(req)));
+	cmnd->verify.length =
+		cpu_to_le16((blk_rq_bytes(req) >> ns->lba_shift) - 1);
+	cmnd->verify.control = 0;
+	return BLK_STS_OK;
+}
+
 static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
 		struct request *req, struct nvme_command *cmnd,
 		enum nvme_opcode op)
@@ -943,6 +956,9 @@ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req)
 	case REQ_OP_WRITE_ZEROES:
 		ret = nvme_setup_write_zeroes(ns, req, cmd);
 		break;
+	case REQ_OP_VERIFY:
+		ret = nvme_setup_verify(ns, req, cmd);
+		break;
 	case REQ_OP_DISCARD:
 		ret = nvme_setup_discard(ns, req, cmd);
 		break;
@@ -1672,6 +1688,22 @@ static void nvme_config_discard(struct gendisk *disk, struct nvme_ns *ns)
 		blk_queue_max_write_zeroes_sectors(queue, UINT_MAX);
 }
 
+static void nvme_config_verify(struct gendisk *disk, struct nvme_ns *ns)
+{
+	u64 max_blocks;
+
+	if (!(ns->ctrl->oncs & NVME_CTRL_ONCS_VERIFY))
+		return;
+
+	if (ns->ctrl->max_hw_sectors == UINT_MAX)
+		max_blocks = (u64)USHRT_MAX + 1;
+	else
+		max_blocks = ns->ctrl->max_hw_sectors + 1;
+
+	blk_queue_max_verify_sectors(disk->queue,
+			nvme_lba_to_sect(ns, max_blocks));
+}
+
 static bool nvme_ns_ids_equal(struct nvme_ns_ids *a, struct nvme_ns_ids *b)
 {
 	return uuid_equal(&a->uuid, &b->uuid) &&
@@ -1871,6 +1903,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
 	set_capacity_and_notify(disk, capacity);
 
 	nvme_config_discard(disk, ns);
+	nvme_config_verify(disk, ns);
 	blk_queue_max_write_zeroes_sectors(disk->queue,
 					   ns->ctrl->max_zeroes_sectors);
 }
diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index 29ec3e3481ff..578bb4931665 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -363,6 +363,7 @@ enum {
 	NVME_CTRL_ONCS_WRITE_ZEROES		= 1 << 3,
 	NVME_CTRL_ONCS_RESERVATIONS		= 1 << 5,
 	NVME_CTRL_ONCS_TIMESTAMP		= 1 << 6,
+	NVME_CTRL_ONCS_VERIFY			= 1 << 7,
 	NVME_CTRL_VWC_PRESENT			= 1 << 0,
 	NVME_CTRL_OACS_SEC_SUPP			= 1 << 0,
 	NVME_CTRL_OACS_NS_MNGT_SUPP		= 1 << 3,
@@ -1001,6 +1002,23 @@ struct nvme_write_zeroes_cmd {
 	__le16			appmask;
 };
 
+struct nvme_verify_cmd {
+	__u8			opcode;
+	__u8			flags;
+	__u16			command_id;
+	__le32			nsid;
+	__u64			rsvd2;
+	__le64			metadata;
+	union nvme_data_ptr	dptr;
+	__le64			slba;
+	__le16			length;
+	__le16			control;
+	__le32			rsvd3;
+	__le32			reftag;
+	__le16			eapptag;
+	__le16			eappmask;
+};
+
 enum nvme_zone_mgmt_action {
 	NVME_ZONE_CLOSE		= 0x1,
 	NVME_ZONE_FINISH	= 0x2,
@@ -1539,6 +1557,7 @@ struct nvme_command {
 		struct nvme_format_cmd format;
 		struct nvme_dsm_cmd dsm;
 		struct nvme_write_zeroes_cmd write_zeroes;
+		struct nvme_verify_cmd verify;
 		struct nvme_zone_mgmt_send_cmd zms;
 		struct nvme_zone_mgmt_recv_cmd zmr;
 		struct nvme_abort_cmd abort;
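
As an aside, a sketch (not part of the series) of how an in-kernel consumer
could sit on top of this: blkdev_issue_verify() from patch 1 issues
REQ_OP_VERIFY bios, which nvme_setup_cmd() above translates into NVMe Verify
commands when the controller advertises NVME_CTRL_ONCS_VERIFY; otherwise
max_verify_sectors stays 0 and the block layer falls back to the read-based
emulation. The function name and range below are illustrative only.

#include <linux/blkdev.h>

/* Illustrative helper: verify the first 4 KiB of a block device. */
static int example_verify_first_4k(struct block_device *bdev)
{
	sector_t start = 0;
	sector_t nr_sects = 8;	/* 8 x 512-byte sectors = 4 KiB */

	return blkdev_issue_verify(bdev, start, nr_sects, GFP_KERNEL);
}
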
From patchwork Thu Jun 30 09:14:03 2022
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 12901487
From: Chaitanya Kulkarni
Subject: [PATCH 3/6] nvmet: add Verify command support for bdev-ns
Date: Thu, 30 Jun 2022 02:14:03 -0700
Message-ID: <20220630091406.19624-4-kch@nvidia.com>
In-Reply-To: <20220630091406.19624-1-kch@nvidia.com>
References: <20220630091406.19624-1-kch@nvidia.com>
X-Mailing-List: linux-raid@vger.kernel.org

Add support for handling the Verify command on the NVMe target. Call
into __blkdev_issue_verify(), which the block layer turns into
REQ_OP_VERIFY requests for the given LBA range.

Signed-off-by: Chaitanya Kulkarni
---
 drivers/nvme/target/admin-cmd.c   |  3 ++-
 drivers/nvme/target/io-cmd-bdev.c | 39 +++++++++++++++++++++++++++++++
 2 files changed, 41 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 397daaf51f1b..495c3a31473a 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -431,7 +431,8 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
 	id->nn = cpu_to_le32(NVMET_MAX_NAMESPACES);
 	id->mnan = cpu_to_le32(NVMET_MAX_NAMESPACES);
 	id->oncs = cpu_to_le16(NVME_CTRL_ONCS_DSM |
-			NVME_CTRL_ONCS_WRITE_ZEROES);
+			NVME_CTRL_ONCS_WRITE_ZEROES |
+			NVME_CTRL_ONCS_VERIFY);
 
 	/* XXX: don't report vwc if the underlying device is write through */
 	id->vwc = NVME_CTRL_VWC_PRESENT;
diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 27a72504d31c..6687e2665e26 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -146,6 +146,7 @@ u16 blk_to_nvme_status(struct nvmet_req *req, blk_status_t blk_sts)
 		switch (req->cmd->common.opcode) {
 		case nvme_cmd_dsm:
 		case nvme_cmd_write_zeroes:
+		case nvme_cmd_verify:
 			status = NVME_SC_ONCS_NOT_SUPPORTED | NVME_SC_DNR;
 			break;
 		default:
@@ -171,6 +172,10 @@ u16 blk_to_nvme_status(struct nvmet_req *req, blk_status_t blk_sts)
 		req->error_slba =
 			le64_to_cpu(req->cmd->write_zeroes.slba);
 		break;
+	case nvme_cmd_verify:
+		req->error_slba =
+			le64_to_cpu(req->cmd->verify.slba);
+		break;
 	default:
 		req->error_slba = 0;
 	}
@@ -442,6 +447,37 @@ static void nvmet_bdev_execute_write_zeroes(struct nvmet_req *req)
 	}
 }
 
+static void nvmet_bdev_execute_verify(struct nvmet_req *req)
+{
+	struct nvme_verify_cmd *verify = &req->cmd->verify;
+	struct bio *bio = NULL;
+	sector_t nr_sector;
+	sector_t sector;
+	int ret;
+
+	if (!nvmet_check_transfer_len(req, 0))
+		return;
+
+	if (!bdev_verify_sectors(req->ns->bdev)) {
+		nvmet_req_complete(req, NVME_SC_INTERNAL | NVME_SC_DNR);
+		return;
+	}
+
+	sector = le64_to_cpu(verify->slba) << (req->ns->blksize_shift - 9);
+	nr_sector = (((sector_t)le16_to_cpu(verify->length) + 1) <<
+			(req->ns->blksize_shift - 9));
+
+	ret = __blkdev_issue_verify(req->ns->bdev, sector, nr_sector,
+			GFP_KERNEL, &bio);
+	if (bio) {
+		bio->bi_private = req;
+		bio->bi_end_io = nvmet_bio_done;
+		submit_bio(bio);
+	} else {
+		nvmet_req_complete(req, errno_to_nvme_status(req, ret));
+	}
+}
+
 u16 nvmet_bdev_parse_io_cmd(struct nvmet_req *req)
 {
 	switch (req->cmd->common.opcode) {
@@ -460,6 +496,9 @@ u16 nvmet_bdev_parse_io_cmd(struct nvmet_req *req)
 	case nvme_cmd_write_zeroes:
 		req->execute = nvmet_bdev_execute_write_zeroes;
 		return 0;
+	case nvme_cmd_verify:
+		req->execute = nvmet_bdev_execute_verify;
+		return 0;
 	default:
 		return nvmet_report_invalid_opcode(req);
 	}
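
A small illustrative helper (not in the patch) that mirrors the conversion
done in nvmet_bdev_execute_verify(): the NVMe Verify command carries a
0's-based block count and an SLBA in units of the namespace block size,
while the block layer works in 512-byte sectors.

#include <linux/types.h>

/* Illustrative only: mirrors the shifts used above. */
static inline void verify_range_to_sectors(u64 slba, u16 length,
					   u8 blksize_shift,
					   sector_t *sector,
					   sector_t *nr_sector)
{
	*sector = slba << (blksize_shift - 9);
	*nr_sector = ((sector_t)length + 1) << (blksize_shift - 9);
}

For a namespace formatted with 4096-byte blocks (blksize_shift == 12),
slba 16 and length 7 map to sector 128 and nr_sector 64, i.e. eight 4 KiB
blocks.
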
From patchwork Thu Jun 30 09:14:04 2022
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 12901488
From: Chaitanya Kulkarni
Subject: [PATCH 4/6] nvmet: add Verify emulation support for bdev-ns
Date: Thu, 30 Jun 2022 02:14:04 -0700
Message-ID: <20220630091406.19624-5-kch@nvidia.com>
In-Reply-To: <20220630091406.19624-1-kch@nvidia.com>
References: <20220630091406.19624-1-kch@nvidia.com>
X-Mailing-List: linux-raid@vger.kernel.org

Not all devices can support verify requests that map to a
controller-specific command. Add a way to emulate REQ_OP_VERIFY for
NVMeOF block-device namespaces, and add a new workqueue to offload the
emulation.
Signed-off-by: Chaitanya Kulkarni
---
 drivers/nvme/target/core.c        | 14 ++++--
 drivers/nvme/target/io-cmd-bdev.c | 56 +++++++++++++++++++++++++------
 drivers/nvme/target/nvmet.h       |  4 ++-
 3 files changed, 61 insertions(+), 13 deletions(-)

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 90e75324dae0..b701eeaf156a 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -16,6 +16,7 @@
 #include "nvmet.h"
 
 struct workqueue_struct *buffered_io_wq;
+struct workqueue_struct *verify_wq;
 struct workqueue_struct *zbd_wq;
 static const struct nvmet_fabrics_ops *nvmet_transports[NVMF_TRTYPE_MAX];
 static DEFINE_IDA(cntlid_ida);
@@ -1611,10 +1612,16 @@ static int __init nvmet_init(void)
 
 	nvmet_ana_group_enabled[NVMET_DEFAULT_ANA_GRPID] = 1;
 
-	zbd_wq = alloc_workqueue("nvmet-zbd-wq", WQ_MEM_RECLAIM, 0);
-	if (!zbd_wq)
+	verify_wq = alloc_workqueue("nvmet-verify-wq", WQ_MEM_RECLAIM, 0);
+	if (!verify_wq)
 		return -ENOMEM;
 
+	zbd_wq = alloc_workqueue("nvmet-zbd-wq", WQ_MEM_RECLAIM, 0);
+	if (!zbd_wq) {
+		error = -ENOMEM;
+		goto out_free_verify_work_queue;
+	}
+
 	buffered_io_wq = alloc_workqueue("nvmet-buffered-io-wq",
 			WQ_MEM_RECLAIM, 0);
 	if (!buffered_io_wq) {
@@ -1645,6 +1652,8 @@ static int __init nvmet_init(void)
 	destroy_workqueue(buffered_io_wq);
 out_free_zbd_work_queue:
 	destroy_workqueue(zbd_wq);
+out_free_verify_work_queue:
+	destroy_workqueue(verify_wq);
 	return error;
 }
 
@@ -1656,6 +1665,7 @@ static void __exit nvmet_exit(void)
 	destroy_workqueue(nvmet_wq);
 	destroy_workqueue(buffered_io_wq);
 	destroy_workqueue(zbd_wq);
+	destroy_workqueue(verify_wq);
 
 	BUILD_BUG_ON(sizeof(struct nvmf_disc_rsp_page_entry) != 1024);
 	BUILD_BUG_ON(sizeof(struct nvmf_disc_rsp_page_hdr) != 1024);
diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 6687e2665e26..721c8571a2da 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -447,35 +447,71 @@ static void nvmet_bdev_execute_write_zeroes(struct nvmet_req *req)
 	}
 }
 
-static void nvmet_bdev_execute_verify(struct nvmet_req *req)
+static void __nvmet_req_to_verify_sectors(struct nvmet_req *req,
+		sector_t *sects, sector_t *nr_sects)
 {
 	struct nvme_verify_cmd *verify = &req->cmd->verify;
+
+	*sects = le64_to_cpu(verify->slba) << (req->ns->blksize_shift - 9);
+	*nr_sects = (((sector_t)le16_to_cpu(verify->length) + 1) <<
+			(req->ns->blksize_shift - 9));
+}
+
+static void nvmet_bdev_emulate_verify_work(struct work_struct *w)
+{
+	struct nvmet_req *req = container_of(w, struct nvmet_req, b.work);
+	sector_t nr_sector;
+	sector_t sector;
+	int ret = 0;
+
+	__nvmet_req_to_verify_sectors(req, &sector, &nr_sector);
+	if (!nr_sector)
+		goto out;
+
+	/* blkdev_issue_verify() will automatically emulate */
+	ret = blkdev_issue_verify(req->ns->bdev, sector, nr_sector,
+			GFP_KERNEL);
+out:
+	nvmet_req_complete(req,
+			blk_to_nvme_status(req, errno_to_blk_status(ret)));
+}
+
+static void nvmet_bdev_submit_emulate_verify(struct nvmet_req *req)
+{
+	INIT_WORK(&req->b.work, nvmet_bdev_emulate_verify_work);
+	queue_work(verify_wq, &req->b.work);
+}
+
+static void nvmet_bdev_execute_verify(struct nvmet_req *req)
+{
 	struct bio *bio = NULL;
 	sector_t nr_sector;
 	sector_t sector;
-	int ret;
+	int ret = 0;
 
 	if (!nvmet_check_transfer_len(req, 0))
 		return;
 
+	__nvmet_req_to_verify_sectors(req, &sector, &nr_sector);
+	if (!nr_sector)
+		goto out;
+
+	/* offload emulation */
 	if (!bdev_verify_sectors(req->ns->bdev)) {
-		nvmet_req_complete(req, NVME_SC_INTERNAL | NVME_SC_DNR);
+		nvmet_bdev_submit_emulate_verify(req);
 		return;
 	}

-	sector = le64_to_cpu(verify->slba) << (req->ns->blksize_shift - 9);
-	nr_sector = (((sector_t)le16_to_cpu(verify->length) + 1) <<
-			(req->ns->blksize_shift - 9));
-
 	ret = __blkdev_issue_verify(req->ns->bdev, sector, nr_sector,
 			GFP_KERNEL, &bio);
-	if (bio) {
+	if (ret == 0 && bio) {
 		bio->bi_private = req;
 		bio->bi_end_io = nvmet_bio_done;
 		submit_bio(bio);
-	} else {
-		nvmet_req_complete(req, errno_to_nvme_status(req, ret));
+		return;
 	}
+out:
+	nvmet_req_complete(req, errno_to_nvme_status(req, ret));
 }

 u16 nvmet_bdev_parse_io_cmd(struct nvmet_req *req)
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 69818752a33a..96e3f6eb4fef 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -326,7 +326,8 @@ struct nvmet_req {
 	struct bio_vec inline_bvec[NVMET_MAX_INLINE_BIOVEC];
 	union {
 		struct {
-			struct bio		inline_bio;
+			struct bio		inline_bio;
+			struct work_struct	work;
 		} b;
 		struct {
 			bool			mpool_alloc;
@@ -365,6 +366,7 @@ struct nvmet_req {
 };

 extern struct workqueue_struct *buffered_io_wq;
+extern struct workqueue_struct *verify_wq;
 extern struct workqueue_struct *zbd_wq;
 extern struct workqueue_struct *nvmet_wq;

From patchwork Thu Jun 30 09:14:05 2022
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 12901489
From: Chaitanya Kulkarni
Subject: [PATCH 5/6] nvmet: add verify emulation support for file-ns
Date: Thu, 30 Jun 2022 02:14:05 -0700
Message-ID: <20220630091406.19624-6-kch@nvidia.com>
In-Reply-To: <20220630091406.19624-1-kch@nvidia.com>
References: <20220630091406.19624-1-kch@nvidia.com>
For now, there is no way to map the verify operation to a VFS-layer API.
This patch emulates the verify operation by offloading it to a workqueue
and reading the data using VFS-layer APIs, for both buffered-I/O and
direct-I/O modes.
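The emulation below amounts to reading the requested range in bounded chunks
and discarding the data, so that media errors surface as read errors or short
reads. A condensed, illustrative sketch of the buffered-I/O case, assuming the
usual kernel headers; the function and parameter names are not from the patch,
and note that kernel_read() advances the passed offset by itself:

	/* Illustration only: verify-by-reading in bounded chunks. */
	static int verify_by_reading(struct file *f, loff_t offset,
				     ssize_t len, size_t bufsz)
	{
		void *buf = kmalloc(bufsz, GFP_KERNEL);
		int ret = 0;

		if (!buf)
			return -ENOMEM;

		while (len > 0) {
			ssize_t chunk = min_t(ssize_t, len, bufsz);
			/* kernel_read() updates @offset on success */
			ssize_t rc = kernel_read(f, buf, chunk, &offset);

			if (rc != chunk) {
				ret = rc < 0 ? rc : -EIO;
				break;
			}
			len -= chunk;
			cond_resched();	/* the range may be large */
		}

		kfree(buf);
		return ret;
	}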
Signed-off-by: Chaitanya Kulkarni
---
 drivers/nvme/target/io-cmd-file.c | 152 ++++++++++++++++++++++++++++++
 1 file changed, 152 insertions(+)

diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c
index f3d58abf11e0..287187d641ba 100644
--- a/drivers/nvme/target/io-cmd-file.c
+++ b/drivers/nvme/target/io-cmd-file.c
@@ -13,6 +13,7 @@

 #define NVMET_MAX_MPOOL_BVEC		16
 #define NVMET_MIN_MPOOL_OBJ		16
+#define NVMET_VERIFY_BUF_LEN		(BIO_MAX_VECS << PAGE_SHIFT)

 void nvmet_file_ns_revalidate(struct nvmet_ns *ns)
 {
@@ -376,6 +377,154 @@ static void nvmet_file_execute_write_zeroes(struct nvmet_req *req)
 	queue_work(nvmet_wq, &req->f.work);
 }

+static void __nvmet_req_to_verify_offset(struct nvmet_req *req, loff_t *offset,
+		ssize_t *len)
+{
+	struct nvme_verify_cmd *verify = &req->cmd->verify;
+
+	*offset = le64_to_cpu(verify->slba) << req->ns->blksize_shift;
+	*len = (((sector_t)le16_to_cpu(verify->length) + 1) <<
+			req->ns->blksize_shift);
+}
+
+static int do_buffered_io_emulate_verify(struct file *f, loff_t offset,
+		ssize_t len)
+{
+	char *buf = NULL;
+	int ret = 0;
+	ssize_t rc;
+
+	buf = kmalloc(NVMET_VERIFY_BUF_LEN, GFP_KERNEL);
+	if (!buf)
+		return -ENOMEM;
+
+	while (len > 0) {
+		ssize_t curr_len = min_t(ssize_t, len, NVMET_VERIFY_BUF_LEN);
+
+		rc = kernel_read(f, buf, curr_len, &offset);
+		if (rc != curr_len) {
+			pr_err("kernel_read %lu curr_len %lu\n", rc, curr_len);
+			ret = -EINVAL;
+			break;
+		}
+
+		len -= curr_len;
+		offset += curr_len;
+		cond_resched();
+	}
+
+	kfree(buf);
+	return ret;
+}
+
+static int do_direct_io_emulate_verify(struct file *f, loff_t offset,
+		ssize_t len)
+{
+	struct scatterlist *sgl = NULL;
+	struct bio_vec *bvec = NULL;
+	struct iov_iter iter = { 0 };
+	struct kiocb iocb = { 0 };
+	unsigned int sgl_nents;
+	ssize_t ret = 0;
+	int i;
+
+	while (len > 0) {
+		ssize_t curr_len = min_t(ssize_t, len, NVMET_VERIFY_BUF_LEN);
+		struct scatterlist *sg = NULL;
+		unsigned int bv_len = 0;
+		ssize_t rc;
+
+		sgl = sgl_alloc(curr_len, GFP_KERNEL, &sgl_nents);
+		if (!sgl) {
+			ret = -ENOMEM;
+			break;
+		}
+
+		bvec = kmalloc_array(sgl_nents, sizeof(struct bio_vec),
+				GFP_KERNEL);
+		if (!bvec) {
+			ret = -ENOMEM;
+			break;
+		}
+
+		for_each_sg(sgl, sg, sgl_nents, i) {
+			nvmet_file_init_bvec(&bvec[i], sg);
+			bv_len += sg->length;
+		}
+
+		if (bv_len != curr_len) {
+			pr_err("length mismatch sgl & bvec\n");
+			ret = -EINVAL;
+			break;
+		}
+
+		iocb.ki_pos = offset;
+		iocb.ki_filp = f;
+		iocb.ki_complete = NULL; /* Sync I/O */
+		iocb.ki_flags |= IOCB_DIRECT;
+
+		iov_iter_bvec(&iter, READ, bvec, sgl_nents, bv_len);
+
+		rc = call_read_iter(f, &iocb, &iter);
+		if (rc != curr_len) {
+			pr_err("read len mismatch expected %lu got %ld\n",
+					curr_len, rc);
+			ret = -EINVAL;
+			break;
+		}
+
+		cond_resched();
+
+		len -= curr_len;
+		offset += curr_len;
+
+		kfree(bvec);
+		sgl_free(sgl);
+		bvec = NULL;
+		sgl = NULL;
+		memset(&iocb, 0, sizeof(iocb));
+		memset(&iter, 0, sizeof(iter));
+	}
+
+	kfree(bvec);
+	sgl_free(sgl);
+	return ret;
+}
+
+static void nvmet_file_emulate_verify_work(struct work_struct *w)
+{
+	struct nvmet_req *req = container_of(w, struct nvmet_req, f.work);
+	loff_t offset;
+	ssize_t len;
+	int ret = 0;
+
+	__nvmet_req_to_verify_offset(req, &offset, &len);
+	if (!len)
+		goto out;
+
+	if (unlikely(offset + len > req->ns->size)) {
+		nvmet_req_complete(req, errno_to_nvme_status(req, -ENOSPC));
+		return;
+	}
+
+	if (req->ns->buffered_io)
+		ret = do_buffered_io_emulate_verify(req->ns->file, offset, len);
+	else
+		ret = do_direct_io_emulate_verify(req->ns->file, offset,
+				len);
+out:
+	nvmet_req_complete(req, errno_to_nvme_status(req, ret));
+}
+
+static void nvmet_file_execute_verify(struct nvmet_req *req)
+{
+	if (!nvmet_check_data_len_lte(req, 0))
+		return;
+
+	INIT_WORK(&req->f.work, nvmet_file_emulate_verify_work);
+	queue_work(verify_wq, &req->f.work);
+}
+
 u16 nvmet_file_parse_io_cmd(struct nvmet_req *req)
 {
 	switch (req->cmd->common.opcode) {
@@ -392,6 +541,9 @@ u16 nvmet_file_parse_io_cmd(struct nvmet_req *req)
 	case nvme_cmd_write_zeroes:
 		req->execute = nvmet_file_execute_write_zeroes;
 		return 0;
+	case nvme_cmd_verify:
+		req->execute = nvmet_file_execute_verify;
+		return 0;
 	default:
 		return nvmet_report_invalid_opcode(req);
 	}

From patchwork Thu Jun 30 09:14:06 2022
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 12901490
From: Chaitanya Kulkarni
Subject: [PATCH 6/6] null_blk: add REQ_OP_VERIFY support
Date: Thu, 30 Jun 2022 02:14:06 -0700
Message-ID: <20220630091406.19624-7-kch@nvidia.com>
In-Reply-To: <20220630091406.19624-1-kch@nvidia.com>
References: <20220630091406.19624-1-kch@nvidia.com>
Add a new module parameter and configfs attribute to configure handling of
REQ_OP_VERIFY. This is needed for testing the newly added REQ_OP_VERIFY
block-layer operation.

Signed-off-by: Chaitanya Kulkarni
---
 drivers/block/null_blk/main.c     | 20 +++++++++++++++++++-
 drivers/block/null_blk/null_blk.h |  1 +
 2 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 6b67088f4ea7..559daac83b94 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -77,6 +77,10 @@ enum {
 	NULL_IRQ_TIMER	= 2,
 };

+static bool g_verify = true;
+module_param_named(verify, g_verify, bool, 0444);
+MODULE_PARM_DESC(verify, "Allow REQ_OP_VERIFY processing. Default: true");
+
 static bool g_virt_boundary = false;
 module_param_named(virt_boundary, g_virt_boundary, bool, 0444);
 MODULE_PARM_DESC(virt_boundary, "Require a virtual boundary for the device. Default: False");
@@ -400,6 +404,7 @@ NULLB_DEVICE_ATTR(blocking, bool, NULL);
 NULLB_DEVICE_ATTR(use_per_node_hctx, bool, NULL);
 NULLB_DEVICE_ATTR(memory_backed, bool, NULL);
 NULLB_DEVICE_ATTR(discard, bool, NULL);
+NULLB_DEVICE_ATTR(verify, bool, NULL);
 NULLB_DEVICE_ATTR(mbps, uint, NULL);
 NULLB_DEVICE_ATTR(cache_size, ulong, NULL);
 NULLB_DEVICE_ATTR(zoned, bool, NULL);
@@ -522,6 +527,7 @@ static struct configfs_attribute *nullb_device_attrs[] = {
 	&nullb_device_attr_power,
 	&nullb_device_attr_memory_backed,
 	&nullb_device_attr_discard,
+	&nullb_device_attr_verify,
 	&nullb_device_attr_mbps,
 	&nullb_device_attr_cache_size,
 	&nullb_device_attr_badblocks,
@@ -588,7 +594,7 @@ nullb_group_drop_item(struct config_group *group, struct config_item *item)
 static ssize_t memb_group_features_show(struct config_item *item, char *page)
 {
 	return snprintf(page, PAGE_SIZE,
-			"memory_backed,discard,bandwidth,cache,badblocks,zoned,zone_size,zone_capacity,zone_nr_conv,zone_max_open,zone_max_active,blocksize,max_sectors,virt_boundary\n");
+			"memory_backed,discard,verify,bandwidth,cache,badblocks,zoned,zone_size,zone_capacity,zone_nr_conv,zone_max_open,zone_max_active,blocksize,max_sectors,virt_boundary\n");
 }

 CONFIGFS_ATTR_RO(memb_group_, features);
@@ -651,6 +657,7 @@ static struct nullb_device *null_alloc_dev(void)
 	dev->hw_queue_depth = g_hw_queue_depth;
 	dev->blocking = g_blocking;
 	dev->use_per_node_hctx = g_use_per_node_hctx;
+	dev->verify = g_verify;
 	dev->zoned = g_zoned;
 	dev->zone_size = g_zone_size;
 	dev->zone_capacity = g_zone_capacity;
@@ -1394,6 +1401,10 @@ blk_status_t null_process_cmd(struct nullb_cmd *cmd,
 		return ret;
 	}

+	/* currently implemented as noop */
+	if (op == REQ_OP_VERIFY)
+		return 0;
+
 	if (dev->memory_backed)
 		return null_handle_memory_backed(cmd, op, sector, nr_sectors);

@@ -1769,6 +1780,12 @@ static void null_config_discard(struct nullb *nullb)
 	blk_queue_max_discard_sectors(nullb->q, UINT_MAX >> 9);
 }

+static void null_config_verify(struct nullb *nullb)
+{
+	blk_queue_max_verify_sectors(nullb->q,
+			nullb->dev->verify ? UINT_MAX >> 9 : 0);
+}
+
 static const struct block_device_operations null_bio_ops = {
 	.owner		= THIS_MODULE,
 	.submit_bio	= null_submit_bio,
@@ -2059,6 +2076,7 @@ static int null_add_dev(struct nullb_device *dev)
 		blk_queue_virt_boundary(nullb->q, PAGE_SIZE - 1);

 	null_config_discard(nullb);
+	null_config_verify(nullb);

 	if (config_item_name(&dev->item)) {
 		/* Use configfs dir name as the device name */
diff --git a/drivers/block/null_blk/null_blk.h b/drivers/block/null_blk/null_blk.h
index 8359b43842f2..2a1df1bc8165 100644
--- a/drivers/block/null_blk/null_blk.h
+++ b/drivers/block/null_blk/null_blk.h
@@ -111,6 +111,7 @@ struct nullb_device {
 	bool power; /* power on/off the device */
 	bool memory_backed; /* if data is stored in memory */
 	bool discard; /* if support discard */
+	bool verify; /* if support verify */
 	bool zoned; /* if device is zoned */
 	bool virt_boundary; /* virtual boundary on/off for the device */
 };
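For reference, null_config_verify() follows the opt-in pattern introduced
earlier in the series: a non-zero verify limit advertises native
REQ_OP_VERIFY handling, while a zero limit leaves callers such as
blkdev_issue_verify() to fall back to the read-based emulation. A minimal
illustrative sketch; the function name below is made up:

	/* Illustration only: how a driver opts in or out of native verify. */
	static void demo_config_verify(struct request_queue *q, bool native)
	{
		/* non-zero: driver handles REQ_OP_VERIFY; zero: emulate via reads */
		blk_queue_max_verify_sectors(q, native ? UINT_MAX >> 9 : 0);
	}

With the module parameter defaulting to true, a plain modprobe of null_blk
should expose devices that advertise verify in the features string, and the
new per-device configfs attribute presumably allows toggling it for testing.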