From patchwork Thu Nov 4 06:46:27 2021
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 12602653
X-Patchwork-Delegate: snitzer@redhat.com
From: Chaitanya Kulkarni
Date: Wed, 3 Nov 2021 23:46:27 -0700
Message-ID: <20211104064634.4481-2-chaitanyak@nvidia.com>
In-Reply-To: <20211104064634.4481-1-chaitanyak@nvidia.com>
References: <20211104064634.4481-1-chaitanyak@nvidia.com>
Cc: snitzer@redhat.com, ebiggers@google.com, djwong@kernel.org, clm@fb.com,
    adilger.kernel@dilger.ca, osandov@fb.com, agk@redhat.com,
    javier@javigon.com, sagi@grimberg.me, dongli.zhang@oracle.com,
    willy@infradead.org, hch@lst.de, danil.kipnis@cloud.ionos.com,
    idryomov@gmail.com, jinpu.wang@cloud.ionos.com, Chaitanya Kulkarni,
    jejb@linux.ibm.com, josef@toxicpanda.com, ming.lei@redhat.com,
    dsterba@suse.com, viro@zeniv.linux.org.uk, jefflexu@linux.alibaba.com,
    bvanassche@acm.org, axboe@kernel.dk, tytso@mit.edu,
    martin.petersen@oracle.com, song@kernel.org, johannes.thumshirn@wdc.com,
    jlayton@kernel.org, kbusch@kernel.org, jack@suse.com
Subject: [dm-devel] [RFC PATCH 1/8] block: add support for REQ_OP_VERIFY
List-Id: device-mapper development

This adds a new block layer operation, REQ_OP_VERIFY, to offload
verification of a range of LBAs. It allows in-kernel users such as file
systems and fabrics drivers to offload LBA verification to the storage
controller when the hardware supports it; the prominent example of such
hardware support is the NVMe Verify command. When the hardware does not
support verify, we provide APIs that emulate the operation. The emulation
is still useful when the block device is remotely attached, e.g. over
NVMe-oF.
Signed-off-by: Chaitanya Kulkarni
---
 Documentation/ABI/testing/sysfs-block |  14 ++
 block/blk-core.c                      |   5 +
 block/blk-lib.c                       | 192 ++++++++++++++++++++++++++
 block/blk-merge.c                     |  19 +++
 block/blk-settings.c                  |  17 +++
 block/blk-sysfs.c                     |   8 ++
 block/blk-zoned.c                     |   1 +
 block/bounce.c                        |   1 +
 block/ioctl.c                         |  35 +++++
 include/linux/bio.h                   |  10 +-
 include/linux/blk_types.h             |   2 +
 include/linux/blkdev.h                |  31 +++++
 include/uapi/linux/fs.h               |   1 +
 13 files changed, 332 insertions(+), 4 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-block b/Documentation/ABI/testing/sysfs-block
index e34cdeeeb9d4..ba97f7a9cbec 100644
--- a/Documentation/ABI/testing/sysfs-block
+++ b/Documentation/ABI/testing/sysfs-block
@@ -252,6 +252,20 @@ Description:
 		write_zeroes_max_bytes is 0, write zeroes is not supported
 		by the device.

+What:		/sys/block/<disk>/queue/verify_max_bytes
+Date:		Nov 2021
+Contact:	Chaitanya Kulkarni
+Description:
+		Devices that support the verify operation can verify a
+		range of contiguous blocks on the storage with a single
+		request that carries no data payload. This can be used
+		to check LBAs on the device without reading them back,
+		by offloading the work to the device. verify_max_bytes
+		indicates how many bytes can be verified in a single
+		verify command. If verify_max_bytes is 0, the verify
+		operation is not supported by the device.
+
+
 What:		/sys/block/<disk>/queue/zoned
 Date:		September 2016
 Contact:	Damien Le Moal
diff --git a/block/blk-core.c b/block/blk-core.c
index 5e752840b41a..62160e729e7d 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -141,6 +141,7 @@ static const char *const blk_op_name[] = {
 	REQ_OP_NAME(ZONE_APPEND),
 	REQ_OP_NAME(WRITE_SAME),
 	REQ_OP_NAME(WRITE_ZEROES),
+	REQ_OP_NAME(VERIFY),
 	REQ_OP_NAME(SCSI_IN),
 	REQ_OP_NAME(SCSI_OUT),
 	REQ_OP_NAME(DRV_IN),
@@ -851,6 +852,10 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio)
 		if (!q->limits.max_write_same_sectors)
 			goto not_supported;
 		break;
+	case REQ_OP_VERIFY:
+		if (!q->limits.max_verify_sectors)
+			goto not_supported;
+		break;
 	case REQ_OP_ZONE_APPEND:
 		status = blk_check_zone_append(q, bio);
 		if (status != BLK_STS_OK)
diff --git a/block/blk-lib.c b/block/blk-lib.c
index 752f9c722062..fdbb765b369e 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -439,3 +439,195 @@ int blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
 	return ret;
 }
 EXPORT_SYMBOL(blkdev_issue_zeroout);
+
+/**
+ * __blkdev_emulate_verify - emulate number of verify operations
+ * 				asynchronously
+ * @bdev:	blockdev to issue
+ * @sector:	start sector
+ * @nr_sects:	number of sectors to verify
+ * @gfp_mask:	memory allocation flags (for bio_alloc)
+ * @biop:	pointer to anchor bio
+ * @buf:	data buffer to be mapped on bio
+ *
+ * Description:
+ *  Verify a block range by emulating REQ_OP_VERIFY; use this asynchronous
+ *  variant when H/W offloading is not supported. The caller is responsible
+ *  for handling the anchored bio.
+ */
+int __blkdev_emulate_verify(struct block_device *bdev, sector_t sector,
+		sector_t nr_sects, gfp_t gfp_mask, struct bio **biop, char *buf)
+{
+	struct request_queue *q = bdev_get_queue(bdev);
+	struct bio *bio = *biop;
+	unsigned int sz;
+	int bi_size;
+
+	if (!q)
+		return -ENXIO;
+
+	if (bdev_read_only(bdev))
+		return -EPERM;
+
+	while (nr_sects != 0) {
+		bio = blk_next_bio(bio, __blkdev_sectors_to_bio_pages(nr_sects),
+				   gfp_mask);
+		bio->bi_iter.bi_sector = sector;
+		bio_set_dev(bio, bdev);
+		bio_set_op_attrs(bio, REQ_OP_READ, 0);
+
+		while (nr_sects != 0) {
+			bool is_vaddr = is_vmalloc_addr(buf);
+			struct page *p;
+
+			p = is_vaddr ? vmalloc_to_page(buf) : virt_to_page(buf);
+			sz = min((sector_t) PAGE_SIZE, nr_sects << 9);
+			bi_size = bio_add_page(bio, p, sz, offset_in_page(buf));
+			nr_sects -= bi_size >> 9;
+			sector += bi_size >> 9;
+			buf += bi_size;
+
+			if (bi_size < sz)
+				break;
+		}
+		cond_resched();
+	}
+
+	*biop = bio;
+	return 0;
+}
+EXPORT_SYMBOL(__blkdev_emulate_verify);
+
+/**
+ * blkdev_emulate_verify - emulate number of verify operations synchronously
+ * @bdev:	blockdev to issue
+ * @sector:	start sector
+ * @nr_sects:	number of sectors to verify
+ * @gfp_mask:	memory allocation flags (for bio_alloc)
+ *
+ * Description:
+ *  Verify a block range by emulating REQ_OP_VERIFY; use this synchronous
+ *  variant when H/W offloading is not supported.
+ */
+int blkdev_emulate_verify(struct block_device *bdev, sector_t sector,
+		sector_t nr_sects, gfp_t gfp_mask)
+{
+	sector_t min_io_sect = (BIO_MAX_VECS << PAGE_SHIFT) >> 9;
+	int ret = 0;
+	char *buf;
+
+	/* allows pages in buffer to be == BIO_MAX_VECS */
+	buf = kzalloc(min_io_sect << 9, GFP_KERNEL);
+	if (!buf) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	while (nr_sects > 0) {
+		sector_t curr_sects = min_t(sector_t, nr_sects, min_io_sect);
+		struct bio *bio = NULL;
+
+		ret = __blkdev_emulate_verify(bdev, sector, curr_sects,
+					      GFP_KERNEL, &bio, buf);
+
+		if (!(ret == 0 && bio))
+			break;
+
+		ret = submit_bio_wait(bio);
+		bio_put(bio);
+
+		nr_sects -= curr_sects;
+		sector += curr_sects;
+	}
+out:
+	kfree(buf);
+	return ret;
+}
+EXPORT_SYMBOL(blkdev_emulate_verify);
+
+/**
+ * __blkdev_issue_verify - generate number of verify operations
+ * @bdev:	blockdev to issue
+ * @sector:	start sector
+ * @nr_sects:	number of sectors to verify
+ * @gfp_mask:	memory allocation flags (for bio_alloc)
+ * @biop:	pointer to anchor bio
+ *
+ * Description:
+ *  Verify a block range using hardware offload.
+ *
+ *  The function will emulate the verify operation if no explicit hardware
+ *  offload for verifying is provided.
+ */
+int __blkdev_issue_verify(struct block_device *bdev, sector_t sector,
+		sector_t nr_sects, gfp_t gfp_mask, struct bio **biop)
+{
+	struct request_queue *q = bdev_get_queue(bdev);
+	unsigned int max_verify_sectors;
+	struct bio *bio = *biop;
+
+	if (!q)
+		return -ENXIO;
+
+	if (bdev_read_only(bdev))
+		return -EPERM;
+
+	max_verify_sectors = bdev_verify_sectors(bdev);
+
+	if (max_verify_sectors == 0)
+		return blkdev_emulate_verify(bdev, sector, nr_sects, gfp_mask);
+
+	while (nr_sects) {
+		bio = blk_next_bio(bio, 0, gfp_mask);
+		bio->bi_iter.bi_sector = sector;
+		bio_set_dev(bio, bdev);
+		bio->bi_opf = REQ_OP_VERIFY;
+		if (nr_sects > max_verify_sectors) {
+			bio->bi_iter.bi_size = max_verify_sectors << 9;
+			nr_sects -= max_verify_sectors;
+			sector += max_verify_sectors;
+		} else {
+			bio->bi_iter.bi_size = nr_sects << 9;
+			nr_sects = 0;
+		}
+		cond_resched();
+	}
+
+	*biop = bio;
+	return 0;
+}
+EXPORT_SYMBOL(__blkdev_issue_verify);
+
+/**
+ * blkdev_issue_verify - verify a block range
+ * @bdev:	blockdev to verify
+ * @sector:	start sector
+ * @nr_sects:	number of sectors to verify
+ * @gfp_mask:	memory allocation flags (for bio_alloc)
+ *
+ * Description:
+ *  Verify a block range using hardware offload.
+ */
+int blkdev_issue_verify(struct block_device *bdev, sector_t sector,
+		sector_t nr_sects, gfp_t gfp_mask)
+{
+	int ret = 0;
+	sector_t bs_mask;
+	struct bio *bio = NULL;
+	struct blk_plug plug;
+
+	bs_mask = (bdev_logical_block_size(bdev) >> 9) - 1;
+	if ((sector | nr_sects) & bs_mask)
+		return -EINVAL;
+
+	blk_start_plug(&plug);
+	ret = __blkdev_issue_verify(bdev, sector, nr_sects, gfp_mask, &bio);
+	if (ret == 0 && bio) {
+		ret = submit_bio_wait(bio);
+		bio_put(bio);
+	}
+	blk_finish_plug(&plug);
+
+	return ret;
+}
+EXPORT_SYMBOL(blkdev_issue_verify);
diff --git a/block/blk-merge.c b/block/blk-merge.c
index ffb4aa0ea68b..c28632cb936b 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -117,6 +117,20 @@ static struct bio *blk_bio_write_zeroes_split(struct request_queue *q,
 	return bio_split(bio, q->limits.max_write_zeroes_sectors, GFP_NOIO, bs);
 }

+static struct bio *blk_bio_verify_split(struct request_queue *q,
+		struct bio *bio, struct bio_set *bs, unsigned *nsegs)
+{
+	*nsegs = 0;
+
+	if (!q->limits.max_verify_sectors)
+		return NULL;
+
+	if (bio_sectors(bio) <= q->limits.max_verify_sectors)
+		return NULL;
+
+	return bio_split(bio, q->limits.max_verify_sectors, GFP_NOIO, bs);
+}
+
 static struct bio *blk_bio_write_same_split(struct request_queue *q,
 					    struct bio *bio,
 					    struct bio_set *bs,
@@ -316,6 +330,10 @@ void __blk_queue_split(struct bio **bio, unsigned int *nr_segs)
 		split = blk_bio_write_zeroes_split(q, *bio, &q->bio_split,
 				nr_segs);
 		break;
+	case REQ_OP_VERIFY:
+		split = blk_bio_verify_split(q, *bio, &q->bio_split,
+				nr_segs);
+		break;
 	case REQ_OP_WRITE_SAME:
 		split = blk_bio_write_same_split(q, *bio, &q->bio_split,
 				nr_segs);
@@ -383,6 +401,7 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
 	case REQ_OP_WRITE_ZEROES:
+	case REQ_OP_VERIFY:
 		return 0;
 	case REQ_OP_WRITE_SAME:
 		return 1;
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 4c974340f1a9..f34cbd3678b6 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -48,6 +48,7 @@ void blk_set_default_limits(struct queue_limits *lim)
 	lim->chunk_sectors = 0;
 	lim->max_write_same_sectors = 0;
 	lim->max_write_zeroes_sectors = 0;
+	lim->max_verify_sectors = 0;
 	lim->max_zone_append_sectors = 0;
 	lim->max_discard_sectors = 0;
 	lim->max_hw_discard_sectors = 0;
@@ -84,6 +85,7 @@ void blk_set_stacking_limits(struct queue_limits *lim)
 	lim->max_dev_sectors = UINT_MAX;
 	lim->max_write_same_sectors = UINT_MAX;
 	lim->max_write_zeroes_sectors = UINT_MAX;
+	lim->max_verify_sectors = UINT_MAX;
 	lim->max_zone_append_sectors = UINT_MAX;
 }
 EXPORT_SYMBOL(blk_set_stacking_limits);
@@ -227,6 +229,19 @@ void blk_queue_max_write_zeroes_sectors(struct request_queue *q,
 }
 EXPORT_SYMBOL(blk_queue_max_write_zeroes_sectors);

+/**
+ * blk_queue_max_verify_sectors - set max sectors for a single verify
+ *
+ * @q:  the request queue for the device
+ * @max_verify_sectors: maximum number of sectors to verify per command
+ **/
+void blk_queue_max_verify_sectors(struct request_queue *q,
+		unsigned int max_verify_sectors)
+{
+	q->limits.max_verify_sectors = max_verify_sectors;
+}
+EXPORT_SYMBOL(blk_queue_max_verify_sectors);
+
 /**
  * blk_queue_max_zone_append_sectors - set max sectors for a single zone append
  * @q:  the request queue for the device
@@ -514,6 +529,8 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 					b->max_write_same_sectors);
 	t->max_write_zeroes_sectors = min(t->max_write_zeroes_sectors,
 					b->max_write_zeroes_sectors);
+	t->max_verify_sectors = min(t->max_verify_sectors,
+					b->max_verify_sectors);
 	t->max_zone_append_sectors = min(t->max_zone_append_sectors,
 					b->max_zone_append_sectors);
 	t->bounce_pfn = min_not_zero(t->bounce_pfn, b->bounce_pfn);
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index b513f1683af0..f918c83dd8d4 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -108,6 +108,12 @@ queue_ra_store(struct request_queue *q, const char *page, size_t count)
 	return ret;
 }
+static ssize_t queue_verify_max_show(struct request_queue *q, char *page)
+{
+	return sprintf(page, "%llu\n",
+		(unsigned long long)q->limits.max_verify_sectors << 9);
+}
+
 static ssize_t queue_max_sectors_show(struct request_queue *q, char *page)
 {
 	int max_sectors_kb = queue_max_sectors(q) >> 1;
@@ -584,6 +590,7 @@ QUEUE_RO_ENTRY(queue_discard_zeroes_data, "discard_zeroes_data");

 QUEUE_RO_ENTRY(queue_write_same_max, "write_same_max_bytes");
 QUEUE_RO_ENTRY(queue_write_zeroes_max, "write_zeroes_max_bytes");
+QUEUE_RO_ENTRY(queue_verify_max, "verify_max_bytes");
 QUEUE_RO_ENTRY(queue_zone_append_max, "zone_append_max_bytes");

 QUEUE_RO_ENTRY(queue_zoned, "zoned");
@@ -638,6 +645,7 @@ static struct attribute *queue_attrs[] = {
 	&queue_discard_zeroes_data_entry.attr,
 	&queue_write_same_max_entry.attr,
 	&queue_write_zeroes_max_entry.attr,
+	&queue_verify_max_entry.attr,
 	&queue_zone_append_max_entry.attr,
 	&queue_nonrot_entry.attr,
 	&queue_zoned_entry.attr,
diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 7a68b6e4300c..c9c51ee22a49 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -73,6 +73,7 @@ bool blk_req_needs_zone_write_lock(struct request *rq)

 	switch (req_op(rq)) {
 	case REQ_OP_WRITE_ZEROES:
+	case REQ_OP_VERIFY:
 	case REQ_OP_WRITE_SAME:
 	case REQ_OP_WRITE:
 		return blk_rq_zone_is_seq(rq);
diff --git a/block/bounce.c b/block/bounce.c
index fc55314aa426..86cdb900b88f 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -259,6 +259,7 @@ static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask,
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
 	case REQ_OP_WRITE_ZEROES:
+	case REQ_OP_VERIFY:
 		break;
 	case REQ_OP_WRITE_SAME:
 		bio->bi_io_vec[bio->bi_vcnt++] = bio_src->bi_io_vec[0];
diff --git a/block/ioctl.c b/block/ioctl.c
index d61d652078f4..5e1b3c4660bf 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -168,6 +168,39 @@ static int blk_ioctl_zeroout(struct block_device *bdev, fmode_t mode,
 			BLKDEV_ZERO_NOUNMAP);
 }

+static int blk_ioctl_verify(struct block_device *bdev, fmode_t mode,
+		unsigned long arg)
+{
+	uint64_t range[2];
+	struct address_space *mapping;
+	uint64_t start, end, len;
+
+	if (!(mode & FMODE_WRITE))
+		return -EBADF;
+
+	if (copy_from_user(range, (void __user *)arg, sizeof(range)))
+		return -EFAULT;
+
+	start = range[0];
+	len = range[1];
+	end = start + len - 1;
+
+	if (start & 511)
+		return -EINVAL;
+	if (len & 511)
+		return -EINVAL;
+	if (end >= (uint64_t)i_size_read(bdev->bd_inode))
+		return -EINVAL;
+	if (end < start)
+		return -EINVAL;
+
+	/* Invalidate the page cache, including dirty pages */
+	mapping = bdev->bd_inode->i_mapping;
+	truncate_inode_pages_range(mapping, start, end);
+
+	return blkdev_issue_verify(bdev, start >> 9, len >> 9, GFP_KERNEL);
+}
+
 static int put_ushort(unsigned short __user *argp, unsigned short val)
 {
 	return put_user(val, argp);
@@ -460,6 +493,8 @@ static int blkdev_common_ioctl(struct block_device *bdev, fmode_t mode,
 					BLKDEV_DISCARD_SECURE);
 	case BLKZEROOUT:
 		return blk_ioctl_zeroout(bdev, mode, arg);
+	case BLKVERIFY:
+		return blk_ioctl_verify(bdev, mode, arg);
 	case BLKREPORTZONE:
 		return blkdev_report_zones_ioctl(bdev, mode, cmd, arg);
 	case BLKRESETZONE:
diff --git a/include/linux/bio.h b/include/linux/bio.h
index c74857cf1252..d660c37b7d6c 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -63,7 +63,8 @@ static inline bool bio_has_data(struct bio *bio)
 	    bio->bi_iter.bi_size &&
 	    bio_op(bio) != REQ_OP_DISCARD &&
 	    bio_op(bio) != REQ_OP_SECURE_ERASE &&
-	    bio_op(bio) != REQ_OP_WRITE_ZEROES)
+	    bio_op(bio) != REQ_OP_WRITE_ZEROES &&
+	    bio_op(bio) != REQ_OP_VERIFY)
 		return true;

 	return false;
@@ -73,8 +74,8 @@ static inline bool bio_no_advance_iter(const struct bio *bio)
 {
 	return bio_op(bio) == REQ_OP_DISCARD ||
 	       bio_op(bio) == REQ_OP_SECURE_ERASE ||
-	       bio_op(bio) == REQ_OP_WRITE_SAME ||
-	       bio_op(bio) == REQ_OP_WRITE_ZEROES;
+	       bio_op(bio) == REQ_OP_WRITE_ZEROES ||
+	       bio_op(bio) == REQ_OP_VERIFY;
 }

 static inline bool bio_mergeable(struct bio *bio)
@@ -198,7 +199,7 @@ static inline unsigned bio_segments(struct bio *bio)
 	struct bvec_iter iter;

 	/*
-	 * We special case discard/write same/write zeroes, because they
+	 * We special case discard/write same/write zeroes/verify, because they
 	 * interpret bi_size differently:
 	 */
@@ -206,6 +207,7 @@ static inline unsigned bio_segments(struct bio *bio)
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
 	case REQ_OP_WRITE_ZEROES:
+	case REQ_OP_VERIFY:
 		return 0;
 	case REQ_OP_WRITE_SAME:
 		return 1;
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 1bc6f6a01070..8877711c4c56 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -366,6 +366,8 @@ enum req_opf {
 	REQ_OP_SECURE_ERASE	= 5,
 	/* write the same sector many times */
 	REQ_OP_WRITE_SAME	= 7,
+	/* verify the sectors */
+	REQ_OP_VERIFY		= 8,
 	/* write the zero filled sector many times */
 	REQ_OP_WRITE_ZEROES	= 9,
 	/* Open a zone */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 0dea268bd61b..99c41d90584b 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -334,6 +334,7 @@ struct queue_limits {
 	unsigned int		max_hw_discard_sectors;
 	unsigned int		max_write_same_sectors;
 	unsigned int		max_write_zeroes_sectors;
+	unsigned int		max_verify_sectors;
 	unsigned int		max_zone_append_sectors;
 	unsigned int		discard_granularity;
 	unsigned int		discard_alignment;
@@ -621,6 +622,7 @@ struct request_queue {
 #define QUEUE_FLAG_RQ_ALLOC_TIME 27	/* record rq->alloc_time_ns */
 #define QUEUE_FLAG_HCTX_ACTIVE	28	/* at least one blk-mq hctx is active */
 #define QUEUE_FLAG_NOWAIT	29	/* device supports NOWAIT */
+#define QUEUE_FLAG_VERIFY	30	/* supports Verify */

 #define QUEUE_FLAG_MQ_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
 				 (1 << QUEUE_FLAG_SAME_COMP) |		\
@@ -667,6 +669,7 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 #define blk_queue_fua(q)	test_bit(QUEUE_FLAG_FUA, &(q)->queue_flags)
 #define blk_queue_registered(q)	test_bit(QUEUE_FLAG_REGISTERED, &(q)->queue_flags)
 #define blk_queue_nowait(q)	test_bit(QUEUE_FLAG_NOWAIT, &(q)->queue_flags)
+#define blk_queue_verify(q)	test_bit(QUEUE_FLAG_VERIFY, &(q)->queue_flags)

 extern void blk_set_pm_only(struct request_queue *q);
 extern void blk_clear_pm_only(struct request_queue *q);
@@ -814,6 +817,9 @@ static inline bool rq_mergeable(struct request *rq)
 	if (req_op(rq) == REQ_OP_WRITE_ZEROES)
 		return false;

+	if (req_op(rq) == REQ_OP_VERIFY)
+		return false;
+
 	if (req_op(rq) == REQ_OP_ZONE_APPEND)
 		return false;

@@ -1072,6 +1078,9 @@ static inline unsigned int blk_queue_get_max_sectors(struct request_queue *q,
 	if (unlikely(op == REQ_OP_WRITE_ZEROES))
 		return q->limits.max_write_zeroes_sectors;

+	if (unlikely(op == REQ_OP_VERIFY))
+		return q->limits.max_verify_sectors;
+
 	return q->limits.max_sectors;
 }

@@ -1154,6 +1163,8 @@ extern void blk_queue_max_discard_sectors(struct request_queue *q,
 		unsigned int max_discard_sectors);
 extern void blk_queue_max_write_same_sectors(struct request_queue *q,
 		unsigned int max_write_same_sectors);
+extern void blk_queue_max_verify_sectors(struct request_queue *q,
+		unsigned int max_verify_sectors);
 extern void blk_queue_max_write_zeroes_sectors(struct request_queue *q,
 		unsigned int max_write_same_sectors);
 extern void blk_queue_logical_block_size(struct request_queue *, unsigned int);
@@ -1348,6 +1359,16 @@ extern int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
 		unsigned flags);
 extern int blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
 		sector_t nr_sects, gfp_t gfp_mask, unsigned flags);
+extern int __blkdev_emulate_verify(struct block_device *bdev, sector_t sector,
+		sector_t nr_sects, gfp_t gfp_mask, struct bio **biop,
+		char *buf);
+extern int blkdev_emulate_verify(struct block_device *bdev, sector_t sector,
+		sector_t nr_sects, gfp_t gfp_mask);
+extern int __blkdev_issue_verify(struct block_device *bdev,
+		sector_t sector, sector_t nr_sects, gfp_t gfp_mask,
+		struct bio **biop);
+extern int blkdev_issue_verify(struct block_device *bdev, sector_t sector,
+		sector_t nr_sects, gfp_t gfp_mask);

 static inline int sb_issue_discard(struct super_block *sb, sector_t block,
 		sector_t nr_blocks, gfp_t gfp_mask, unsigned long flags)
@@ -1553,6 +1574,16 @@ static inline unsigned int bdev_write_same(struct block_device *bdev)
 	return 0;
 }

+static inline unsigned int bdev_verify_sectors(struct block_device *bdev)
+{
+	struct request_queue *q = bdev_get_queue(bdev);
+
+	if (q)
+		return q->limits.max_verify_sectors;
+
+	return 0;
+}
+
 static inline unsigned int bdev_write_zeroes_sectors(struct block_device *bdev)
 {
 	struct request_queue *q = bdev_get_queue(bdev);
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index f44eb0a04afd..5eda16bd2c3d 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -184,6 +184,7 @@ struct fsxattr {
 #define BLKSECDISCARD	_IO(0x12,125)
 #define BLKROTATIONAL	_IO(0x12,126)
 #define BLKZEROOUT	_IO(0x12,127)
+#define BLKVERIFY	_IO(0x12,128)
 /*
  * A jump here: 130-131 are reserved for zoned block devices
  * (see uapi/linux/blkzoned.h)
From: Chaitanya Kulkarni
Date: Wed, 3 Nov 2021 23:46:28 -0700
Message-ID: <20211104064634.4481-3-chaitanyak@nvidia.com>
In-Reply-To: <20211104064634.4481-1-chaitanyak@nvidia.com>
Cc: snitzer@redhat.com, ebiggers@google.com, djwong@kernel.org, clm@fb.com, adilger.kernel@dilger.ca, osandov@fb.com, agk@redhat.com, javier@javigon.com, sagi@grimberg.me, dongli.zhang@oracle.com, willy@infradead.org, hch@lst.de, danil.kipnis@cloud.ionos.com, idryomov@gmail.com, jinpu.wang@cloud.ionos.com, Chaitanya Kulkarni, jejb@linux.ibm.com, josef@toxicpanda.com, ming.lei@redhat.com, dsterba@suse.com, viro@zeniv.linux.org.uk, jefflexu@linux.alibaba.com, bvanassche@acm.org, axboe@kernel.dk, tytso@mit.edu, martin.petersen@oracle.com, song@kernel.org, johannes.thumshirn@wdc.com, jlayton@kernel.org, kbusch@kernel.org, jack@suse.com
Subject: [dm-devel] [RFC PATCH 2/8] scsi: add REQ_OP_VERIFY support
From: Chaitanya Kulkarni

Signed-off-by: Chaitanya Kulkarni
---
 drivers/scsi/sd.c | 52 +++++++++++++++++++++++++++++++++++++++++++++++
 drivers/scsi/sd.h |  1 +
 2 files changed, 53 insertions(+)

diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index a3d2d4bc4a3d..7f2c4eb98cf8 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -106,6 +106,7 @@ MODULE_ALIAS_SCSI_DEVICE(TYPE_ZBC);
 static void sd_config_discard(struct scsi_disk *, unsigned int);
 static void sd_config_write_same(struct scsi_disk *);
+static void sd_config_verify(struct scsi_disk *sdkp);
 static int sd_revalidate_disk(struct gendisk *);
 static void sd_unlock_native_capacity(struct gendisk *disk);
 static int sd_probe(struct device *);
@@ -995,6 +996,41 @@ static blk_status_t sd_setup_write_zeroes_cmnd(struct scsi_cmnd *cmd)
 	return sd_setup_write_same10_cmnd(cmd, false);
 }
 
+static void sd_config_verify(struct scsi_disk *sdkp)
+{
+	struct request_queue *q = sdkp->disk->queue;
+
+	/* XXX: use same pattern as sd_config_write_same(). */
+	blk_queue_max_verify_sectors(q, UINT_MAX >> 9);
+}
+
+static blk_status_t sd_setup_verify_cmnd(struct scsi_cmnd *cmd)
+{
+	struct request *rq = cmd->request;
+	struct scsi_device *sdp = cmd->device;
+	struct scsi_disk *sdkp = scsi_disk(rq->rq_disk);
+	u64 lba = sectors_to_logical(sdp, blk_rq_pos(rq));
+	u32 nr_blocks = sectors_to_logical(sdp, blk_rq_sectors(rq));
+
+	if (!sdkp->verify_16)
+		return BLK_STS_NOTSUPP;
+
+	cmd->cmd_len = 16;
+	cmd->cmnd[0] = VERIFY_16;
+	/* skip vrprotect / dpo / bytchk */
+	cmd->cmnd[1] = 0;
+	put_unaligned_be64(lba, &cmd->cmnd[2]);
+	put_unaligned_be32(nr_blocks, &cmd->cmnd[10]);
+	cmd->cmnd[14] = 0;
+	cmd->cmnd[15] = 0;
+
+	cmd->allowed = SD_MAX_RETRIES;
+	cmd->sc_data_direction = DMA_NONE;
+	cmd->transfersize = 0;
+
+	return BLK_STS_OK;
+}
+
 static void sd_config_write_same(struct scsi_disk *sdkp)
 {
 	struct request_queue *q = sdkp->disk->queue;
@@ -1345,6 +1381,8 @@ static blk_status_t sd_init_command(struct scsi_cmnd *cmd)
 	}
 	case REQ_OP_WRITE_ZEROES:
 		return sd_setup_write_zeroes_cmnd(cmd);
+	case REQ_OP_VERIFY:
+		return sd_setup_verify_cmnd(cmd);
 	case REQ_OP_WRITE_SAME:
 		return sd_setup_write_same_cmnd(cmd);
 	case REQ_OP_FLUSH:
@@ -2029,6 +2067,7 @@ static int sd_done(struct scsi_cmnd *SCpnt)
 	switch (req_op(req)) {
 	case REQ_OP_DISCARD:
 	case REQ_OP_WRITE_ZEROES:
+	case REQ_OP_VERIFY:
 	case REQ_OP_WRITE_SAME:
 	case REQ_OP_ZONE_RESET:
 	case REQ_OP_ZONE_RESET_ALL:
@@ -3096,6 +3135,17 @@ static void sd_read_write_same(struct scsi_disk *sdkp, unsigned char *buffer)
 		sdkp->ws10 = 1;
 }
 
+static void sd_read_verify(struct scsi_disk *sdkp, unsigned char *buffer)
+{
+	struct scsi_device *sdev = sdkp->device;
+
+	sd_printk(KERN_INFO, sdkp, "VERIFY16 check.\n");
+	if (scsi_report_opcode(sdev, buffer, SD_BUF_SIZE, VERIFY_16) == 1) {
+		sd_printk(KERN_INFO, sdkp, "VERIFY16 is ON.\n");
+		sdkp->verify_16 = 1;
+	}
+}
+
 static void sd_read_security(struct scsi_disk *sdkp, unsigned char *buffer)
 {
 	struct scsi_device *sdev = sdkp->device;
@@ -3224,6 +3274,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
 		sd_read_cache_type(sdkp, buffer);
 		sd_read_app_tag_own(sdkp, buffer);
 		sd_read_write_same(sdkp, buffer);
+		sd_read_verify(sdkp, buffer);
 		sd_read_security(sdkp, buffer);
 	}
@@ -3265,6 +3316,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
 	set_capacity_and_notify(disk, logical_to_sectors(sdp, sdkp->capacity));
 	sd_config_write_same(sdkp);
+	sd_config_verify(sdkp);
 	kfree(buffer);
 
 	/*
diff --git a/drivers/scsi/sd.h b/drivers/scsi/sd.h
index b59136c4125b..94a86bf6dac4 100644
--- a/drivers/scsi/sd.h
+++ b/drivers/scsi/sd.h
@@ -120,6 +120,7 @@ struct scsi_disk {
 	unsigned	lbpvpd : 1;
 	unsigned	ws10 : 1;
 	unsigned	ws16 : 1;
+	unsigned	verify_16 : 1;
 	unsigned	rc_basis: 2;
 	unsigned	zoned: 2;
 	unsigned	urswrz : 1;

From patchwork Thu Nov 4 06:46:29 2021
From: Chaitanya Kulkarni
Date: Wed, 3 Nov 2021 23:46:29 -0700
Message-ID: <20211104064634.4481-4-chaitanyak@nvidia.com>
In-Reply-To: <20211104064634.4481-1-chaitanyak@nvidia.com>
Subject: [dm-devel] [RFC PATCH 3/8] nvme: add support for the Verify command

Allow verify operations (REQ_OP_VERIFY) on the block device if the device sets the Verify bit in the Optional NVM Command Support (ONCS) field.
Add support for setting up the Verify command. Set the maximum number of sectors allowed in a single Verify command based on the maximum hardware sectors supported by the controller.

Signed-off-by: Chaitanya Kulkarni
---
 drivers/nvme/host/core.c | 39 +++++++++++++++++++++++++++++++++++++++
 include/linux/nvme.h     | 19 +++++++++++++++++++
 2 files changed, 58 insertions(+)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 546a10407385..250647c3bb7b 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -801,6 +801,19 @@ static inline blk_status_t nvme_setup_write_zeroes(struct nvme_ns *ns,
 	return BLK_STS_OK;
 }
 
+static inline blk_status_t nvme_setup_verify(struct nvme_ns *ns,
+		struct request *req, struct nvme_command *cmnd)
+{
+	cmnd->verify.opcode = nvme_cmd_verify;
+	cmnd->verify.nsid = cpu_to_le32(ns->head->ns_id);
+	cmnd->verify.slba =
+		cpu_to_le64(nvme_sect_to_lba(ns, blk_rq_pos(req)));
+	cmnd->verify.length =
+		cpu_to_le16((blk_rq_bytes(req) >> ns->lba_shift) - 1);
+	cmnd->verify.control = 0;
+	return BLK_STS_OK;
+}
+
 static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
 		struct request *req, struct nvme_command *cmnd,
 		enum nvme_opcode op)
@@ -904,6 +917,9 @@ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req,
 	case REQ_OP_WRITE_ZEROES:
 		ret = nvme_setup_write_zeroes(ns, req, cmd);
 		break;
+	case REQ_OP_VERIFY:
+		ret = nvme_setup_verify(ns, req, cmd);
+		break;
 	case REQ_OP_DISCARD:
 		ret = nvme_setup_discard(ns, req, cmd);
 		break;
@@ -1974,6 +1990,28 @@ static void nvme_config_write_zeroes(struct gendisk *disk, struct nvme_ns *ns)
 				   nvme_lba_to_sect(ns, max_blocks));
 }
 
+static void nvme_config_verify(struct gendisk *disk, struct nvme_ns *ns)
+{
+	u64 max_blocks;
+
+	if (!(ns->ctrl->oncs & NVME_CTRL_ONCS_VERIFY))
+		return;
+
+	if (ns->ctrl->max_hw_sectors == UINT_MAX)
+		max_blocks = (u64)USHRT_MAX + 1;
+	else
+		max_blocks = ns->ctrl->max_hw_sectors + 1;
+
+	/* keep same as discard */
+	if (blk_queue_flag_test_and_set(QUEUE_FLAG_VERIFY, disk->queue))
+		return;
+
+	blk_queue_max_verify_sectors(disk->queue,
+				     nvme_lba_to_sect(ns, max_blocks));
+}
+
 static bool nvme_ns_ids_valid(struct nvme_ns_ids *ids)
 {
 	return !uuid_is_null(&ids->uuid) ||
@@ -2144,6 +2182,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
 
 	nvme_config_discard(disk, ns);
 	nvme_config_write_zeroes(disk, ns);
+	nvme_config_verify(disk, ns);
 
 	set_disk_ro(disk, (id->nsattr & NVME_NS_ATTR_RO) ||
 		test_bit(NVME_NS_FORCE_RO, &ns->flags));
diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index b08787cd0881..14925602726a 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -318,6 +318,7 @@ enum {
 	NVME_CTRL_ONCS_WRITE_UNCORRECTABLE	= 1 << 1,
 	NVME_CTRL_ONCS_DSM			= 1 << 2,
 	NVME_CTRL_ONCS_WRITE_ZEROES		= 1 << 3,
+	NVME_CTRL_ONCS_VERIFY			= 1 << 7,
 	NVME_CTRL_ONCS_RESERVATIONS		= 1 << 5,
 	NVME_CTRL_ONCS_TIMESTAMP		= 1 << 6,
 	NVME_CTRL_VWC_PRESENT			= 1 << 0,
@@ -890,6 +891,23 @@ struct nvme_write_zeroes_cmd {
 	__le16			appmask;
 };
 
+struct nvme_verify_cmd {
+	__u8			opcode;
+	__u8			flags;
+	__u16			command_id;
+	__le32			nsid;
+	__u64			rsvd2;
+	__le64			metadata;
+	union nvme_data_ptr	dptr;
+	__le64			slba;
+	__le16			length;
+	__le16			control;
+	__le32			rsvd3;
+	__le32			reftag;
+	__le16			eapptag;
+	__le16			eappmask;
+};
+
 enum nvme_zone_mgmt_action {
 	NVME_ZONE_CLOSE		= 0x1,
 	NVME_ZONE_FINISH	= 0x2,
@@ -1411,6 +1429,7 @@ struct nvme_command {
 		struct nvme_format_cmd format;
 		struct nvme_dsm_cmd dsm;
 		struct nvme_write_zeroes_cmd write_zeroes;
+		struct nvme_verify_cmd verify;
 		struct nvme_zone_mgmt_send_cmd zms;
 		struct nvme_zone_mgmt_recv_cmd zmr;
 		struct nvme_abort_cmd abort;

From patchwork Thu Nov 4 06:46:30 2021
From: Chaitanya Kulkarni
Date: Wed, 3 Nov 2021 23:46:30 -0700
Message-ID: <20211104064634.4481-5-chaitanyak@nvidia.com>
In-Reply-To: <20211104064634.4481-1-chaitanyak@nvidia.com>
Subject: [dm-devel] [PATCH 4/8] nvmet: add Verify command support for bdev-ns

Add support for handling the Verify command on the NVMeOF target. This calls into __blkdev_issue_verify(), which the block layer expands into REQ_OP_VERIFY bios covering the requested LBAs.
Signed-off-by: Chaitanya Kulkarni
---
 drivers/nvme/target/admin-cmd.c   |  3 ++-
 drivers/nvme/target/io-cmd-bdev.c | 39 +++++++++++++++++++++++++++++++
 2 files changed, 41 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 613a4d8feac1..87cad64895e6 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -408,7 +408,8 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
 	id->nn = cpu_to_le32(ctrl->subsys->max_nsid);
 	id->mnan = cpu_to_le32(NVMET_MAX_NAMESPACES);
 	id->oncs = cpu_to_le16(NVME_CTRL_ONCS_DSM |
-			NVME_CTRL_ONCS_WRITE_ZEROES);
+			NVME_CTRL_ONCS_WRITE_ZEROES |
+			NVME_CTRL_ONCS_VERIFY);
 
 	/* XXX: don't report vwc if the underlying device is write through */
 	id->vwc = NVME_CTRL_VWC_PRESENT;
diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index ec45e597084b..5a888cdadfea 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -128,6 +128,7 @@ static u16 blk_to_nvme_status(struct nvmet_req *req, blk_status_t blk_sts)
 		switch (req->cmd->common.opcode) {
 		case nvme_cmd_dsm:
 		case nvme_cmd_write_zeroes:
+		case nvme_cmd_verify:
 			status = NVME_SC_ONCS_NOT_SUPPORTED | NVME_SC_DNR;
 			break;
 		default:
@@ -153,6 +154,10 @@ static u16 blk_to_nvme_status(struct nvmet_req *req, blk_status_t blk_sts)
 		req->error_slba = le64_to_cpu(req->cmd->write_zeroes.slba);
 		break;
+	case nvme_cmd_verify:
+		req->error_slba = le64_to_cpu(req->cmd->verify.slba);
+		break;
 	default:
 		req->error_slba = 0;
 	}
@@ -428,6 +433,37 @@ static void nvmet_bdev_execute_write_zeroes(struct nvmet_req *req)
 	}
 }
 
+static void nvmet_bdev_execute_verify(struct nvmet_req *req)
+{
+	struct nvme_verify_cmd *verify = &req->cmd->verify;
+	struct bio *bio = NULL;
+	sector_t nr_sector;
+	sector_t sector;
+	int ret;
+
+	if (!nvmet_check_transfer_len(req, 0))
+		return;
+
+	if (!bdev_verify_sectors(req->ns->bdev)) {
+		nvmet_req_complete(req, NVME_SC_INTERNAL | NVME_SC_DNR);
+		return;
+	}
+
+	sector = le64_to_cpu(verify->slba) << (req->ns->blksize_shift - 9);
+	nr_sector = (((sector_t)le16_to_cpu(verify->length) + 1) <<
+		     (req->ns->blksize_shift - 9));
+
+	ret = __blkdev_issue_verify(req->ns->bdev, sector, nr_sector,
+				    GFP_KERNEL, &bio);
+	if (bio) {
+		bio->bi_private = req;
+		bio->bi_end_io = nvmet_bio_done;
+		submit_bio(bio);
+	} else {
+		nvmet_req_complete(req, errno_to_nvme_status(req, ret));
+	}
+}
+
 u16 nvmet_bdev_parse_io_cmd(struct nvmet_req *req)
 {
 	struct nvme_command *cmd = req->cmd;
@@ -448,6 +484,9 @@ u16 nvmet_bdev_parse_io_cmd(struct nvmet_req *req)
 	case nvme_cmd_write_zeroes:
 		req->execute = nvmet_bdev_execute_write_zeroes;
 		return 0;
+	case nvme_cmd_verify:
+		req->execute = nvmet_bdev_execute_verify;
+		return 0;
 	default:
 		pr_err("unhandled cmd %d on qid %d\n", cmd->common.opcode,
 		       req->sq->qid);

From patchwork Thu Nov 4 06:46:31 2021
(mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-523-9Ka1EKVYMtqI9P4G9oQEAg-1; Thu, 04 Nov 2021 03:23:01 -0400 X-MC-Unique: 9Ka1EKVYMtqI9P4G9oQEAg-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id D3AD7802CB6; Thu, 4 Nov 2021 07:22:56 +0000 (UTC) Received: from colo-mx.corp.redhat.com (colo-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.20]) by smtp.corp.redhat.com (Postfix) with ESMTPS id B61735DF5F; Thu, 4 Nov 2021 07:22:56 +0000 (UTC) Received: from lists01.pubmisc.prod.ext.phx2.redhat.com (lists01.pubmisc.prod.ext.phx2.redhat.com [10.5.19.33]) by colo-mx.corp.redhat.com (Postfix) with ESMTP id 926EF181A1D1; Thu, 4 Nov 2021 07:22:56 +0000 (UTC) Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com [10.11.54.2]) by lists01.pubmisc.prod.ext.phx2.redhat.com (8.13.8/8.13.8) with ESMTP id 1A46nAtf006338 for ; Thu, 4 Nov 2021 02:49:10 -0400 Received: by smtp.corp.redhat.com (Postfix) id B10BC40C1252; Thu, 4 Nov 2021 06:49:10 +0000 (UTC) Received: from mimecast-mx02.redhat.com (mimecast02.extmail.prod.ext.rdu2.redhat.com [10.11.55.18]) by smtp.corp.redhat.com (Postfix) with ESMTPS id ABAEF400F3C6 for ; Thu, 4 Nov 2021 06:49:10 +0000 (UTC) Received: from us-smtp-1.mimecast.com (us-smtp-2.mimecast.com [207.211.31.81]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 8D67D8011AF for ; Thu, 4 Nov 2021 06:49:10 +0000 (UTC) Received: from NAM10-BN7-obe.outbound.protection.outlook.com (mail-bn7nam10on2087.outbound.protection.outlook.com [40.107.92.87]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-431-RGZh_wCoPbWj703kLAutKQ-1; Thu, 04 Nov 2021 02:48:39 -0400 X-MC-Unique: RGZh_wCoPbWj703kLAutKQ-1 
Received: from BN6PR22CA0049.namprd22.prod.outlook.com (2603:10b6:404:ca::11) by CH2PR12MB4922.namprd12.prod.outlook.com (2603:10b6:610:65::22) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4649.14; Thu, 4 Nov 2021 06:48:36 +0000 Received: from BN8NAM11FT039.eop-nam11.prod.protection.outlook.com (2603:10b6:404:ca:cafe::ec) by BN6PR22CA0049.outlook.office365.com (2603:10b6:404:ca::11) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4649.15 via Frontend Transport; Thu, 4 Nov 2021 06:48:36 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 216.228.112.34) smtp.mailfrom=nvidia.com; infradead.org; dkim=none (message not signed) header.d=none; infradead.org; dmarc=pass action=none header.from=nvidia.com Received: from mail.nvidia.com (216.228.112.34) by BN8NAM11FT039.mail.protection.outlook.com (10.13.177.169) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.20.4669.10 via Frontend Transport; Thu, 4 Nov 2021 06:48:36 +0000 Received: from dev.nvidia.com (172.20.187.6) by HQMAIL107.nvidia.com (172.20.187.13) with Microsoft SMTP Server (TLS) id 15.0.1497.18; Thu, 4 Nov 2021 06:48:34 +0000 From: Chaitanya Kulkarni To: , , , , , , , Date: Wed, 3 Nov 2021 23:46:31 -0700 Message-ID: <20211104064634.4481-6-chaitanyak@nvidia.com> In-Reply-To: <20211104064634.4481-1-chaitanyak@nvidia.com> References: <20211104064634.4481-1-chaitanyak@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [172.20.187.6] X-ClientProxiedBy: HQMAIL111.nvidia.com (172.20.187.18) To HQMAIL107.nvidia.com (172.20.187.13) X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: d463c2a0-3cc9-4ef9-715f-08d99f5f205d X-MS-TrafficTypeDiagnostic: CH2PR12MB4922: X-Microsoft-Antispam-PRVS: X-MS-Oob-TLC-OOBClassifiers: OLM:6108 X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0 
X-Microsoft-Antispam-Message-Info: gLwEKNl/4NCZ2aXAcasSAQW/zJDJM2GuhK8L/BvmKkaQA4L+RDYpQZZrg3dCMopLZ9YgDUiqIntp8Jv+unb3obuOoQb0aFNTgEj/DRn40UHsElmL6Vgyasza905kvDkueExaOlyuKX6KKX5x1UD8oSHUHeXQd9N9+QVthtbE+6hdBtwDwXlJGjWo73HU6BD2GtIXXDPz/+BiHsdNGb5daYb7MdyvVNhF2vgVVCLiNY5dojm5ntd/cmijm5Xqh7J7zzuNIr1UjdwgSRw5vS9YiwTXO1CFg5l6fg2HD4KAnMqvNNFwC3Tx/f0II1YG9D2qpsOOnqkDMBVbktrunPlxK07bmHdQrm1EuySCQ/FcwSGY9cNlV4W255mJb+sNkVqGah6rozTO8ttomqDk388bNioAcb4ausqBFrgX8ffV8XV/hgq2Pzpa64VqeNwDDuGMORbB/lGPaJWsaD23L/NG3USaxpJH8Cwz3xBS6dds3ZQq5NUuJtgsfLSkJunSTy70KIrrFPH76GQiW7QocO+mjuFRCKVSWv+8rwvPkNHZoLgcnSqRjuIyBETVxbSIEnjbOgCm7W9ZCELS8LcGGomYbY9FWllovcTtvfXDyO0L5lhtkh+rsWwGAZB5FX+7L5yDr6/RH7KeOgFeS5IvzOMLpUHllH8z0c0V5uGQGD/IGtfJn8BMqTaojKC4Fa54KqbkqmDUI7xhiYXhg7lXGSroFKwm/q9fAFYUVAUnGIkEsJB5D+QPcsc8IrDhjCUC59yy X-Forefront-Antispam-Report: CIP:216.228.112.34; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:schybrid03.nvidia.com; CAT:NONE; SFS:(4636009)(46966006)(36840700001)(36860700001)(36756003)(70586007)(336012)(26005)(186003)(5660300002)(82310400003)(7696005)(107886003)(16526019)(2906002)(86362001)(15650500001)(4326008)(70206006)(2616005)(8936002)(316002)(36906005)(7636003)(1076003)(54906003)(110136005)(8676002)(356005)(7416002)(426003)(508600001)(7406005)(6666004)(83380400001)(47076005)(21314003)(2101003); DIR:OUT; SFP:1101 X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Nov 2021 06:48:36.1661 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: d463c2a0-3cc9-4ef9-715f-08d99f5f205d X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.112.34]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: BN8NAM11FT039.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem 
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR12MB4922 X-Mimecast-Impersonation-Protect: Policy=CLT - Impersonation Protection Definition; Similar Internal Domain=false; Similar Monitored External Domain=false; Custom External Domain=false; Mimecast External Domain=false; Newly Observed Domain=false; Internal User Name=false; Custom Display Name List=false; Reply-to Address Mismatch=false; Targeted Threat Dictionary=false; Mimecast Threat Dictionary=false; Custom Threat Dictionary=false X-Scanned-By: MIMEDefang 2.84 on 10.11.54.2 X-MIME-Autoconverted: from quoted-printable to 8bit by lists01.pubmisc.prod.ext.phx2.redhat.com id 1A46nAtf006338 X-loop: dm-devel@redhat.com X-Mailman-Approved-At: Thu, 04 Nov 2021 03:22:43 -0400 Cc: snitzer@redhat.com, ebiggers@google.com, djwong@kernel.org, clm@fb.com, adilger.kernel@dilger.ca, osandov@fb.com, agk@redhat.com, javier@javigon.com, sagi@grimberg.me, dongli.zhang@oracle.com, willy@infradead.org, hch@lst.de, danil.kipnis@cloud.ionos.com, idryomov@gmail.com, jinpu.wang@cloud.ionos.com, Chaitanya Kulkarni , jejb@linux.ibm.com, josef@toxicpanda.com, ming.lei@redhat.com, dsterba@suse.com, viro@zeniv.linux.org.uk, jefflexu@linux.alibaba.com, bvanassche@acm.org, axboe@kernel.dk, tytso@mit.edu, martin.petersen@oracle.com, song@kernel.org, johannes.thumshirn@wdc.com, jlayton@kernel.org, kbusch@kernel.org, jack@suse.com Subject: [dm-devel] [RFC PATCH 5/8] nvmet: add Verify emulation support for bdev-ns X-BeenThere: dm-devel@redhat.com X-Mailman-Version: 2.1.12 Precedence: junk List-Id: device-mapper development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: dm-devel-bounces@redhat.com Errors-To: dm-devel-bounces@redhat.com X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=dm-devel-bounces@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com From: Chaitanya Kulkarni Not all devices can 
support verify requests that map to a controller-specific command. This
patch adds a way to emulate REQ_OP_VERIFY for an NVMeOF block-device
namespace: a new workqueue offloads the emulation, which is carried out
with the help of __blkdev_emulate_verify().

Signed-off-by: Chaitanya Kulkarni
---
 drivers/nvme/target/core.c        | 12 +++++++-
 drivers/nvme/target/io-cmd-bdev.c | 51 ++++++++++++++++++++++++++-----
 drivers/nvme/target/nvmet.h       |  3 ++
 3 files changed, 57 insertions(+), 9 deletions(-)

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 8ce4d59cc9e7..8a17a6479073 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -16,6 +16,7 @@
 #include "nvmet.h"
 
 struct workqueue_struct *buffered_io_wq;
+struct workqueue_struct *verify_wq;
 static const struct nvmet_fabrics_ops *nvmet_transports[NVMF_TRTYPE_MAX];
 static DEFINE_IDA(cntlid_ida);
 
@@ -1546,11 +1547,17 @@ static int __init nvmet_init(void)
 
 	nvmet_ana_group_enabled[NVMET_DEFAULT_ANA_GRPID] = 1;
 
+	verify_wq = alloc_workqueue("nvmet-verify-wq", WQ_MEM_RECLAIM, 0);
+	if (!verify_wq) {
+		error = -ENOMEM;
+		goto out;
+	}
+
 	buffered_io_wq = alloc_workqueue("nvmet-buffered-io-wq",
 			WQ_MEM_RECLAIM, 0);
 	if (!buffered_io_wq) {
 		error = -ENOMEM;
-		goto out;
+		goto out_free_verify_work_queue;
 	}
 
 	error = nvmet_init_discovery();
@@ -1566,6 +1573,8 @@ static int __init nvmet_init(void)
 	nvmet_exit_discovery();
 out_free_work_queue:
 	destroy_workqueue(buffered_io_wq);
+out_free_verify_work_queue:
+	destroy_workqueue(verify_wq);
 out:
 	return error;
 }
@@ -1576,6 +1585,7 @@ static void __exit nvmet_exit(void)
 	nvmet_exit_discovery();
 	ida_destroy(&cntlid_ida);
 	destroy_workqueue(buffered_io_wq);
+	destroy_workqueue(verify_wq);
 
 	BUILD_BUG_ON(sizeof(struct nvmf_disc_rsp_page_entry) != 1024);
 	BUILD_BUG_ON(sizeof(struct nvmf_disc_rsp_page_hdr) != 1024);
diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 5a888cdadfea..80b8e7bfd1ae 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -433,25 +433,60 @@ static void nvmet_bdev_execute_write_zeroes(struct nvmet_req *req)
 	}
 }
 
-static void nvmet_bdev_execute_verify(struct nvmet_req *req)
+static void __nvmet_req_to_verify_sectors(struct nvmet_req *req,
+		sector_t *sects, sector_t *nr_sects)
 {
 	struct nvme_verify_cmd *verify = &req->cmd->verify;
+
+	*sects = le64_to_cpu(verify->slba) << (req->ns->blksize_shift - 9);
+	*nr_sects = (((sector_t)le16_to_cpu(verify->length) + 1) <<
+			(req->ns->blksize_shift - 9));
+}
+
+static void nvmet_bdev_emulate_verify_work(struct work_struct *w)
+{
+	struct nvmet_req *req = container_of(w, struct nvmet_req, b.work);
+	sector_t nr_sector;
+	sector_t sector;
+	int ret = 0;
+
+	__nvmet_req_to_verify_sectors(req, &sector, &nr_sector);
+	if (!nr_sector)
+		goto out;
+
+	ret = blkdev_emulate_verify(req->ns->bdev, sector, nr_sector,
+			GFP_KERNEL);
+out:
+	nvmet_req_complete(req,
+			blk_to_nvme_status(req, errno_to_blk_status(ret)));
+}
+
+static void nvmet_bdev_submit_emulate_verify(struct nvmet_req *req)
+{
+	INIT_WORK(&req->b.work, nvmet_bdev_emulate_verify_work);
+	queue_work(verify_wq, &req->b.work);
+}
+
+static void nvmet_bdev_execute_verify(struct nvmet_req *req)
+{
 	struct bio *bio = NULL;
 	sector_t nr_sector;
 	sector_t sector;
-	int ret;
+	int ret = 0;
 
 	if (!nvmet_check_transfer_len(req, 0))
 		return;
 
+	/* offload emulation */
 	if (!bdev_verify_sectors(req->ns->bdev)) {
-		nvmet_req_complete(req, NVME_SC_INTERNAL | NVME_SC_DNR);
+		nvmet_bdev_submit_emulate_verify(req);
 		return;
 	}
 
-	sector = le64_to_cpu(verify->slba) << (req->ns->blksize_shift - 9);
-	nr_sector = (((sector_t)le16_to_cpu(verify->length) + 1) <<
-			(req->ns->blksize_shift - 9));
+	__nvmet_req_to_verify_sectors(req, &sector, &nr_sector);
+	if (!nr_sector)
+		goto out;
 
 	ret = __blkdev_issue_verify(req->ns->bdev, sector, nr_sector,
 			GFP_KERNEL, &bio);
@@ -459,9 +494,9 @@ static void nvmet_bdev_execute_verify(struct nvmet_req *req)
 		bio->bi_private = req;
 		bio->bi_end_io = nvmet_bio_done;
 		submit_bio(bio);
-	} else {
-		nvmet_req_complete(req, errno_to_nvme_status(req, ret));
 	}
+out:
+	nvmet_req_complete(req, errno_to_nvme_status(req, ret));
 }
 
 u16 nvmet_bdev_parse_io_cmd(struct nvmet_req *req)
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 8776dd1a0490..7f3f584b1e7b 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -323,6 +323,8 @@ struct nvmet_req {
 	union {
 		struct {
 			struct bio inline_bio;
+			/* XXX: should we take work out of union ? */
+			struct work_struct work;
 		} b;
 		struct {
 			bool mpool_alloc;
@@ -355,6 +357,7 @@ struct nvmet_req {
 };
 
 extern struct workqueue_struct *buffered_io_wq;
+extern struct workqueue_struct *verify_wq;
 
 static inline void nvmet_set_result(struct nvmet_req *req, u32 result)
 {

From patchwork Thu Nov 4 06:46:32 2021
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 12602663
X-Patchwork-Delegate: snitzer@redhat.com
From: Chaitanya Kulkarni
Date: Wed, 3 Nov 2021 23:46:32 -0700
Message-ID: <20211104064634.4481-7-chaitanyak@nvidia.com>
In-Reply-To: <20211104064634.4481-1-chaitanyak@nvidia.com>
References: <20211104064634.4481-1-chaitanyak@nvidia.com>
Subject: [dm-devel] [RFC PATCH 6/8] nvmet: add verify emulation support for file-ns

For now, there is no way to map the Verify operation to a VFS-layer API.
This patch emulates the Verify operation by offloading it to a workqueue
and reading the data back through VFS-layer APIs, in both buffered-I/O and
direct-I/O modes.

Signed-off-by: Chaitanya Kulkarni
---
 drivers/nvme/target/io-cmd-file.c | 151 ++++++++++++++++++++++++++++++
 1 file changed, 151 insertions(+)

diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c
index 0abbefd9925e..2b0291c4164c 100644
--- a/drivers/nvme/target/io-cmd-file.c
+++ b/drivers/nvme/target/io-cmd-file.c
@@ -12,6 +12,7 @@
 
 #define NVMET_MAX_MPOOL_BVEC	16
 #define NVMET_MIN_MPOOL_OBJ	16
+#define NVMET_VERIFY_BUF_LEN	(BIO_MAX_PAGES << PAGE_SHIFT)
 
 int nvmet_file_ns_revalidate(struct nvmet_ns *ns)
 {
@@ -381,6 +382,153 @@ static void nvmet_file_execute_write_zeroes(struct nvmet_req *req)
 	schedule_work(&req->f.work);
 }
 
+static void __nvmet_req_to_verify_offset(struct nvmet_req *req, loff_t *offset,
+		ssize_t *len)
+{
+	struct nvme_verify_cmd *verify = &req->cmd->verify;
+
+	*offset = le64_to_cpu(verify->slba) << req->ns->blksize_shift;
+	*len = (((sector_t)le16_to_cpu(verify->length) + 1) <<
+			req->ns->blksize_shift);
+}
+
+static int do_buffered_io_emulate_verify(struct file *f, loff_t offset,
+		ssize_t len)
+{
+	char *buf = NULL;
+	int ret = 0;
+	ssize_t rc;
+
+	buf = kmalloc(NVMET_VERIFY_BUF_LEN, GFP_KERNEL);
+	if (!buf)
+		return -ENOMEM;
+
+	while (len > 0) {
+		ssize_t curr_len = min_t(ssize_t, len, NVMET_VERIFY_BUF_LEN);
+
+		rc = kernel_read(f, buf, curr_len, &offset);
+		if (rc != curr_len) {
+			pr_err("kernel_read %lu curr_len %lu\n", rc, curr_len);
+			ret = -EINVAL;
+			break;
+		}
+
+		len -= curr_len;
+		offset += curr_len;
+		cond_resched();
+	}
+
+	kfree(buf);
+	return ret;
+}
+
+static int do_direct_io_emulate_verify(struct file *f, loff_t offset,
+		ssize_t len)
+{
+	struct scatterlist *sgl = NULL;
+	struct bio_vec *bvec = NULL;
+	struct iov_iter iter = { 0 };
+	struct kiocb iocb = { 0 };
+	unsigned int sgl_nents;
+	ssize_t ret = 0;
+	int i;
+
+	while (len > 0) {
+		ssize_t curr_len = min_t(ssize_t, len, NVMET_VERIFY_BUF_LEN);
+		struct scatterlist *sg = NULL;
+		unsigned int bv_len = 0;
+		ssize_t rc;
+
+		sgl = sgl_alloc(curr_len, GFP_KERNEL, &sgl_nents);
+		if (!sgl) {
+			ret = -ENOMEM;
+			break;
+		}
+
+		bvec = kmalloc_array(sgl_nents, sizeof(struct bio_vec),
+				GFP_KERNEL);
+		if (!bvec) {
+			ret = -ENOMEM;
+			break;
+		}
+
+		for_each_sg(sgl, sg, sgl_nents, i) {
+			nvmet_file_init_bvec(&bvec[i], sg);
+			bv_len += sg->length;
+		}
+
+		if (bv_len != curr_len) {
+			pr_err("length mismatch sgl & bvec\n");
+			ret = -EINVAL;
+			break;
+		}
+
+		iocb.ki_pos = offset;
+		iocb.ki_filp = f;
+		iocb.ki_complete = NULL; /* Sync I/O */
+		iocb.ki_flags |= IOCB_DIRECT;
+
+		iov_iter_bvec(&iter, READ, bvec, sgl_nents, bv_len);
+
+		rc = call_read_iter(f, &iocb, &iter);
+		if (rc != curr_len) {
+			pr_err("read len mismatch expected %lu got %ld\n",
+					curr_len, rc);
+			ret = -EINVAL;
+			break;
+		}
+
+		cond_resched();
+
+		len -= curr_len;
+		offset += curr_len;
+
+		kfree(bvec);
+		sgl_free(sgl);
+		bvec = NULL;
+		sgl = NULL;
+		memset(&iocb, 0, sizeof(iocb));
+		memset(&iter, 0, sizeof(iter));
+	}
+
+	kfree(bvec);
+	sgl_free(sgl);
+	return ret;
+}
+
+static void nvmet_file_emulate_verify_work(struct work_struct *w)
+{
+	struct nvmet_req *req = container_of(w, struct nvmet_req, f.work);
+	loff_t offset;
+	ssize_t len;
+	int ret = 0;
+
+	__nvmet_req_to_verify_offset(req, &offset, &len);
+	if (!len)
+		goto out;
+
+	if (unlikely(offset + len > req->ns->size)) {
+		nvmet_req_complete(req, errno_to_nvme_status(req, -ENOSPC));
+		return;
+	}
+
+	if (req->ns->buffered_io)
+		ret = do_buffered_io_emulate_verify(req->ns->file, offset, len);
+	else
+		ret = do_direct_io_emulate_verify(req->ns->file, offset, len);
out:
+	nvmet_req_complete(req, errno_to_nvme_status(req, ret));
+}
+
+static void nvmet_file_execute_verify(struct nvmet_req *req)
+{
+	if (!nvmet_check_data_len_lte(req, 0))
+		return;
+
+	INIT_WORK(&req->f.work, nvmet_file_emulate_verify_work);
+	queue_work(verify_wq, &req->f.work);
+}
+
 u16 nvmet_file_parse_io_cmd(struct nvmet_req *req)
 {
 	struct nvme_command *cmd = req->cmd;
@@ -399,6 +547,9 @@ u16 nvmet_file_parse_io_cmd(struct nvmet_req *req)
 	case nvme_cmd_write_zeroes:
 		req->execute = nvmet_file_execute_write_zeroes;
 		return 0;
+	case nvme_cmd_verify:
+		req->execute = nvmet_file_execute_verify;
+		return 0;
 	default:
 		pr_err("unhandled cmd for file ns %d on qid %d\n",
 		       cmd->common.opcode, req->sq->qid);

From patchwork Thu Nov 4 06:46:33 2021
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 12602661
X-Patchwork-Delegate: snitzer@redhat.com
From: Chaitanya Kulkarni
Date: Wed, 3 Nov 2021 23:46:33 -0700
Message-ID: <20211104064634.4481-8-chaitanyak@nvidia.com>
In-Reply-To: <20211104064634.4481-1-chaitanyak@nvidia.com>
References: <20211104064634.4481-1-chaitanyak@nvidia.com>
Subject: [dm-devel] [RFC PATCH 7/8] null_blk: add REQ_OP_VERIFY support

Add a new module parameter and configfs attribute to configure how
REQ_OP_VERIFY is handled.
This is needed to test the newly added REQ_OP_VERIFY block layer operation.

Signed-off-by: Chaitanya Kulkarni
---
 drivers/block/null_blk/main.c     | 25 ++++++++++++++++++++++++-
 drivers/block/null_blk/null_blk.h |  1 +
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index d6c821d48090..36a5f5343cee 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -216,6 +216,10 @@ static unsigned int g_zone_nr_conv;
 module_param_named(zone_nr_conv, g_zone_nr_conv, uint, 0444);
 MODULE_PARM_DESC(zone_nr_conv, "Number of conventional zones when block device is zoned. Default: 0");
 
+static bool g_verify;
+module_param_named(verify, g_verify, bool, 0444);
+MODULE_PARM_DESC(verify, "Allow REQ_OP_VERIFY processing. Default: false");
+
 static unsigned int g_zone_max_open;
 module_param_named(zone_max_open, g_zone_max_open, uint, 0444);
 MODULE_PARM_DESC(zone_max_open, "Maximum number of open zones when block device is zoned. Default: 0 (no limit)");
@@ -358,6 +362,7 @@ NULLB_DEVICE_ATTR(blocking, bool, NULL);
 NULLB_DEVICE_ATTR(use_per_node_hctx, bool, NULL);
 NULLB_DEVICE_ATTR(memory_backed, bool, NULL);
 NULLB_DEVICE_ATTR(discard, bool, NULL);
+NULLB_DEVICE_ATTR(verify, bool, NULL);
 NULLB_DEVICE_ATTR(mbps, uint, NULL);
 NULLB_DEVICE_ATTR(cache_size, ulong, NULL);
 NULLB_DEVICE_ATTR(zoned, bool, NULL);
@@ -477,6 +482,7 @@ static struct configfs_attribute *nullb_device_attrs[] = {
 	&nullb_device_attr_power,
 	&nullb_device_attr_memory_backed,
 	&nullb_device_attr_discard,
+	&nullb_device_attr_verify,
 	&nullb_device_attr_mbps,
 	&nullb_device_attr_cache_size,
 	&nullb_device_attr_badblocks,
@@ -539,7 +545,7 @@ nullb_group_drop_item(struct config_group *group, struct config_item *item)
 static ssize_t memb_group_features_show(struct config_item *item, char *page)
 {
 	return snprintf(page, PAGE_SIZE,
-			"memory_backed,discard,bandwidth,cache,badblocks,zoned,zone_size,zone_capacity,zone_nr_conv,zone_max_open,zone_max_active,blocksize,max_sectors\n");
+			"memory_backed,discard,verify,bandwidth,cache,badblocks,zoned,zone_size,zone_capacity,zone_nr_conv,zone_max_open,zone_max_active,blocksize,max_sectors\n");
 }
 
 CONFIGFS_ATTR_RO(memb_group_, features);
@@ -601,6 +607,7 @@ static struct nullb_device *null_alloc_dev(void)
 	dev->use_per_node_hctx = g_use_per_node_hctx;
 	dev->zoned = g_zoned;
 	dev->zone_size = g_zone_size;
+	dev->verify = g_verify;
 	dev->zone_capacity = g_zone_capacity;
 	dev->zone_nr_conv = g_zone_nr_conv;
 	dev->zone_max_open = g_zone_max_open;
@@ -1165,6 +1172,9 @@ static int null_handle_rq(struct nullb_cmd *cmd)
 	struct req_iterator iter;
 	struct bio_vec bvec;
 
+	if (req_op(rq) == REQ_OP_VERIFY)
+		return 0;
+
 	spin_lock_irq(&nullb->lock);
 	rq_for_each_segment(bvec, rq, iter) {
 		len = bvec.bv_len;
@@ -1192,6 +1202,9 @@ static int null_handle_bio(struct nullb_cmd *cmd)
 	struct bio_vec bvec;
 	struct bvec_iter iter;
 
+	if (bio_op(bio) == REQ_OP_VERIFY)
+		return 0;
+
 	spin_lock_irq(&nullb->lock);
 	bio_for_each_segment(bvec, bio, iter) {
 		len = bvec.bv_len;
@@ -1609,6 +1622,15 @@ static void null_config_discard(struct nullb *nullb)
 	blk_queue_flag_set(QUEUE_FLAG_DISCARD, nullb->q);
 }
 
+static void null_config_verify(struct nullb *nullb)
+{
+	if (nullb->dev->verify == false)
+		return;
+
+	blk_queue_max_verify_sectors(nullb->q, UINT_MAX >> 9);
+	blk_queue_flag_set(QUEUE_FLAG_VERIFY, nullb->q);
+}
+
 static const struct block_device_operations null_bio_ops = {
 	.owner		= THIS_MODULE,
 	.submit_bio	= null_submit_bio,
@@ -1881,6 +1903,7 @@ static int null_add_dev(struct nullb_device *dev)
 		blk_queue_max_hw_sectors(nullb->q, dev->max_sectors);
 
 	null_config_discard(nullb);
+	null_config_verify(nullb);
 
 	sprintf(nullb->disk_name, "nullb%d", nullb->index);
 
diff --git a/drivers/block/null_blk/null_blk.h b/drivers/block/null_blk/null_blk.h
index 83504f3cc9d6..e6913c099e71 100644
--- a/drivers/block/null_blk/null_blk.h
+++ b/drivers/block/null_blk/null_blk.h
@@ -95,6 +95,7 @@ struct nullb_device {
 	bool power; /* power on/off the device */
 	bool memory_backed; /* if data is stored in memory */
 	bool discard; /* if support discard */
+	bool verify; /* if support verify */
 	bool zoned; /* if device is zoned */
 };
From patchwork Thu Nov 4 06:46:34 2021
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 12602655
X-Patchwork-Delegate: snitzer@redhat.com
From: Chaitanya Kulkarni
Date: Wed, 3 Nov 2021 23:46:34 -0700
Message-ID: <20211104064634.4481-9-chaitanyak@nvidia.com>
In-Reply-To: <20211104064634.4481-1-chaitanyak@nvidia.com>
References: <20211104064634.4481-1-chaitanyak@nvidia.com>
Subject: [dm-devel] [RFC PATCH 8/8] md: add support for REQ_OP_VERIFY
List-Id: device-mapper development
From: Chaitanya Kulkarni

Signed-off-by: Chaitanya Kulkarni
---
 drivers/md/dm-core.h          |  1 +
 drivers/md/dm-io.c            |  8 ++++++--
 drivers/md/dm-linear.c        | 11 ++++++++++-
 drivers/md/dm-mpath.c         |  1 +
 drivers/md/dm-rq.c            |  3 +++
 drivers/md/dm-stripe.c        |  1 +
 drivers/md/dm-table.c         | 36 +++++++++++++++++++++++++++++++++++
 drivers/md/dm.c               | 31 ++++++++++++++++++++++++++++++
 drivers/md/md-linear.c        | 10 ++++++++++
 drivers/md/md-multipath.c     |  1 +
 drivers/md/md.h               |  7 +++++++
 drivers/md/raid10.c           |  1 +
 drivers/md/raid5.c            |  1 +
 include/linux/device-mapper.h |  6 ++++++
 14 files changed, 115 insertions(+), 3 deletions(-)

diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
index 086d293c2b03..8a07ac9165ec 100644
--- a/drivers/md/dm-core.h
+++ b/drivers/md/dm-core.h
@@ -114,6 +114,7 @@ struct mapped_device {
 void disable_discard(struct mapped_device *md);
 void disable_write_same(struct mapped_device *md);
 void disable_write_zeroes(struct mapped_device *md);
+void disable_verify(struct mapped_device *md);
 
 static inline sector_t dm_get_size(struct mapped_device *md)
 {
diff --git a/drivers/md/dm-io.c b/drivers/md/dm-io.c
index 4312007d2d34..da09d092e2c1 100644
--- a/drivers/md/dm-io.c
+++ b/drivers/md/dm-io.c
@@ -317,8 +317,11 @@ static void do_region(int op, int op_flags, unsigned region,
 		special_cmd_max_sectors = q->limits.max_write_zeroes_sectors;
 	else if (op == REQ_OP_WRITE_SAME)
 		special_cmd_max_sectors = q->limits.max_write_same_sectors;
+	else if (op == REQ_OP_VERIFY)
+		special_cmd_max_sectors = q->limits.max_verify_sectors;
 	if ((op == REQ_OP_DISCARD || op == REQ_OP_WRITE_ZEROES ||
-	     op == REQ_OP_WRITE_SAME) && special_cmd_max_sectors == 0) {
+	     op == REQ_OP_VERIFY || op == REQ_OP_WRITE_SAME) &&
+	    special_cmd_max_sectors == 0) {
 		atomic_inc(&io->count);
 		dec_count(io, region, BLK_STS_NOTSUPP);
 		return;
@@ -335,6 +338,7 @@ static void do_region(int op, int op_flags, unsigned region,
 	switch (op) {
 	case REQ_OP_DISCARD:
 	case REQ_OP_WRITE_ZEROES:
+	case REQ_OP_VERIFY:
 		num_bvecs = 0;
 		break;
 	case REQ_OP_WRITE_SAME:
@@ -352,7 +356,7 @@ static void do_region(int op, int op_flags, unsigned region,
 		bio_set_op_attrs(bio, op, op_flags);
 		store_io_and_region_in_bio(bio, io, region);
 
-		if (op == REQ_OP_DISCARD || op == REQ_OP_WRITE_ZEROES) {
+		if (op == REQ_OP_DISCARD || op == REQ_OP_WRITE_ZEROES || op == REQ_OP_VERIFY) {
 			num_sectors = min_t(sector_t, special_cmd_max_sectors, remaining);
 			bio->bi_iter.bi_size = num_sectors << SECTOR_SHIFT;
 			remaining -= num_sectors;
diff --git a/drivers/md/dm-linear.c b/drivers/md/dm-linear.c
index 00774b5d7668..802c9cb917ae 100644
--- a/drivers/md/dm-linear.c
+++ b/drivers/md/dm-linear.c
@@ -62,6 +62,7 @@ static int linear_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	ti->num_secure_erase_bios = 1;
 	ti->num_write_same_bios = 1;
 	ti->num_write_zeroes_bios = 1;
+	ti->num_verify_bios = 1;
 	ti->private = lc;
 	return 0;
 
@@ -90,9 +91,17 @@ static void linear_map_bio(struct dm_target *ti, struct bio *bio)
 	struct linear_c *lc = ti->private;
 
 	bio_set_dev(bio, lc->dev->bdev);
-	if (bio_sectors(bio) || op_is_zone_mgmt(bio_op(bio)))
+	if (bio_sectors(bio) || op_is_zone_mgmt(bio_op(bio))) {
 		bio->bi_iter.bi_sector =
 			linear_map_sector(ti, bio->bi_iter.bi_sector);
+		if (bio_op(bio) == REQ_OP_VERIFY)
+			printk(KERN_INFO"dmrg: REQ_OP_VERIFY sector %10llu nr_sectors "
+			       "%10u %s %s\n",
+			       bio->bi_iter.bi_sector, bio->bi_iter.bi_size >> 9,
+			       bio->bi_bdev->bd_disk->disk_name,
+			       current->comm);
+
+	}
 }
 
 static int linear_map(struct dm_target *ti, struct bio *bio)
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index bced42f082b0..d6eb0d287032 100644
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -1255,6 +1255,7 @@ static int multipath_ctr(struct dm_target *ti, unsigned argc, char **argv)
 	ti->num_discard_bios = 1;
 	ti->num_write_same_bios = 1;
 	ti->num_write_zeroes_bios = 1;
+	ti->num_verify_bios = 1;
 	if (m->queue_mode == DM_TYPE_BIO_BASED)
 		ti->per_io_data_size = multipath_per_bio_data_size();
 	else
diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
index 13b4385f4d5a..eaf19f8c9fca 100644
--- a/drivers/md/dm-rq.c
+++ b/drivers/md/dm-rq.c
@@ -224,6 +224,9 @@ static void dm_done(struct request *clone, blk_status_t error, bool mapped)
 		else if (req_op(clone) == REQ_OP_WRITE_ZEROES &&
 			 !clone->q->limits.max_write_zeroes_sectors)
 			disable_write_zeroes(tio->md);
+		else if (req_op(clone) == REQ_OP_VERIFY &&
+			 !clone->q->limits.max_verify_sectors)
+			disable_verify(tio->md);
 	}
 
 	switch (r) {
diff --git a/drivers/md/dm-stripe.c b/drivers/md/dm-stripe.c
index df359d33cda8..199ee57290a2 100644
--- a/drivers/md/dm-stripe.c
+++ b/drivers/md/dm-stripe.c
@@ -159,6 +159,7 @@ static int stripe_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	ti->num_secure_erase_bios = stripes;
 	ti->num_write_same_bios = stripes;
 	ti->num_write_zeroes_bios = stripes;
+	ti->num_verify_bios = stripes;
 
 	sc->chunk_size = chunk_size;
 	if (chunk_size & (chunk_size - 1))
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 4acf2342f7ad..6a55c4c3b77a 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1709,6 +1709,36 @@ static bool dm_table_supports_nowait(struct dm_table *t)
 	return true;
 }
 
+static int device_not_verify_capable(struct dm_target *ti, struct dm_dev *dev,
+				     sector_t start, sector_t len, void *data)
+{
+	struct request_queue *q = bdev_get_queue(dev->bdev);
+
+	return q && !q->limits.max_verify_sectors;
+}
+
+static bool dm_table_supports_verify(struct dm_table *t)
+{
+	struct dm_target *ti;
+	unsigned i = 0;
+
+	while (i < dm_table_get_num_targets(t)) {
+		ti = dm_table_get_target(t, i++);
+
+		if (!ti->num_verify_bios)
+			return false;
+
+		if (!ti->type->iterate_devices ||
+		    ti->type->iterate_devices(ti, device_not_verify_capable, NULL))
+			return false;
+
+		printk(KERN_INFO"REQ_OP_VERIFY configured success for %s id %d\n",
+		       ti->type->name, i);
+	}
+
+	return true;
+}
+
 static int device_not_discard_capable(struct dm_target *ti, struct dm_dev *dev,
 				      sector_t start, sector_t len, void *data)
 {
@@ -1830,6 +1860,12 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	if (dm_table_supports_secure_erase(t))
 		blk_queue_flag_set(QUEUE_FLAG_SECERASE, q);
 
+	if (!dm_table_supports_verify(t)) {
+		blk_queue_flag_clear(QUEUE_FLAG_VERIFY, q);
+		q->limits.max_verify_sectors = 0;
+	} else
+		blk_queue_flag_set(QUEUE_FLAG_VERIFY, q);
+
 	if (dm_table_supports_flush(t, (1UL << QUEUE_FLAG_WC))) {
 		wc = true;
 		if (dm_table_supports_flush(t, (1UL << QUEUE_FLAG_FUA)))
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 479ec5bea09e..f70e387ce020 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -969,6 +969,15 @@ void disable_write_zeroes(struct mapped_device *md)
 	limits->max_write_zeroes_sectors = 0;
 }
 
+void disable_verify(struct mapped_device *md)
+{
+	struct queue_limits *limits = dm_get_queue_limits(md);
+
+	/* device doesn't really support VERIFY, disable it */
+	limits->max_verify_sectors = 0;
+	blk_queue_flag_clear(QUEUE_FLAG_VERIFY, md->queue);
+}
+
 static void clone_endio(struct bio *bio)
 {
 	blk_status_t error = bio->bi_status;
@@ -989,6 +998,9 @@ static void clone_endio(struct bio *bio)
 		else if (bio_op(bio) == REQ_OP_WRITE_ZEROES &&
 			 !q->limits.max_write_zeroes_sectors)
 			disable_write_zeroes(md);
+		else if (bio_op(bio) == REQ_OP_VERIFY &&
+			 !q->limits.max_verify_sectors)
+			disable_verify(md);
 	}
 
 	/*
@@ -1455,6 +1467,12 @@ static int __clone_and_map_data_bio(struct clone_info *ci, struct dm_target *ti,
 	return 0;
 }
 
+static unsigned get_num_verify_bios(struct dm_target *ti)
+{
+	printk(KERN_INFO"%s %d\n", __func__, __LINE__);
+	return ti->num_verify_bios;
+}
+
 static int __send_changing_extent_only(struct clone_info *ci, struct dm_target *ti,
 				       unsigned num_bios)
 {
@@ -1480,15 +1498,25 @@ static int __send_changing_extent_only(struct clone_info *ci, struct dm_target *
 	return 0;
 }
 
+static int __send_verify(struct clone_info *ci, struct dm_target *ti)
+{
+	printk(KERN_INFO"%s %d\n", __func__, __LINE__);
+	return __send_changing_extent_only(ci, ti, get_num_verify_bios(ti));
+}
+
 static bool is_abnormal_io(struct bio *bio)
 {
 	bool r = false;
 
+	if (bio_op(bio) == REQ_OP_VERIFY)
+		printk(KERN_INFO"%s %d\n", __func__, __LINE__);
+
 	switch (bio_op(bio)) {
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
 	case REQ_OP_WRITE_SAME:
 	case REQ_OP_WRITE_ZEROES:
+	case REQ_OP_VERIFY:
 		r = true;
 		break;
 	}
@@ -1515,6 +1543,9 @@ static bool __process_abnormal_io(struct clone_info *ci, struct dm_target *ti,
 	case REQ_OP_WRITE_ZEROES:
 		num_bios = ti->num_write_zeroes_bios;
 		break;
+	case REQ_OP_VERIFY:
+		num_bios = ti->num_verify_bios;
+		break;
 	default:
 		return false;
 	}
diff --git a/drivers/md/md-linear.c b/drivers/md/md-linear.c
index 63ed8329a98d..0d8355658f8f 100644
--- a/drivers/md/md-linear.c
+++ b/drivers/md/md-linear.c
@@ -65,6 +65,7 @@ static struct linear_conf *linear_conf(struct mddev *mddev, int raid_disks)
 	struct md_rdev *rdev;
 	int i, cnt;
 	bool discard_supported = false;
+	bool verify_supported = false;
 
 	conf = kzalloc(struct_size(conf, disks, raid_disks), GFP_KERNEL);
 	if (!conf)
@@ -99,6 +100,8 @@ static struct linear_conf *linear_conf(struct mddev *mddev, int raid_disks)
 
 		if (blk_queue_discard(bdev_get_queue(rdev->bdev)))
 			discard_supported = true;
+		if (blk_queue_verify(bdev_get_queue(rdev->bdev)))
+			verify_supported = true;
 	}
 	if (cnt != raid_disks) {
 		pr_warn("md/linear:%s: not enough drives present. Aborting!\n",
@@ -111,6 +114,12 @@ static struct linear_conf *linear_conf(struct mddev *mddev, int raid_disks)
 	else
 		blk_queue_flag_set(QUEUE_FLAG_DISCARD, mddev->queue);
 
+	if (!verify_supported)
+		blk_queue_flag_clear(QUEUE_FLAG_VERIFY, mddev->queue);
+	else
+		blk_queue_flag_set(QUEUE_FLAG_VERIFY, mddev->queue);
+
+
 	/*
 	 * Here we calculate the device offsets.
 	 */
@@ -261,6 +270,7 @@ static bool linear_make_request(struct mddev *mddev, struct bio *bio)
 			     bio_sector);
 	mddev_check_writesame(mddev, bio);
 	mddev_check_write_zeroes(mddev, bio);
+	mddev_check_verify(mddev, bio);
 	submit_bio_noacct(bio);
 	}
 	return true;
diff --git a/drivers/md/md-multipath.c b/drivers/md/md-multipath.c
index 776bbe542db5..2856fc80a8a1 100644
--- a/drivers/md/md-multipath.c
+++ b/drivers/md/md-multipath.c
@@ -131,6 +131,7 @@ static bool multipath_make_request(struct mddev *mddev, struct bio * bio)
 	mp_bh->bio.bi_private = mp_bh;
 	mddev_check_writesame(mddev, &mp_bh->bio);
 	mddev_check_write_zeroes(mddev, &mp_bh->bio);
+	mddev_check_verify(mddev, &mp_bh->bio);
 	submit_bio_noacct(&mp_bh->bio);
 	return true;
 }
diff --git a/drivers/md/md.h b/drivers/md/md.h
index bcbba1b5ec4a..f40b5a5bc862 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -802,6 +802,13 @@ static inline void mddev_check_write_zeroes(struct mddev *mddev, struct bio *bio
 		mddev->queue->limits.max_write_zeroes_sectors = 0;
 }
 
+static inline void mddev_check_verify(struct mddev *mddev, struct bio *bio)
+{
+	if (bio_op(bio) == REQ_OP_VERIFY &&
+	    !bio->bi_bdev->bd_disk->queue->limits.max_verify_sectors)
+		mddev->queue->limits.max_verify_sectors = 0;
+}
+
 struct mdu_array_info_s;
 struct mdu_disk_info_s;
 
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index e1eefbec15d4..2ba1214bec2e 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -3756,6 +3756,7 @@ static int raid10_run(struct mddev *mddev)
 					      mddev->chunk_sectors);
 		blk_queue_max_write_same_sectors(mddev->queue, 0);
 		blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
+		blk_queue_max_verify_sectors(mddev->queue, 0);
 		blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);
 		raid10_set_io_opt(conf);
 	}
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index a348b2adf2a9..d723dfa2a3cb 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -7718,6 +7718,7 @@ static int raid5_run(struct mddev *mddev)
 		blk_queue_max_write_same_sectors(mddev->queue, 0);
 		blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
+		blk_queue_max_verify_sectors(mddev->queue, 0);
 
 		rdev_for_each(rdev, mddev) {
 			disk_stack_limits(mddev->gendisk, rdev->bdev,
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index 61a66fb8ebb3..761228e234d9 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -302,6 +302,12 @@ struct dm_target {
 	 */
 	unsigned num_write_zeroes_bios;
 
+	/*
+	 * The number of VERIFY bios that will be submitted to the target.
+	 * The bio number can be accessed with dm_bio_get_target_bio_nr.
+	 */
+	unsigned num_verify_bios;
+
 	/*
 	 * The minimum number of extra bytes allocated in each io for the
 	 * target to use.