From patchwork Fri Dec 4 09:46:58 2020
X-Patchwork-Submitter: SelvaKumar S
X-Patchwork-Id: 11951491
From: SelvaKumar S
To: linux-nvme@lists.infradead.org
Cc: kbusch@kernel.org, axboe@kernel.dk, damien.lemoal@wdc.com, hch@lst.de,
	sagi@grimberg.me, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	dm-devel@redhat.com, snitzer@redhat.com, selvajove@gmail.com,
	nj.shetty@samsung.com, joshi.k@samsung.com, javier.gonz@samsung.com,
	SelvaKumar S
Subject: [RFC PATCH v2 1/2] block: add simple copy support
Date: Fri, 4 Dec 2020 15:16:58 +0530
Message-Id: <20201204094659.12732-2-selvakuma.s1@samsung.com>
In-Reply-To: <20201204094659.12732-1-selvakuma.s1@samsung.com>
X-Mailing-List: linux-block@vger.kernel.org

Add a new BLKCOPY ioctl that offloads the copying of multiple source
ranges to a destination to the device. The ioctl accepts a copy_range
structure that holds the destination, the number of source ranges, and a
pointer to an array of source ranges. Each range_entry contains the start
and length of a source range (in bytes).

Introduce REQ_OP_COPY (19), a no-merge copy offload operation. A bio is
created with the control information as its payload and submitted to the
device. REQ_OP_COPY is a write op and takes the zone write lock when
submitted to a zoned device.

Introduce queue limits for simple copy along with helper functions, and
expose the device limits as sysfs entries:
 - max_copy_sectors
 - max_copy_range_sectors
 - max_copy_nr_ranges

max_copy_sectors = 0 indicates that the device does not support copy.
Signed-off-by: SelvaKumar S
Signed-off-by: Kanchan Joshi
Signed-off-by: Nitesh Shetty
Signed-off-by: Javier González
---
 block/blk-core.c          |  94 ++++++++++++++++++++++++++---
 block/blk-lib.c           | 123 ++++++++++++++++++++++++++++++++++++++
 block/blk-merge.c         |   2 +
 block/blk-settings.c      |  11 ++++
 block/blk-sysfs.c         |  23 +++++++
 block/blk-zoned.c         |   1 +
 block/bounce.c            |   1 +
 block/ioctl.c             |  43 +++++++++++++
 include/linux/bio.h       |   1 +
 include/linux/blk_types.h |  15 +++++
 include/linux/blkdev.h    |  15 +++++
 include/uapi/linux/fs.h   |  13 ++++
 12 files changed, 334 insertions(+), 8 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 2db8bda43b6e..07d64514e77b 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -719,6 +719,17 @@ static noinline int should_fail_bio(struct bio *bio)
 }
 ALLOW_ERROR_INJECTION(should_fail_bio, ERRNO);
 
+static inline int bio_check_copy_eod(struct bio *bio, sector_t start,
+		sector_t nr_sectors, sector_t maxsector)
+{
+	if (nr_sectors && maxsector &&
+	    (nr_sectors > maxsector || start > maxsector - nr_sectors)) {
+		handle_bad_sector(bio, maxsector);
+		return -EIO;
+	}
+	return 0;
+}
+
 /*
  * Check whether this bio extends beyond the end of the device or partition.
  * This may well happen - the kernel calls bread() without checking the size of
@@ -737,6 +748,65 @@ static inline int bio_check_eod(struct bio *bio, sector_t maxsector)
 	return 0;
 }
 
+/*
+ * Check for copy limits and remap source ranges if needed.
+ */
+static int blk_check_copy(struct bio *bio)
+{
+	struct hd_struct *p = NULL;
+	struct request_queue *q = bio->bi_disk->queue;
+	struct blk_copy_payload *payload;
+	int i, maxsector, start_sect = 0, ret = -EIO;
+	unsigned short nr_range;
+
+	rcu_read_lock();
+
+	if (bio->bi_partno) {
+		p = __disk_get_part(bio->bi_disk, bio->bi_partno);
+		if (unlikely(!p))
+			goto out;
+		if (unlikely(bio_check_ro(bio, p)))
+			goto out;
+		maxsector = part_nr_sects_read(p);
+		start_sect = p->start_sect;
+	} else {
+		if (unlikely(bio_check_ro(bio, &bio->bi_disk->part0)))
+			goto out;
+		maxsector = get_capacity(bio->bi_disk);
+	}
+
+	payload = bio_data(bio);
+	nr_range = payload->copy_range;
+
+	/* cannot handle copy crossing nr_ranges limit */
+	if (payload->copy_range > q->limits.max_copy_nr_ranges)
+		goto out;
+
+	/* cannot handle copy more than copy limits */
+	if (payload->copy_size > q->limits.max_copy_sectors)
+		goto out;
+
+	/* check if copy length crosses eod */
+	if (unlikely(bio_check_copy_eod(bio, bio->bi_iter.bi_sector,
+					payload->copy_size, maxsector)))
+		goto out;
+	bio->bi_iter.bi_sector += start_sect;
+
+	for (i = 0; i < nr_range; i++) {
+		if (unlikely(bio_check_copy_eod(bio, payload->range[i].src,
+					payload->range[i].len, maxsector)))
+			goto out;
+		payload->range[i].src += start_sect;
+	}
+
+	if (p)
+		bio->bi_partno = 0;
+	ret = 0;
+out:
+	rcu_read_unlock();
+	return ret;
+}
+
 /*
  * Remap block n of partition p to block n+start(p) of the disk.
  */
@@ -825,14 +895,16 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio)
 	if (should_fail_bio(bio))
 		goto end_io;
 
-	if (bio->bi_partno) {
-		if (unlikely(blk_partition_remap(bio)))
-			goto end_io;
-	} else {
-		if (unlikely(bio_check_ro(bio, &bio->bi_disk->part0)))
-			goto end_io;
-		if (unlikely(bio_check_eod(bio, get_capacity(bio->bi_disk))))
-			goto end_io;
+	if (likely(!op_is_copy(bio->bi_opf))) {
+		if (bio->bi_partno) {
+			if (unlikely(blk_partition_remap(bio)))
+				goto end_io;
+		} else {
+			if (unlikely(bio_check_ro(bio, &bio->bi_disk->part0)))
+				goto end_io;
+			if (unlikely(bio_check_eod(bio, get_capacity(bio->bi_disk))))
+				goto end_io;
+		}
 	}
 
 	/*
@@ -856,6 +928,12 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio)
 		if (!blk_queue_discard(q))
 			goto not_supported;
 		break;
+	case REQ_OP_COPY:
+		if (!blk_queue_copy(q))
+			goto not_supported;
+		if (unlikely(blk_check_copy(bio)))
+			goto end_io;
+		break;
 	case REQ_OP_SECURE_ERASE:
 		if (!blk_queue_secure_erase(q))
 			goto not_supported;
diff --git a/block/blk-lib.c b/block/blk-lib.c
index e90614fd8d6a..96f727c4d0de 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -150,6 +150,129 @@ int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 }
 EXPORT_SYMBOL(blkdev_issue_discard);
 
+int __blkdev_issue_copy(struct block_device *bdev, sector_t dest,
+		sector_t nr_srcs, struct range_entry *rlist, gfp_t gfp_mask,
+		int flags, struct bio **biop)
+{
+	struct request_queue *q = bdev_get_queue(bdev);
+	struct bio *bio;
+	struct blk_copy_payload *payload;
+	sector_t bs_mask;
+	sector_t src_sects, len = 0, total_len = 0;
+	int i, ret, total_size;
+
+	if (!q)
+		return -ENXIO;
+
+	if (!nr_srcs)
+		return -EINVAL;
+
+	if (bdev_read_only(bdev))
+		return -EPERM;
+
+	if (!blk_queue_copy(q))
+		return -EOPNOTSUPP;
+
+	bs_mask = (bdev_logical_block_size(bdev) >> 9) - 1;
+	if (dest & bs_mask)
+		return -EINVAL;
+
+	total_size = struct_size(payload, range, nr_srcs);
+	payload = kmalloc(total_size, GFP_ATOMIC |
+				__GFP_NOWARN);
+	if (!payload)
+		return -ENOMEM;
+
+	bio = bio_alloc(gfp_mask, 1);
+	bio->bi_iter.bi_sector = dest;
+	bio->bi_opf = REQ_OP_COPY | REQ_NOMERGE;
+	bio_set_dev(bio, bdev);
+
+	payload->dest = dest;
+
+	for (i = 0; i < nr_srcs; i++) {
+		/* copy payload provided are in bytes */
+		src_sects = rlist[i].src;
+		if (src_sects & bs_mask) {
+			ret = -EINVAL;
+			goto err;
+		}
+		src_sects = src_sects >> SECTOR_SHIFT;
+
+		if (len & bs_mask) {
+			ret = -EINVAL;
+			goto err;
+		}
+
+		len = rlist[i].len >> SECTOR_SHIFT;
+		if (len > q->limits.max_copy_range_sectors) {
+			ret = -EINVAL;
+			goto err;
+		}
+
+		total_len += len;
+
+		WARN_ON_ONCE((src_sects << 9) > UINT_MAX);
+
+		payload->range[i].src = src_sects;
+		payload->range[i].len = len;
+	}
+
+	/* storing # of source ranges */
+	payload->copy_range = i;
+	/* storing copy len so far */
+	payload->copy_size = total_len;
+
+	ret = bio_add_page(bio, virt_to_page(payload), total_size,
+			   offset_in_page(payload));
+	if (ret != total_size) {
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	*biop = bio;
+	return 0;
+err:
+	kfree(payload);
+	bio_put(bio);
+	return ret;
+}
+EXPORT_SYMBOL(__blkdev_issue_copy);
+
+/**
+ * blkdev_issue_copy - queue a copy
+ * @bdev:	blockdev to issue copy for
+ * @dest:	dest sector
+ * @nr_srcs:	number of source ranges to copy
+ * @rlist:	list of range entries
+ * @gfp_mask:	memory allocation flags (for bio_alloc)
+ * @flags:	BLKDEV_COPY_* flags to control behaviour //TODO
+ *
+ * Description:
+ *    Issue a copy request for dest sector with source in rlist
+ */
+int blkdev_issue_copy(struct block_device *bdev, sector_t dest,
+		int nr_srcs, struct range_entry *rlist,
+		gfp_t gfp_mask, unsigned long flags)
+{
+	struct bio *bio = NULL;
+	int ret;
+
+	ret = __blkdev_issue_copy(bdev, dest, nr_srcs, rlist, gfp_mask, flags,
+			&bio);
+	if (!ret && bio) {
+		ret = submit_bio_wait(bio);
+		if (ret == -EOPNOTSUPP)
+			ret = 0;
+
+		kfree(page_address(bio_first_bvec_all(bio)->bv_page) +
+				bio_first_bvec_all(bio)->bv_offset);
+		bio_put(bio);
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL(blkdev_issue_copy);
+
 /**
  * __blkdev_issue_write_same - generate number of bios with same page
  * @bdev:	target blockdev
diff --git a/block/blk-merge.c b/block/blk-merge.c
index bcf5e4580603..a16e7598d6ad 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -301,6 +301,8 @@ void __blk_queue_split(struct bio **bio, unsigned int *nr_segs)
 	struct bio *split = NULL;
 
 	switch (bio_op(*bio)) {
+	case REQ_OP_COPY:
+		break;
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
 		split = blk_bio_discard_split(q, *bio, &q->bio_split, nr_segs);
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 9741d1d83e98..18e357939ed4 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -60,6 +60,9 @@ void blk_set_default_limits(struct queue_limits *lim)
 	lim->io_opt = 0;
 	lim->misaligned = 0;
 	lim->zoned = BLK_ZONED_NONE;
+	lim->max_copy_sectors = 0;
+	lim->max_copy_nr_ranges = 0;
+	lim->max_copy_range_sectors = 0;
 }
 EXPORT_SYMBOL(blk_set_default_limits);
 
@@ -549,6 +552,14 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 	t->io_opt = lcm_not_zero(t->io_opt, b->io_opt);
 	t->chunk_sectors = lcm_not_zero(t->chunk_sectors, b->chunk_sectors);
 
+	/* copy limits */
+	t->max_copy_sectors = min_not_zero(t->max_copy_sectors,
+					   b->max_copy_sectors);
+	t->max_copy_range_sectors = min_not_zero(t->max_copy_range_sectors,
+					   b->max_copy_range_sectors);
+	t->max_copy_nr_ranges = min_not_zero(t->max_copy_nr_ranges,
+					   b->max_copy_nr_ranges);
+
 	/* Physical block size a multiple of the logical block size? */
 	if (t->physical_block_size & (t->logical_block_size - 1)) {
 		t->physical_block_size = t->logical_block_size;
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index b513f1683af0..e5aabb6a3ac1 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -166,6 +166,23 @@ static ssize_t queue_discard_granularity_show(struct request_queue *q, char *pag
 	return queue_var_show(q->limits.discard_granularity, page);
 }
 
+static ssize_t queue_max_copy_sectors_show(struct request_queue *q, char *page)
+{
+	return queue_var_show(q->limits.max_copy_sectors, page);
+}
+
+static ssize_t queue_max_copy_range_sectors_show(struct request_queue *q,
+		char *page)
+{
+	return queue_var_show(q->limits.max_copy_range_sectors, page);
+}
+
+static ssize_t queue_max_copy_nr_ranges_show(struct request_queue *q,
+		char *page)
+{
+	return queue_var_show(q->limits.max_copy_nr_ranges, page);
+}
+
 static ssize_t queue_discard_max_hw_show(struct request_queue *q, char *page)
 {
@@ -590,6 +607,9 @@ QUEUE_RO_ENTRY(queue_zoned, "zoned");
 QUEUE_RO_ENTRY(queue_nr_zones, "nr_zones");
 QUEUE_RO_ENTRY(queue_max_open_zones, "max_open_zones");
 QUEUE_RO_ENTRY(queue_max_active_zones, "max_active_zones");
+QUEUE_RO_ENTRY(queue_max_copy_sectors, "max_copy_sectors");
+QUEUE_RO_ENTRY(queue_max_copy_range_sectors, "max_copy_range_sectors");
+QUEUE_RO_ENTRY(queue_max_copy_nr_ranges, "max_copy_nr_ranges");
 
 QUEUE_RW_ENTRY(queue_nomerges, "nomerges");
 QUEUE_RW_ENTRY(queue_rq_affinity, "rq_affinity");
@@ -636,6 +656,9 @@ static struct attribute *queue_attrs[] = {
 	&queue_discard_max_entry.attr,
 	&queue_discard_max_hw_entry.attr,
 	&queue_discard_zeroes_data_entry.attr,
+	&queue_max_copy_sectors_entry.attr,
+	&queue_max_copy_range_sectors_entry.attr,
+	&queue_max_copy_nr_ranges_entry.attr,
 	&queue_write_same_max_entry.attr,
 	&queue_write_zeroes_max_entry.attr,
 	&queue_zone_append_max_entry.attr,
diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 6817a673e5ce..6e5fef3cc615 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -75,6 +75,7 @@ bool blk_req_needs_zone_write_lock(struct request *rq)
 	case REQ_OP_WRITE_ZEROES:
 	case REQ_OP_WRITE_SAME:
 	case REQ_OP_WRITE:
+	case REQ_OP_COPY:
 		return blk_rq_zone_is_seq(rq);
 	default:
 		return false;
diff --git a/block/bounce.c b/block/bounce.c
index 162a6eee8999..7fbdc52decb3 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -254,6 +254,7 @@ static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask,
 	bio->bi_iter.bi_size	= bio_src->bi_iter.bi_size;
 
 	switch (bio_op(bio)) {
+	case REQ_OP_COPY:
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
 	case REQ_OP_WRITE_ZEROES:
diff --git a/block/ioctl.c b/block/ioctl.c
index 6b785181344f..a4a507d85e56 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -142,6 +142,47 @@ static int blk_ioctl_discard(struct block_device *bdev, fmode_t mode,
 			GFP_KERNEL, flags);
 }
 
+static int blk_ioctl_copy(struct block_device *bdev, fmode_t mode,
+		unsigned long arg, unsigned long flags)
+{
+	struct copy_range crange;
+	struct range_entry *rlist;
+	struct request_queue *q = bdev_get_queue(bdev);
+	sector_t dest;
+	int ret;
+
+	if (!(mode & FMODE_WRITE))
+		return -EBADF;
+
+	if (!blk_queue_copy(q))
+		return -EOPNOTSUPP;
+
+	if (copy_from_user(&crange, (void __user *)arg, sizeof(crange)))
+		return -EFAULT;
+
+	if (crange.dest & ((1 << SECTOR_SHIFT) - 1))
+		return -EFAULT;
+	dest = crange.dest >> SECTOR_SHIFT;
+
+	rlist = kmalloc_array(crange.nr_range, sizeof(*rlist),
+			GFP_ATOMIC | __GFP_NOWARN);
+
+	if (!rlist)
+		return -ENOMEM;
+
+	if (copy_from_user(rlist, (void __user *)crange.range_list,
+			sizeof(*rlist) * crange.nr_range)) {
+		ret = -EFAULT;
+		goto out;
+	}
+
+	ret = blkdev_issue_copy(bdev, dest, crange.nr_range,
+			rlist, GFP_KERNEL, flags);
+out:
+	kfree(rlist);
+	return ret;
+}
+
 static int blk_ioctl_zeroout(struct block_device *bdev, fmode_t mode,
 		unsigned long arg)
 {
@@ -467,6 +508,8 @@ static int blkdev_common_ioctl(struct block_device *bdev, fmode_t mode,
 	case BLKSECDISCARD:
 		return blk_ioctl_discard(bdev, mode, arg,
 				BLKDEV_DISCARD_SECURE);
+	case BLKCOPY:
+		return blk_ioctl_copy(bdev, mode, arg, 0);
 	case BLKZEROOUT:
 		return blk_ioctl_zeroout(bdev, mode, arg);
 	case BLKREPORTZONE:
diff --git a/include/linux/bio.h b/include/linux/bio.h
index ecf67108f091..7e40a37f0ee5 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -71,6 +71,7 @@ static inline bool bio_has_data(struct bio *bio)
 static inline bool bio_no_advance_iter(const struct bio *bio)
 {
 	return bio_op(bio) == REQ_OP_DISCARD ||
+	       bio_op(bio) == REQ_OP_COPY ||
 	       bio_op(bio) == REQ_OP_SECURE_ERASE ||
 	       bio_op(bio) == REQ_OP_WRITE_SAME ||
 	       bio_op(bio) == REQ_OP_WRITE_ZEROES;
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index d9b69bbde5cc..4ecb9c16702d 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -360,6 +360,8 @@ enum req_opf {
 	REQ_OP_ZONE_RESET	= 15,
 	/* reset all the zone present on the device */
 	REQ_OP_ZONE_RESET_ALL	= 17,
+	/* copy ranges within device */
+	REQ_OP_COPY		= 19,
 
 	/* SCSI passthrough using struct scsi_request */
 	REQ_OP_SCSI_IN		= 32,
@@ -486,6 +488,11 @@ static inline bool op_is_discard(unsigned int op)
 	return (op & REQ_OP_MASK) == REQ_OP_DISCARD;
 }
 
+static inline bool op_is_copy(unsigned int op)
+{
+	return (op & REQ_OP_MASK) == REQ_OP_COPY;
+}
+
 /*
  * Check if a bio or request operation is a zone management operation, with
  * the exception of REQ_OP_ZONE_RESET_ALL which is treated as a special case
@@ -545,4 +552,12 @@ struct blk_rq_stat {
 	u64 batch;
 };
 
+struct blk_copy_payload {
+	sector_t	dest;
+	int		copy_range;
+	int		copy_size;
+	int		err;
+	struct range_entry	range[];
+};
+
 #endif /* __LINUX_BLK_TYPES_H */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 05b346a68c2e..dbeaeebf41c4 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -340,10 +340,13 @@ struct queue_limits {
 	unsigned int		max_zone_append_sectors;
 	unsigned int		discard_granularity;
 	unsigned int		discard_alignment;
+	unsigned int		max_copy_sectors;
 
 	unsigned short		max_segments;
 	unsigned short		max_integrity_segments;
 	unsigned short		max_discard_segments;
+	unsigned short		max_copy_range_sectors;
+	unsigned short		max_copy_nr_ranges;
 
 	unsigned char		misaligned;
 	unsigned char		discard_misaligned;
@@ -625,6 +628,7 @@ struct request_queue {
 #define QUEUE_FLAG_RQ_ALLOC_TIME 27	/* record rq->alloc_time_ns */
 #define QUEUE_FLAG_HCTX_ACTIVE	28	/* at least one blk-mq hctx is active */
 #define QUEUE_FLAG_NOWAIT	29	/* device supports NOWAIT */
+#define QUEUE_FLAG_COPY		30	/* supports copy */
 
 #define QUEUE_FLAG_MQ_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
 				 (1 << QUEUE_FLAG_SAME_COMP) |		\
@@ -647,6 +651,7 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 #define blk_queue_io_stat(q)	test_bit(QUEUE_FLAG_IO_STAT, &(q)->queue_flags)
 #define blk_queue_add_random(q)	test_bit(QUEUE_FLAG_ADD_RANDOM, &(q)->queue_flags)
 #define blk_queue_discard(q)	test_bit(QUEUE_FLAG_DISCARD, &(q)->queue_flags)
+#define blk_queue_copy(q)	test_bit(QUEUE_FLAG_COPY, &(q)->queue_flags)
 #define blk_queue_zone_resetall(q)	\
 	test_bit(QUEUE_FLAG_ZONE_RESETALL, &(q)->queue_flags)
 #define blk_queue_secure_erase(q)	\
@@ -1059,6 +1064,9 @@ static inline unsigned int blk_queue_get_max_sectors(struct request_queue *q,
 		return min(q->limits.max_discard_sectors,
 			   UINT_MAX >> SECTOR_SHIFT);
 
+	if (unlikely(op == REQ_OP_COPY))
+		return q->limits.max_copy_sectors;
+
 	if (unlikely(op == REQ_OP_WRITE_SAME))
 		return q->limits.max_write_same_sectors;
 
@@ -1330,6 +1338,13 @@ extern int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 		sector_t nr_sects, gfp_t gfp_mask, int flags,
 		struct bio **biop);
 
+extern int __blkdev_issue_copy(struct block_device *bdev, sector_t dest,
+		sector_t nr_srcs, struct range_entry *rlist, gfp_t gfp_mask,
+		int flags, struct bio **biop);
+extern int blkdev_issue_copy(struct block_device *bdev, sector_t dest,
+		int nr_srcs, struct range_entry *rlist,
+		gfp_t gfp_mask, unsigned long flags);
+
 #define BLKDEV_ZERO_NOUNMAP	(1 << 0)  /* do not free blocks */
 #define BLKDEV_ZERO_NOFALLBACK	(1 << 1)  /* don't write explicit zeroes */
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index f44eb0a04afd..5cadb176317a 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -64,6 +64,18 @@ struct fstrim_range {
 	__u64 minlen;
 };
 
+struct range_entry {
+	__u64 src;
+	__u64 len;
+};
+
+struct copy_range {
+	__u64 dest;
+	__u64 nr_range;
+	__u64 range_list;
+	__u64 rsvd;
+};
+
 /* extent-same (dedupe) ioctls; these MUST match the btrfs ioctl definitions */
 #define FILE_DEDUPE_RANGE_SAME		0
 #define FILE_DEDUPE_RANGE_DIFFERS	1
@@ -184,6 +196,7 @@ struct fsxattr {
 #define BLKSECDISCARD _IO(0x12,125)
 #define BLKROTATIONAL _IO(0x12,126)
 #define BLKZEROOUT _IO(0x12,127)
+#define BLKCOPY _IOWR(0x12, 128, struct copy_range)
 /*
  * A jump here: 130-131 are reserved for zoned block devices
  * (see uapi/linux/blkzoned.h)

From patchwork Fri Dec 4 09:46:59 2020
X-Patchwork-Submitter: SelvaKumar S
X-Patchwork-Id: 11951493
From: SelvaKumar S
To: linux-nvme@lists.infradead.org
Cc: kbusch@kernel.org, axboe@kernel.dk, damien.lemoal@wdc.com, hch@lst.de,
	sagi@grimberg.me, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	dm-devel@redhat.com, snitzer@redhat.com, selvajove@gmail.com,
	nj.shetty@samsung.com, joshi.k@samsung.com, javier.gonz@samsung.com,
	SelvaKumar S
Subject: [RFC PATCH v2 2/2] nvme: add simple copy support
Date: Fri, 4 Dec 2020 15:16:59 +0530
Message-Id: <20201204094659.12732-3-selvakuma.s1@samsung.com>
In-Reply-To: <20201204094659.12732-1-selvakuma.s1@samsung.com>
X-Mailing-List: linux-block@vger.kernel.org

Add support for TP 4065a ("Simple Copy Command"), v2020.05.04 ("Ratified").

The implementation uses the payload passed from the block layer to form
the simple copy command, and sets the device copy limits as the queue
limits.
Signed-off-by: SelvaKumar S
Signed-off-by: Kanchan Joshi
Signed-off-by: Nitesh Shetty
Signed-off-by: Javier González
---
 drivers/nvme/host/core.c | 87 ++++++++++++++++++++++++++++++++++++++++
 include/linux/nvme.h     | 43 ++++++++++++++++++--
 2 files changed, 127 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 9b6ebeb29cca..ff45e57223f0 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -647,6 +647,65 @@ static inline void nvme_setup_flush(struct nvme_ns *ns,
 	cmnd->common.nsid = cpu_to_le32(ns->head->ns_id);
 }
 
+static inline blk_status_t nvme_setup_copy(struct nvme_ns *ns,
+		struct request *req, struct nvme_command *cmnd)
+{
+	struct nvme_ctrl *ctrl = ns->ctrl;
+	struct nvme_copy_range *range = NULL;
+	struct blk_copy_payload *payload;
+	unsigned short nr_range = 0;
+	u16 control = 0, ssrl;
+	u32 dsmgmt = 0;
+	u64 slba;
+	int i;
+
+	payload = bio_data(req->bio);
+	nr_range = payload->copy_range;
+
+	if (req->cmd_flags & REQ_FUA)
+		control |= NVME_RW_FUA;
+
+	if (req->cmd_flags & REQ_FAILFAST_DEV)
+		control |= NVME_RW_LR;
+
+	cmnd->copy.opcode = nvme_cmd_copy;
+	cmnd->copy.nsid = cpu_to_le32(ns->head->ns_id);
+	cmnd->copy.sdlba = cpu_to_le64(blk_rq_pos(req) >> (ns->lba_shift - 9));
+
+	range = kmalloc_array(nr_range, sizeof(*range),
+			GFP_ATOMIC | __GFP_NOWARN);
+	if (!range)
+		return BLK_STS_RESOURCE;
+
+	for (i = 0; i < nr_range; i++) {
+		slba = payload->range[i].src;
+		slba = slba >> (ns->lba_shift - 9);
+
+		ssrl = payload->range[i].len;
+		ssrl = ssrl >> (ns->lba_shift - 9);
+
+		range[i].slba = cpu_to_le64(slba);
+		range[i].nlb = cpu_to_le16(ssrl - 1);
+	}
+
+	cmnd->copy.nr_range = nr_range - 1;
+
+	req->special_vec.bv_page = virt_to_page(range);
+	req->special_vec.bv_offset = offset_in_page(range);
+	req->special_vec.bv_len = sizeof(*range) * nr_range;
+	req->rq_flags |= RQF_SPECIAL_PAYLOAD;
+
+	if (ctrl->nr_streams)
+		nvme_assign_write_stream(ctrl, req, &control, &dsmgmt);
+
+	//TBD end-to-end
+
+	cmnd->rw.control = cpu_to_le16(control);
+	cmnd->rw.dsmgmt = cpu_to_le32(dsmgmt);
+
+	return BLK_STS_OK;
+}
+
 static blk_status_t nvme_setup_discard(struct nvme_ns *ns, struct request *req,
 		struct nvme_command *cmnd)
 {
@@ -829,6 +888,9 @@ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req,
 	case REQ_OP_DISCARD:
 		ret = nvme_setup_discard(ns, req, cmd);
 		break;
+	case REQ_OP_COPY:
+		ret = nvme_setup_copy(ns, req, cmd);
+		break;
 	case REQ_OP_READ:
 		ret = nvme_setup_rw(ns, req, cmd, nvme_cmd_read);
 		break;
@@ -1850,6 +1912,29 @@ static void nvme_config_discard(struct gendisk *disk, struct nvme_ns *ns)
 		blk_queue_max_write_zeroes_sectors(queue, UINT_MAX);
 }
 
+static void nvme_config_copy(struct gendisk *disk, struct nvme_ns *ns,
+				struct nvme_id_ns *id)
+{
+	struct nvme_ctrl *ctrl = ns->ctrl;
+	struct request_queue *queue = disk->queue;
+
+	if (!(ctrl->oncs & NVME_CTRL_ONCS_COPY)) {
+		queue->limits.max_copy_sectors = 0;
+		queue->limits.max_copy_range_sectors = 0;
+		queue->limits.max_copy_nr_ranges = 0;
+		blk_queue_flag_clear(QUEUE_FLAG_COPY, queue);
+		return;
+	}
+
+	/* setting copy limits */
+	blk_queue_flag_test_and_set(QUEUE_FLAG_COPY, queue);
+	queue->limits.max_copy_sectors = le64_to_cpu(id->mcl) *
+		(1 << (ns->lba_shift - 9));
+	queue->limits.max_copy_range_sectors = le32_to_cpu(id->mssrl) *
+		(1 << (ns->lba_shift - 9));
+	queue->limits.max_copy_nr_ranges = id->msrc + 1;
+}
+
 static void nvme_config_write_zeroes(struct gendisk *disk, struct nvme_ns *ns)
 {
 	u64 max_blocks;
@@ -2045,6 +2130,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
 	set_capacity_and_notify(disk, capacity);
 
 	nvme_config_discard(disk, ns);
+	nvme_config_copy(disk, ns, id);
 	nvme_config_write_zeroes(disk, ns);
 
 	if (id->nsattr & NVME_NS_ATTR_RO)
@@ -4616,6 +4702,7 @@ static inline void _nvme_check_size(void)
 	BUILD_BUG_ON(sizeof(struct nvme_download_firmware) != 64);
 	BUILD_BUG_ON(sizeof(struct nvme_format_cmd) != 64);
 	BUILD_BUG_ON(sizeof(struct nvme_dsm_cmd) != 64);
+ BUILD_BUG_ON(sizeof(struct nvme_copy_command) != 64); BUILD_BUG_ON(sizeof(struct nvme_write_zeroes_cmd) != 64); BUILD_BUG_ON(sizeof(struct nvme_abort_cmd) != 64); BUILD_BUG_ON(sizeof(struct nvme_get_log_page_command) != 64); diff --git a/include/linux/nvme.h b/include/linux/nvme.h index d92535997687..11ed72a2164d 100644 --- a/include/linux/nvme.h +++ b/include/linux/nvme.h @@ -289,7 +289,7 @@ struct nvme_id_ctrl { __u8 nvscc; __u8 nwpc; __le16 acwu; - __u8 rsvd534[2]; + __le16 ocfs; __le32 sgls; __le32 mnan; __u8 rsvd544[224]; @@ -314,6 +314,7 @@ enum { NVME_CTRL_ONCS_WRITE_ZEROES = 1 << 3, NVME_CTRL_ONCS_RESERVATIONS = 1 << 5, NVME_CTRL_ONCS_TIMESTAMP = 1 << 6, + NVME_CTRL_ONCS_COPY = 1 << 8, NVME_CTRL_VWC_PRESENT = 1 << 0, NVME_CTRL_OACS_SEC_SUPP = 1 << 0, NVME_CTRL_OACS_DIRECTIVES = 1 << 5, @@ -362,7 +363,10 @@ struct nvme_id_ns { __le16 npdg; __le16 npda; __le16 nows; - __u8 rsvd74[18]; + __le16 mssrl; + __le32 mcl; + __u8 msrc; + __u8 rsvd91[11]; __le32 anagrpid; __u8 rsvd96[3]; __u8 nsattr; @@ -673,6 +677,7 @@ enum nvme_opcode { nvme_cmd_resv_report = 0x0e, nvme_cmd_resv_acquire = 0x11, nvme_cmd_resv_release = 0x15, + nvme_cmd_copy = 0x19, nvme_cmd_zone_mgmt_send = 0x79, nvme_cmd_zone_mgmt_recv = 0x7a, nvme_cmd_zone_append = 0x7d, @@ -691,7 +696,8 @@ enum nvme_opcode { nvme_opcode_name(nvme_cmd_resv_register), \ nvme_opcode_name(nvme_cmd_resv_report), \ nvme_opcode_name(nvme_cmd_resv_acquire), \ - nvme_opcode_name(nvme_cmd_resv_release)) + nvme_opcode_name(nvme_cmd_resv_release), \ + nvme_opcode_name(nvme_cmd_copy)) /* @@ -863,6 +869,36 @@ struct nvme_dsm_range { __le64 slba; }; +struct nvme_copy_command { + __u8 opcode; + __u8 flags; + __u16 command_id; + __le32 nsid; + __u64 rsvd2; + __le64 metadata; + union nvme_data_ptr dptr; + __le64 sdlba; + __u8 nr_range; + __u8 rsvd12; + __le16 control; + __le16 rsvd13; + __le16 dspec; + __le32 ilbrt; + __le16 lbat; + __le16 lbatm; +}; + +struct nvme_copy_range { + __le64 rsvd0; + __le64 slba; + __le16 nlb; + __le16 
rsvd18; + __le32 rsvd20; + __le32 eilbrt; + __le16 elbat; + __le16 elbatm; +}; + struct nvme_write_zeroes_cmd { __u8 opcode; __u8 flags; @@ -1400,6 +1436,7 @@ struct nvme_command { struct nvme_download_firmware dlfw; struct nvme_format_cmd format; struct nvme_dsm_cmd dsm; + struct nvme_copy_command copy; struct nvme_write_zeroes_cmd write_zeroes; struct nvme_zone_mgmt_send_cmd zms; struct nvme_zone_mgmt_recv_cmd zmr;