From patchwork Tue Apr 26 10:12:29 2022
X-Patchwork-Submitter: Nitesh Shetty
X-Patchwork-Id: 12826983
From: Nitesh Shetty
Cc: chaitanyak@nvidia.com, linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
    dm-devel@redhat.com, linux-nvme@lists.infradead.org, linux-fsdevel@vger.kernel.org,
    axboe@kernel.dk, msnitzer@redhat.com, bvanassche@acm.org, martin.petersen@oracle.com,
    hare@suse.de, kbusch@kernel.org, hch@lst.de, Frederick.Knight@netapp.com,
    osandov@fb.com, lsf-pc@lists.linux-foundation.org, djwong@kernel.org,
    josef@toxicpanda.com, clm@fb.com, dsterba@suse.com, tytso@mit.edu, jack@suse.com,
    nitheshshetty@gmail.com, gost.dev@samsung.com, linux-kernel@vger.kernel.org
Subject: [PATCH v4 01/10] block: Introduce queue limits for copy-offload support
Date: Tue, 26 Apr 2022 15:42:29 +0530
Message-Id: <20220426101241.30100-2-nj.shetty@samsung.com>
In-Reply-To: <20220426101241.30100-1-nj.shetty@samsung.com>
References: <20220426101241.30100-1-nj.shetty@samsung.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

Add device limits as sysfs entries,
	- copy_offload (RW)
	- copy_max_bytes (RW)
	- copy_max_hw_bytes (RO)
	- copy_max_range_bytes (RW)
	- copy_max_range_hw_bytes (RO)
	- copy_max_nr_ranges (RW)
	- copy_max_nr_ranges_hw (RO)

The above limits help to split the copy payload in the block layer.
copy_offload, used for setting copy offload(1) or emulation(0).
copy_max_bytes: maximum total length of copy in single payload.
copy_max_range_bytes: maximum length in a single entry.
copy_max_nr_ranges: maximum number of entries in a payload.
copy_max_*_hw_*: Reflects the device supported maximum limits.

Signed-off-by: Nitesh Shetty
Signed-off-by: Kanchan Joshi
Signed-off-by: Arnav Dawn
Reviewed-by: Hannes Reinecke
---
 Documentation/ABI/stable/sysfs-block |  83 ++++++++++++++++
 block/blk-settings.c                 |  59 ++++++++++++
 block/blk-sysfs.c                    | 138 +++++++++++++++++++++++++++
 include/linux/blkdev.h               |  13 +++
 4 files changed, 293 insertions(+)

diff --git a/Documentation/ABI/stable/sysfs-block b/Documentation/ABI/stable/sysfs-block
index e8797cd09aff..65e64b5a0105 100644
--- a/Documentation/ABI/stable/sysfs-block
+++ b/Documentation/ABI/stable/sysfs-block
@@ -155,6 +155,89 @@ Description:
 		last zone of the device which may be smaller.
 
 
+What:		/sys/block/<disk>/queue/copy_offload
+Date:		April 2022
+Contact:	linux-block@vger.kernel.org
+Description:
+		[RW] When read, this file shows whether offloading copy to
+		device is enabled (1) or disabled (0). Writing '0' to this
+		file will disable offloading copies for this device.
+		Writing any '1' value will enable this feature.
+
+
+What:		/sys/block/<disk>/queue/copy_max_bytes
+Date:		April 2022
+Contact:	linux-block@vger.kernel.org
+Description:
+		[RW] While 'copy_max_hw_bytes' is the hardware limit for the
+		device, 'copy_max_bytes' setting is the software limit.
+		Setting this value lower will make Linux issue smaller size
+		copies.
+
+
+What:		/sys/block/<disk>/queue/copy_max_hw_bytes
+Date:		April 2022
+Contact:	linux-block@vger.kernel.org
+Description:
+		[RO] Devices that support offloading copy functionality may have
+		internal limits on the number of bytes that can be offloaded
+		in a single operation. The `copy_max_hw_bytes`
+		parameter is set by the device driver to the maximum number of
+		bytes that can be copied in a single operation.
+		Copy requests issued to the device must not exceed this limit.
+		A value of 0 means that the device does not
+		support copy offload.
+
+
+What:		/sys/block/<disk>/queue/copy_max_nr_ranges
+Date:		April 2022
+Contact:	linux-block@vger.kernel.org
+Description:
+		[RW] While 'copy_max_nr_ranges_hw' is the hardware limit for the
+		device, 'copy_max_nr_ranges' setting is the software limit.
+
+
+What:		/sys/block/<disk>/queue/copy_max_nr_ranges_hw
+Date:		April 2022
+Contact:	linux-block@vger.kernel.org
+Description:
+		[RO] Devices that support offloading copy functionality may have
+		internal limits on the number of ranges that can be offloaded
+		in a single copy operation.
+		A range is a tuple of source, destination and length of data
+		to be copied. The `copy_max_nr_ranges_hw` parameter is set by
+		the device driver to the maximum number of ranges that can be
+		copied in a single operation. Copy requests issued to the device
+		must not exceed this limit. A value of 0 means that the device
+		does not support copy offload.
+
+
+What:		/sys/block/<disk>/queue/copy_max_range_bytes
+Date:		April 2022
+Contact:	linux-block@vger.kernel.org
+Description:
+		[RW] While 'copy_max_range_hw_bytes' is the hardware limit for
+		the device, 'copy_max_range_bytes' setting is the software
+		limit.
+
+
+What:		/sys/block/<disk>/queue/copy_max_range_hw_bytes
+Date:		April 2022
+Contact:	linux-block@vger.kernel.org
+Description:
+		[RO] Devices that support offloading copy functionality may have
+		internal limits on the size of data that can be copied in a
+		single range within a single copy operation.
+		A range is a tuple of source, destination and length of data to
+		be copied. The `copy_max_range_hw_bytes` parameter is set by the
+		device driver to the maximum length in bytes of a range
+		that can be copied in an operation.
+		Copy requests issued to the device must not exceed this limit.
+		The sum of sizes of all ranges in a single operation should not
+		exceed 'copy_max_hw_bytes'.
+		A value of 0 means that the device
+		does not support copy offload.
+
+
 What:		/sys/block/<disk>/queue/crypto/
 Date:		February 2022
 Contact:	linux-block@vger.kernel.org

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 6ccceb421ed2..70167aee3bf7 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -57,6 +57,12 @@ void blk_set_default_limits(struct queue_limits *lim)
 	lim->misaligned = 0;
 	lim->zoned = BLK_ZONED_NONE;
 	lim->zone_write_granularity = 0;
+	lim->max_hw_copy_sectors = 0;
+	lim->max_copy_sectors = 0;
+	lim->max_hw_copy_nr_ranges = 0;
+	lim->max_copy_nr_ranges = 0;
+	lim->max_hw_copy_range_sectors = 0;
+	lim->max_copy_range_sectors = 0;
 }
 EXPORT_SYMBOL(blk_set_default_limits);
 
@@ -81,6 +87,12 @@ void blk_set_stacking_limits(struct queue_limits *lim)
 	lim->max_dev_sectors = UINT_MAX;
 	lim->max_write_zeroes_sectors = UINT_MAX;
 	lim->max_zone_append_sectors = UINT_MAX;
+	lim->max_hw_copy_sectors = ULONG_MAX;
+	lim->max_copy_sectors = ULONG_MAX;
+	lim->max_hw_copy_range_sectors = UINT_MAX;
+	lim->max_copy_range_sectors = UINT_MAX;
+	lim->max_hw_copy_nr_ranges = USHRT_MAX;
+	lim->max_copy_nr_ranges = USHRT_MAX;
 }
 EXPORT_SYMBOL(blk_set_stacking_limits);
 
@@ -177,6 +189,45 @@ void blk_queue_max_discard_sectors(struct request_queue *q,
 }
 EXPORT_SYMBOL(blk_queue_max_discard_sectors);
 
+/**
+ * blk_queue_max_copy_sectors - set max sectors for a single copy payload
+ * @q:  the request queue for the device
+ * @max_copy_sectors: maximum number of sectors to copy
+ **/
+void blk_queue_max_copy_sectors(struct request_queue *q,
+		unsigned int max_copy_sectors)
+{
+	q->limits.max_hw_copy_sectors = max_copy_sectors;
+	q->limits.max_copy_sectors = max_copy_sectors;
+}
+EXPORT_SYMBOL_GPL(blk_queue_max_copy_sectors);
+
+/**
+ * blk_queue_max_copy_range_sectors - set max sectors for a single range, in a copy payload
+ * @q:  the request queue for the device
+ * @max_copy_range_sectors: maximum number of sectors to copy in a single range
+ **/
+void
+blk_queue_max_copy_range_sectors(struct request_queue *q,
+		unsigned int max_copy_range_sectors)
+{
+	q->limits.max_hw_copy_range_sectors = max_copy_range_sectors;
+	q->limits.max_copy_range_sectors = max_copy_range_sectors;
+}
+EXPORT_SYMBOL_GPL(blk_queue_max_copy_range_sectors);
+
+/**
+ * blk_queue_max_copy_nr_ranges - set max number of ranges, in a copy payload
+ * @q:  the request queue for the device
+ * @max_copy_nr_ranges: maximum number of ranges
+ **/
+void blk_queue_max_copy_nr_ranges(struct request_queue *q,
+		unsigned int max_copy_nr_ranges)
+{
+	q->limits.max_hw_copy_nr_ranges = max_copy_nr_ranges;
+	q->limits.max_copy_nr_ranges = max_copy_nr_ranges;
+}
+EXPORT_SYMBOL_GPL(blk_queue_max_copy_nr_ranges);
+
 /**
  * blk_queue_max_secure_erase_sectors - set max sectors for a secure erase
  * @q:  the request queue for the device
@@ -572,6 +623,14 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 	t->max_segment_size = min_not_zero(t->max_segment_size,
 					   b->max_segment_size);
 
+	t->max_copy_sectors = min(t->max_copy_sectors, b->max_copy_sectors);
+	t->max_hw_copy_sectors = min(t->max_hw_copy_sectors, b->max_hw_copy_sectors);
+	t->max_copy_range_sectors = min(t->max_copy_range_sectors, b->max_copy_range_sectors);
+	t->max_hw_copy_range_sectors = min(t->max_hw_copy_range_sectors,
+			b->max_hw_copy_range_sectors);
+	t->max_copy_nr_ranges = min(t->max_copy_nr_ranges, b->max_copy_nr_ranges);
+	t->max_hw_copy_nr_ranges = min(t->max_hw_copy_nr_ranges, b->max_hw_copy_nr_ranges);
+
 	t->misaligned |= b->misaligned;
 
 	alignment = queue_limit_alignment_offset(b, start);

diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 88bd41d4cb59..bae987c10f7f 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -212,6 +212,129 @@ static ssize_t queue_discard_zeroes_data_show(struct request_queue *q, char *page)
 	return queue_var_show(0, page);
 }
 
+static ssize_t queue_copy_offload_show(struct request_queue *q, char *page)
+{
+	return queue_var_show(blk_queue_copy(q),
+			page);
+}
+
+static ssize_t queue_copy_offload_store(struct request_queue *q,
+				       const char *page, size_t count)
+{
+	unsigned long copy_offload;
+	ssize_t ret = queue_var_store(&copy_offload, page, count);
+
+	if (ret < 0)
+		return ret;
+
+	if (copy_offload && !q->limits.max_hw_copy_sectors)
+		return -EINVAL;
+
+	if (copy_offload)
+		blk_queue_flag_set(QUEUE_FLAG_COPY, q);
+	else
+		blk_queue_flag_clear(QUEUE_FLAG_COPY, q);
+
+	return ret;
+}
+
+static ssize_t queue_copy_max_hw_show(struct request_queue *q, char *page)
+{
+	return sprintf(page, "%llu\n",
+			(unsigned long long)q->limits.max_hw_copy_sectors << 9);
+}
+
+static ssize_t queue_copy_max_show(struct request_queue *q, char *page)
+{
+	return sprintf(page, "%llu\n",
+			(unsigned long long)q->limits.max_copy_sectors << 9);
+}
+
+static ssize_t queue_copy_max_store(struct request_queue *q,
+				  const char *page, size_t count)
+{
+	unsigned long max_copy;
+	ssize_t ret = queue_var_store(&max_copy, page, count);
+
+	if (ret < 0)
+		return ret;
+
+	if (max_copy & (queue_logical_block_size(q) - 1))
+		return -EINVAL;
+
+	max_copy >>= 9;
+	if (max_copy > q->limits.max_hw_copy_sectors)
+		max_copy = q->limits.max_hw_copy_sectors;
+
+	q->limits.max_copy_sectors = max_copy;
+	return ret;
+}
+
+static ssize_t queue_copy_range_max_hw_show(struct request_queue *q, char *page)
+{
+	return sprintf(page, "%llu\n",
+			(unsigned long long)q->limits.max_hw_copy_range_sectors << 9);
+}
+
+static ssize_t queue_copy_range_max_show(struct request_queue *q,
+		char *page)
+{
+	return sprintf(page, "%llu\n",
+			(unsigned long long)q->limits.max_copy_range_sectors << 9);
+}
+
+static ssize_t queue_copy_range_max_store(struct request_queue *q,
+				  const char *page, size_t count)
+{
+	unsigned long max_copy;
+	ssize_t ret = queue_var_store(&max_copy, page, count);
+
+	if (ret < 0)
+		return ret;
+
+	if (max_copy & (queue_logical_block_size(q) - 1))
+		return -EINVAL;
+
+	max_copy >>= 9;
+	if (max_copy > UINT_MAX)
+		return -EINVAL;
+
+	if (max_copy >
+			q->limits.max_hw_copy_range_sectors)
+		max_copy = q->limits.max_hw_copy_range_sectors;
+
+	q->limits.max_copy_range_sectors = max_copy;
+	return ret;
+}
+
+static ssize_t queue_copy_nr_ranges_max_hw_show(struct request_queue *q, char *page)
+{
+	return queue_var_show(q->limits.max_hw_copy_nr_ranges, page);
+}
+
+static ssize_t queue_copy_nr_ranges_max_show(struct request_queue *q,
+		char *page)
+{
+	return queue_var_show(q->limits.max_copy_nr_ranges, page);
+}
+
+static ssize_t queue_copy_nr_ranges_max_store(struct request_queue *q,
+				  const char *page, size_t count)
+{
+	unsigned long max_nr;
+	ssize_t ret = queue_var_store(&max_nr, page, count);
+
+	if (ret < 0)
+		return ret;
+
+	if (max_nr > USHRT_MAX)
+		return -EINVAL;
+
+	if (max_nr > q->limits.max_hw_copy_nr_ranges)
+		max_nr = q->limits.max_hw_copy_nr_ranges;
+
+	q->limits.max_copy_nr_ranges = max_nr;
+	return ret;
+}
+
 static ssize_t queue_write_same_max_show(struct request_queue *q, char *page)
 {
 	return queue_var_show(0, page);
@@ -596,6 +719,14 @@ QUEUE_RO_ENTRY(queue_nr_zones, "nr_zones");
 QUEUE_RO_ENTRY(queue_max_open_zones, "max_open_zones");
 QUEUE_RO_ENTRY(queue_max_active_zones, "max_active_zones");
 
+QUEUE_RW_ENTRY(queue_copy_offload, "copy_offload");
+QUEUE_RO_ENTRY(queue_copy_max_hw, "copy_max_hw_bytes");
+QUEUE_RW_ENTRY(queue_copy_max, "copy_max_bytes");
+QUEUE_RO_ENTRY(queue_copy_range_max_hw, "copy_max_range_hw_bytes");
+QUEUE_RW_ENTRY(queue_copy_range_max, "copy_max_range_bytes");
+QUEUE_RO_ENTRY(queue_copy_nr_ranges_max_hw, "copy_max_nr_ranges_hw");
+QUEUE_RW_ENTRY(queue_copy_nr_ranges_max, "copy_max_nr_ranges");
+
 QUEUE_RW_ENTRY(queue_nomerges, "nomerges");
 QUEUE_RW_ENTRY(queue_rq_affinity, "rq_affinity");
 QUEUE_RW_ENTRY(queue_poll, "io_poll");
@@ -642,6 +773,13 @@ static struct attribute *queue_attrs[] = {
 	&queue_discard_max_entry.attr,
 	&queue_discard_max_hw_entry.attr,
 	&queue_discard_zeroes_data_entry.attr,
+	&queue_copy_offload_entry.attr,
+	&queue_copy_max_hw_entry.attr,
+	&queue_copy_max_entry.attr,
+	&queue_copy_range_max_hw_entry.attr,
+	&queue_copy_range_max_entry.attr,
+	&queue_copy_nr_ranges_max_hw_entry.attr,
+	&queue_copy_nr_ranges_max_entry.attr,
 	&queue_write_same_max_entry.attr,
 	&queue_write_zeroes_max_entry.attr,
 	&queue_zone_append_max_entry.attr,

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1b24c1fb3bb1..3596fd37fae7 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -270,6 +270,13 @@ struct queue_limits {
 	unsigned int		discard_alignment;
 	unsigned int		zone_write_granularity;
 
+	unsigned long		max_hw_copy_sectors;
+	unsigned long		max_copy_sectors;
+	unsigned int		max_hw_copy_range_sectors;
+	unsigned int		max_copy_range_sectors;
+	unsigned short		max_hw_copy_nr_ranges;
+	unsigned short		max_copy_nr_ranges;
+
 	unsigned short		max_segments;
 	unsigned short		max_integrity_segments;
 	unsigned short		max_discard_segments;
@@ -574,6 +581,7 @@ struct request_queue {
 #define QUEUE_FLAG_RQ_ALLOC_TIME 27	/* record rq->alloc_time_ns */
 #define QUEUE_FLAG_HCTX_ACTIVE	28	/* at least one blk-mq hctx is active */
 #define QUEUE_FLAG_NOWAIT	29	/* device supports NOWAIT */
+#define QUEUE_FLAG_COPY		30	/* supports copy offload */
 
 #define QUEUE_FLAG_MQ_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
 				 (1 << QUEUE_FLAG_SAME_COMP) |		\
@@ -596,6 +604,7 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 	test_bit(QUEUE_FLAG_STABLE_WRITES, &(q)->queue_flags)
#define blk_queue_io_stat(q)	test_bit(QUEUE_FLAG_IO_STAT, &(q)->queue_flags)
 #define blk_queue_add_random(q)	test_bit(QUEUE_FLAG_ADD_RANDOM, &(q)->queue_flags)
+#define blk_queue_copy(q)	test_bit(QUEUE_FLAG_COPY, &(q)->queue_flags)
 #define blk_queue_zone_resetall(q)	\
 	test_bit(QUEUE_FLAG_ZONE_RESETALL, &(q)->queue_flags)
 #define blk_queue_dax(q)	test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags)
@@ -960,6 +969,10 @@ extern void blk_queue_chunk_sectors(struct request_queue *, unsigned int);
 extern void blk_queue_max_segments(struct request_queue *, unsigned
		short);
 extern void blk_queue_max_discard_segments(struct request_queue *,
 		unsigned short);
+extern void blk_queue_max_copy_sectors(struct request_queue *q, unsigned int max_copy_sectors);
+extern void blk_queue_max_copy_range_sectors(struct request_queue *q,
+		unsigned int max_copy_range_sectors);
+extern void blk_queue_max_copy_nr_ranges(struct request_queue *q, unsigned int max_copy_nr_ranges);
 void blk_queue_max_secure_erase_sectors(struct request_queue *q,
 		unsigned int max_sectors);
 extern void blk_queue_max_segment_size(struct request_queue *, unsigned int);

From patchwork Tue Apr 26 10:12:30 2022
X-Patchwork-Submitter: Nitesh Shetty
X-Patchwork-Id: 12826984
From: Nitesh Shetty
Cc: chaitanyak@nvidia.com, linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
    dm-devel@redhat.com, linux-nvme@lists.infradead.org, linux-fsdevel@vger.kernel.org,
    axboe@kernel.dk, msnitzer@redhat.com, bvanassche@acm.org, martin.petersen@oracle.com,
    hare@suse.de, kbusch@kernel.org, hch@lst.de, Frederick.Knight@netapp.com,
    osandov@fb.com, lsf-pc@lists.linux-foundation.org, djwong@kernel.org,
    josef@toxicpanda.com, clm@fb.com, dsterba@suse.com, tytso@mit.edu, jack@suse.com,
    nitheshshetty@gmail.com, gost.dev@samsung.com, linux-kernel@vger.kernel.org
Subject: [PATCH v4 02/10] block: Add copy offload support infrastructure
Date: Tue, 26 Apr 2022 15:42:30 +0530
Message-Id: <20220426101241.30100-3-nj.shetty@samsung.com>
In-Reply-To: <20220426101241.30100-1-nj.shetty@samsung.com>
References: <20220426101241.30100-1-nj.shetty@samsung.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

Introduce blkdev_issue_copy, which supports source and destination bdevs
and an array of (source, destination, copy length) tuples.
Introduce the REQ_COPY copy offload operation flag. Create a read-write
bio pair with a token as payload, submitted to the device in order: the
read request populates the token with source-specific information, which
is then passed along with the write request.
This design is courtesy of Mikulas Patocka's token-based copy.

Larger copies will be divided, based on the max_copy_sectors and
max_copy_range_sectors limits.

Signed-off-by: Nitesh Shetty
Signed-off-by: Arnav Dawn
Reported-by: kernel test robot
---
 block/blk-lib.c           | 232 ++++++++++++++++++++++++++++++++++++++
 block/blk.h               |   2 +
 include/linux/blk_types.h |  21 ++++
 include/linux/blkdev.h    |   2 +
 include/uapi/linux/fs.h   |  14 +++
 5 files changed, 271 insertions(+)

diff --git a/block/blk-lib.c b/block/blk-lib.c
index 09b7e1200c0f..ba9da2d2f429 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -117,6 +117,238 @@ int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 }
 EXPORT_SYMBOL(blkdev_issue_discard);
 
+/*
+ * Wait on and process all in-flight BIOs. This must only be called once
+ * all bios have been issued so that the refcount can only decrease.
+ * This just waits for all bios to make it through bio_copy_end_io. IO
+ * errors are propagated through cio->io_error.
+ */
+static int cio_await_completion(struct cio *cio)
+{
+	int ret = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&cio->lock, flags);
+	if (cio->refcount) {
+		cio->waiter = current;
+		__set_current_state(TASK_UNINTERRUPTIBLE);
+		spin_unlock_irqrestore(&cio->lock, flags);
+		blk_io_schedule();
+		/* wake up sets us TASK_RUNNING */
+		spin_lock_irqsave(&cio->lock, flags);
+		cio->waiter = NULL;
+		ret = cio->io_err;
+	}
+	spin_unlock_irqrestore(&cio->lock, flags);
+	kvfree(cio);
+
+	return ret;
+}
+
+static void bio_copy_end_io(struct bio *bio)
+{
+	struct copy_ctx *ctx = bio->bi_private;
+	struct cio *cio = ctx->cio;
+	sector_t clen;
+	int ri = ctx->range_idx;
+	unsigned long flags;
+	bool wake = false;
+
+	if (bio->bi_status) {
+		cio->io_err = bio->bi_status;
+		clen = (bio->bi_iter.bi_sector << SECTOR_SHIFT) - ctx->start_sec;
+		cio->rlist[ri].comp_len = min_t(sector_t, clen, cio->rlist[ri].comp_len);
+	}
+	__free_page(bio->bi_io_vec[0].bv_page);
+	kfree(ctx);
+	bio_put(bio);
+
+	spin_lock_irqsave(&cio->lock, flags);
+	if (((--cio->refcount) <= 0) && cio->waiter)
+		wake = true;
+	spin_unlock_irqrestore(&cio->lock, flags);
+	if (wake)
+		wake_up_process(cio->waiter);
+}
+
+/*
+ * blk_copy_offload - Use device's native copy offload feature.
+ * Go through the user-provided payload, and prepare a new payload based
+ * on the device's copy offload limits.
+ */ +int blk_copy_offload(struct block_device *src_bdev, int nr_srcs, + struct range_entry *rlist, struct block_device *dst_bdev, gfp_t gfp_mask) +{ + struct request_queue *sq = bdev_get_queue(src_bdev); + struct request_queue *dq = bdev_get_queue(dst_bdev); + struct bio *read_bio, *write_bio; + struct copy_ctx *ctx; + struct cio *cio; + struct page *token; + sector_t src_blk, copy_len, dst_blk; + sector_t remaining, max_copy_len = LONG_MAX; + unsigned long flags; + int ri = 0, ret = 0; + + cio = kzalloc(sizeof(struct cio), GFP_KERNEL); + if (!cio) + return -ENOMEM; + cio->rlist = rlist; + spin_lock_init(&cio->lock); + + max_copy_len = min_t(sector_t, sq->limits.max_copy_sectors, dq->limits.max_copy_sectors); + max_copy_len = min3(max_copy_len, (sector_t)sq->limits.max_copy_range_sectors, + (sector_t)dq->limits.max_copy_range_sectors) << SECTOR_SHIFT; + + for (ri = 0; ri < nr_srcs; ri++) { + cio->rlist[ri].comp_len = rlist[ri].len; + src_blk = rlist[ri].src; + dst_blk = rlist[ri].dst; + for (remaining = rlist[ri].len; remaining > 0; remaining -= copy_len) { + copy_len = min(remaining, max_copy_len); + + token = alloc_page(gfp_mask); + if (unlikely(!token)) { + ret = -ENOMEM; + goto err_token; + } + + ctx = kzalloc(sizeof(struct copy_ctx), gfp_mask); + if (!ctx) { + ret = -ENOMEM; + goto err_ctx; + } + ctx->cio = cio; + ctx->range_idx = ri; + ctx->start_sec = dst_blk; + + read_bio = bio_alloc(src_bdev, 1, REQ_OP_READ | REQ_COPY | REQ_NOMERGE, + gfp_mask); + if (!read_bio) { + ret = -ENOMEM; + goto err_read_bio; + } + read_bio->bi_iter.bi_sector = src_blk >> SECTOR_SHIFT; + __bio_add_page(read_bio, token, PAGE_SIZE, 0); + /*__bio_add_page increases bi_size by len, so overwrite it with copy len*/ + read_bio->bi_iter.bi_size = copy_len; + ret = submit_bio_wait(read_bio); + bio_put(read_bio); + if (ret) + goto err_read_bio; + + write_bio = bio_alloc(dst_bdev, 1, REQ_OP_WRITE | REQ_COPY | REQ_NOMERGE, + gfp_mask); + if (!write_bio) { + ret = -ENOMEM; + goto 
err_read_bio; + } + write_bio->bi_iter.bi_sector = dst_blk >> SECTOR_SHIFT; + __bio_add_page(write_bio, token, PAGE_SIZE, 0); + /*__bio_add_page increases bi_size by len, so overwrite it with copy len*/ + write_bio->bi_iter.bi_size = copy_len; + write_bio->bi_end_io = bio_copy_end_io; + write_bio->bi_private = ctx; + + spin_lock_irqsave(&cio->lock, flags); + ++cio->refcount; + spin_unlock_irqrestore(&cio->lock, flags); + + submit_bio(write_bio); + src_blk += copy_len; + dst_blk += copy_len; + } + } + + /* Wait for completion of all IO's*/ + return cio_await_completion(cio); + +err_read_bio: + kfree(ctx); +err_ctx: + __free_page(token); +err_token: + rlist[ri].comp_len = min_t(sector_t, rlist[ri].comp_len, (rlist[ri].len - remaining)); + + cio->io_err = ret; + return cio_await_completion(cio); +} + +static inline int blk_copy_sanity_check(struct block_device *src_bdev, + struct block_device *dst_bdev, struct range_entry *rlist, int nr) +{ + unsigned int align_mask = max( + bdev_logical_block_size(dst_bdev), bdev_logical_block_size(src_bdev)) - 1; + sector_t len = 0; + int i; + + for (i = 0; i < nr; i++) { + if (rlist[i].len) + len += rlist[i].len; + else + return -EINVAL; + if ((rlist[i].dst & align_mask) || (rlist[i].src & align_mask) || + (rlist[i].len & align_mask)) + return -EINVAL; + rlist[i].comp_len = 0; + } + + if (len && len >= MAX_COPY_TOTAL_LENGTH) + return -EINVAL; + + return 0; +} + +static inline bool blk_check_copy_offload(struct request_queue *src_q, + struct request_queue *dest_q) +{ + if (blk_queue_copy(dest_q) && blk_queue_copy(src_q)) + return true; + + return false; +} + +/* + * blkdev_issue_copy - queue a copy + * @src_bdev: source block device + * @nr_srcs: number of source ranges to copy + * @rlist: array of source/dest/len + * @dest_bdev: destination block device + * @gfp_mask: memory allocation flags (for bio_alloc) + * + * Description: + * Copy source ranges from source block device to destination block device. 
+ * length of a source range cannot be zero. + */ +int blkdev_issue_copy(struct block_device *src_bdev, int nr, + struct range_entry *rlist, struct block_device *dest_bdev, gfp_t gfp_mask) +{ + struct request_queue *src_q = bdev_get_queue(src_bdev); + struct request_queue *dest_q = bdev_get_queue(dest_bdev); + int ret = -EINVAL; + + if (!src_q || !dest_q) + return -ENXIO; + + if (!nr) + return -EINVAL; + + if (nr >= MAX_COPY_NR_RANGE) + return -EINVAL; + + if (bdev_read_only(dest_bdev)) + return -EPERM; + + ret = blk_copy_sanity_check(src_bdev, dest_bdev, rlist, nr); + if (ret) + return ret; + + if (blk_check_copy_offload(src_q, dest_q)) + ret = blk_copy_offload(src_bdev, nr, rlist, dest_bdev, gfp_mask); + + return ret; +} +EXPORT_SYMBOL_GPL(blkdev_issue_copy); + static int __blkdev_issue_write_zeroes(struct block_device *bdev, sector_t sector, sector_t nr_sects, gfp_t gfp_mask, struct bio **biop, unsigned flags) diff --git a/block/blk.h b/block/blk.h index 434017701403..6010eda58c70 100644 --- a/block/blk.h +++ b/block/blk.h @@ -291,6 +291,8 @@ static inline bool blk_may_split(struct request_queue *q, struct bio *bio) break; } + if (unlikely(op_is_copy(bio->bi_opf))) + return false; /* * All drivers must accept single-segments bios that are <= PAGE_SIZE. * This is a quick and dirty check that relies on the fact that diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h index c62274466e72..f5b01f284c43 100644 --- a/include/linux/blk_types.h +++ b/include/linux/blk_types.h @@ -418,6 +418,7 @@ enum req_flag_bits { /* for driver use */ __REQ_DRV, __REQ_SWAP, /* swapping request. 
*/ + __REQ_COPY, /* copy request */ __REQ_NR_BITS, /* stops here */ }; @@ -443,6 +444,7 @@ enum req_flag_bits { #define REQ_DRV (1ULL << __REQ_DRV) #define REQ_SWAP (1ULL << __REQ_SWAP) +#define REQ_COPY (1ULL << __REQ_COPY) #define REQ_FAILFAST_MASK \ (REQ_FAILFAST_DEV | REQ_FAILFAST_TRANSPORT | REQ_FAILFAST_DRIVER) @@ -459,6 +461,11 @@ enum stat_group { NR_STAT_GROUPS }; +static inline bool op_is_copy(unsigned int op) +{ + return (op & REQ_COPY); +} + #define bio_op(bio) \ ((bio)->bi_opf & REQ_OP_MASK) @@ -533,4 +540,18 @@ struct blk_rq_stat { u64 batch; }; +struct cio { + struct range_entry *rlist; + struct task_struct *waiter; /* waiting task (NULL if none) */ + spinlock_t lock; /* protects refcount and waiter */ + int refcount; + blk_status_t io_err; +}; + +struct copy_ctx { + int range_idx; + sector_t start_sec; + struct cio *cio; +}; + #endif /* __LINUX_BLK_TYPES_H */ diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 3596fd37fae7..c6cb3fe82ba2 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -1121,6 +1121,8 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector, sector_t nr_sects, gfp_t gfp_mask, struct bio **biop); int blkdev_issue_secure_erase(struct block_device *bdev, sector_t sector, sector_t nr_sects, gfp_t gfp); +int blkdev_issue_copy(struct block_device *src_bdev, int nr_srcs, + struct range_entry *src_rlist, struct block_device *dest_bdev, gfp_t gfp_mask); #define BLKDEV_ZERO_NOUNMAP (1 << 0) /* do not free blocks */ #define BLKDEV_ZERO_NOFALLBACK (1 << 1) /* don't write explicit zeroes */ diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h index bdf7b404b3e7..822c28cebf3a 100644 --- a/include/uapi/linux/fs.h +++ b/include/uapi/linux/fs.h @@ -64,6 +64,20 @@ struct fstrim_range { __u64 minlen; }; +/* Maximum no of entries supported */ +#define MAX_COPY_NR_RANGE (1 << 12) + +/* maximum total copy length */ +#define MAX_COPY_TOTAL_LENGTH (1 << 27) + +/* Source range entry for copy */ 
+struct range_entry { + __u64 src; + __u64 dst; + __u64 len; + __u64 comp_len; +}; + /* extent-same (dedupe) ioctls; these MUST match the btrfs ioctl definitions */ #define FILE_DEDUPE_RANGE_SAME 0 #define FILE_DEDUPE_RANGE_DIFFERS 1 From patchwork Tue Apr 26 10:12:31 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Nitesh Shetty X-Patchwork-Id: 12826985 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6C9D2C433F5 for ; Tue, 26 Apr 2022 12:15:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1349864AbiDZMSU (ORCPT ); Tue, 26 Apr 2022 08:18:20 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38718 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1349840AbiDZMSS (ORCPT ); Tue, 26 Apr 2022 08:18:18 -0400 Received: from mailout2.samsung.com (mailout2.samsung.com [203.254.224.25]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6B05EF51; Tue, 26 Apr 2022 05:15:08 -0700 (PDT) Received: from epcas5p2.samsung.com (unknown [182.195.41.40]) by mailout2.samsung.com (KnoxPortal) with ESMTP id 20220426121506epoutp0259208e927a4b192598ff5b13901a900c~pcUspQRP_1557515575epoutp02E; Tue, 26 Apr 2022 12:15:06 +0000 (GMT) DKIM-Filter: OpenDKIM Filter v2.11.0 mailout2.samsung.com 20220426121506epoutp0259208e927a4b192598ff5b13901a900c~pcUspQRP_1557515575epoutp02E DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=samsung.com; s=mail20170921; t=1650975306; bh=Aw/eZeQwqDIFGLFSvcbd3db7NUyv8o6zR/cHud6bu0Q=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=DFTxyiVo0uQbSCvnSAAZPovYkorcIrqkOTwng0EERSfWnDlWDR7IFmrfh7hNSB2Zu n+i44pGgFgrXKhKHQukD2KbHuTW58+blv7zbfmESeLAz/8yRRcFvlB3+v2u0X8cHVB ZWUV88y1kT3zgmttOCNTw1Ay5m0pp3D/jaLtlUts= 
From: Nitesh Shetty
Subject: [PATCH v4 03/10] block: Introduce a new ioctl for copy
Date: Tue, 26 Apr 2022 15:42:31 +0530
Message-Id: <20220426101241.30100-4-nj.shetty@samsung.com>
In-Reply-To: <20220426101241.30100-1-nj.shetty@samsung.com>
List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org

Add new BLKCOPY ioctl that offloads copying of one or more
source ranges to one or more destinations on a device.

The BLKCOPY ioctl accepts a 'copy_range' structure that contains the number of ranges and a reserved field, followed by an array of ranges. Each source range is represented by a 'range_entry' that contains the source start offset, the destination start offset, and the length of the range (in bytes).

MAX_COPY_NR_RANGE limits the number of entries the ioctl accepts, and MAX_COPY_TOTAL_LENGTH limits the total copy length the ioctl can handle.

Example code to issue BLKCOPY:

/* Sample code to copy three entries with [dest, src, len]:
 * [32768, 0, 4096] [36864, 4096, 4096] [40960, 8192, 4096] on the same device
 */
int main(void)
{
	int i, ret, fd;
	unsigned long src = 0, dst = 32768, len = 4096;
	struct copy_range *cr;

	cr = (struct copy_range *)malloc(sizeof(*cr) +
					 (sizeof(struct range_entry) * 3));
	cr->nr_range = 3;
	cr->reserved = 0;
	for (i = 0; i < cr->nr_range; i++, src += len, dst += len) {
		cr->range_list[i].dst = dst;
		cr->range_list[i].src = src;
		cr->range_list[i].len = len;
		cr->range_list[i].comp_len = 0;
	}

	fd = open("/dev/nvme0n1", O_RDWR);
	if (fd < 0)
		return 1;

	ret = ioctl(fd, BLKCOPY, cr);
	if (ret != 0)
		printf("copy failed, ret = %d\n", ret);

	for (i = 0; i < cr->nr_range; i++)
		if (cr->range_list[i].len != cr->range_list[i].comp_len)
			printf("Partial copy for entry %d: requested %llu, completed %llu\n",
				i, cr->range_list[i].len, cr->range_list[i].comp_len);

	close(fd);
	free(cr);
	return ret;
}

Signed-off-by: Nitesh Shetty
Signed-off-by: Javier González
Signed-off-by: Arnav Dawn
Reviewed-by: Hannes Reinecke
---
 block/ioctl.c           | 32 ++++++++++++++++++++++++++++++++
 include/uapi/linux/fs.h |  9 +++++++++
 2 files changed, 41 insertions(+)

diff --git a/block/ioctl.c b/block/ioctl.c
index 46949f1b0dba..58d93c20ff30 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -120,6 +120,36 @@ static int blk_ioctl_discard(struct block_device *bdev, fmode_t mode,
 	return err;
 }

+static int blk_ioctl_copy(struct block_device *bdev, fmode_t mode,
+		unsigned long arg)
+{
+	struct
copy_range crange, *ranges = NULL; + size_t payload_size = 0; + int ret; + + if (!(mode & FMODE_WRITE)) + return -EBADF; + + if (copy_from_user(&crange, (void __user *)arg, sizeof(crange))) + return -EFAULT; + + if (unlikely(!crange.nr_range || crange.reserved || crange.nr_range >= MAX_COPY_NR_RANGE)) + return -EINVAL; + + payload_size = (crange.nr_range * sizeof(struct range_entry)) + sizeof(crange); + + ranges = memdup_user((void __user *)arg, payload_size); + if (IS_ERR(ranges)) + return PTR_ERR(ranges); + + ret = blkdev_issue_copy(bdev, ranges->nr_range, ranges->range_list, bdev, GFP_KERNEL); + if (copy_to_user((void __user *)arg, ranges, payload_size)) + ret = -EFAULT; + + kfree(ranges); + return ret; +} + static int blk_ioctl_secure_erase(struct block_device *bdev, fmode_t mode, void __user *argp) { @@ -481,6 +511,8 @@ static int blkdev_common_ioctl(struct block_device *bdev, fmode_t mode, return blk_ioctl_discard(bdev, mode, arg); case BLKSECDISCARD: return blk_ioctl_secure_erase(bdev, mode, argp); + case BLKCOPY: + return blk_ioctl_copy(bdev, mode, arg); case BLKZEROOUT: return blk_ioctl_zeroout(bdev, mode, arg); case BLKGETDISKSEQ: diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h index 822c28cebf3a..a3b13406ffb8 100644 --- a/include/uapi/linux/fs.h +++ b/include/uapi/linux/fs.h @@ -78,6 +78,14 @@ struct range_entry { __u64 comp_len; }; +struct copy_range { + __u64 nr_range; + __u64 reserved; + + /* Range_list always must be at the end */ + struct range_entry range_list[]; +}; + /* extent-same (dedupe) ioctls; these MUST match the btrfs ioctl definitions */ #define FILE_DEDUPE_RANGE_SAME 0 #define FILE_DEDUPE_RANGE_DIFFERS 1 @@ -199,6 +207,7 @@ struct fsxattr { #define BLKROTATIONAL _IO(0x12,126) #define BLKZEROOUT _IO(0x12,127) #define BLKGETDISKSEQ _IOR(0x12,128,__u64) +#define BLKCOPY _IOWR(0x12, 129, struct copy_range) /* * A jump here: 130-136 are reserved for zoned block devices * (see uapi/linux/blkzoned.h) From patchwork Tue Apr 26 
10:12:32 2022
X-Patchwork-Submitter: Nitesh Shetty
X-Patchwork-Id: 12826991
From: Nitesh Shetty
Subject: [PATCH v4 04/10] block: add emulation for copy
Date: Tue, 26 Apr 2022 15:42:32 +0530
Message-Id: <20220426101241.30100-5-nj.shetty@samsung.com>
In-Reply-To: <20220426101241.30100-1-nj.shetty@samsung.com>
List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org

For devices that do not support copy, copy emulation is added. Copy emulation is implemented by reading from the source ranges into memory and writing to the corresponding destination synchronously.
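The synchronous read-into-buffer/write-out scheme described above can be sketched in user space roughly as follows. This is an illustrative sketch only, not the kernel code: the function name copy_emulate, the use of pread()/pwrite() on file descriptors, and the fixed bounce-buffer size are assumptions standing in for the patch's blk_submit_rw_buf() and blk_alloc_buf() helpers.

```c
/*
 * Illustrative user-space analogue of the copy-emulation loop:
 * read a chunk of the source range into a bounce buffer, then
 * write it out to the destination, synchronously, until done.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

static int copy_emulate(int src_fd, int dst_fd, off_t src, off_t dst,
			size_t len, size_t buf_len)
{
	char *buf = malloc(buf_len);

	if (!buf)
		return -1;
	while (len > 0) {
		size_t chunk = len < buf_len ? len : buf_len;
		/* read from the source range into the bounce buffer */
		ssize_t n = pread(src_fd, buf, chunk, src);

		/* write the same bytes to the destination range */
		if (n <= 0 || pwrite(dst_fd, buf, (size_t)n, dst) != n) {
			free(buf);
			return -1; /* propagate the I/O error */
		}
		src += n;
		dst += n;
		len -= (size_t)n;
	}
	free(buf);
	return 0;
}
```

The kernel patch differs in one notable way: its buffer allocation retries with half the requested size on failure, so a single large range may be copied through a smaller bounce buffer over more iterations.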
Signed-off-by: Nitesh Shetty Signed-off-by: Vincent Fu Signed-off-by: Arnav Dawn Reported-by: kernel test robot --- block/blk-lib.c | 128 ++++++++++++++++++++++++++++++++++++++++- block/blk-map.c | 2 +- include/linux/blkdev.h | 2 + 3 files changed, 130 insertions(+), 2 deletions(-) diff --git a/block/blk-lib.c b/block/blk-lib.c index ba9da2d2f429..58c30a42ea44 100644 --- a/block/blk-lib.c +++ b/block/blk-lib.c @@ -273,6 +273,65 @@ int blk_copy_offload(struct block_device *src_bdev, int nr_srcs, return cio_await_completion(cio); } +int blk_submit_rw_buf(struct block_device *bdev, void *buf, sector_t buf_len, + sector_t sector, unsigned int op, gfp_t gfp_mask) +{ + struct request_queue *q = bdev_get_queue(bdev); + struct bio *bio, *parent = NULL; + sector_t max_hw_len = min_t(unsigned int, queue_max_hw_sectors(q), + queue_max_segments(q) << (PAGE_SHIFT - SECTOR_SHIFT)) << SECTOR_SHIFT; + sector_t len, remaining; + int ret; + + for (remaining = buf_len; remaining > 0; remaining -= len) { + len = min_t(int, max_hw_len, remaining); +retry: + bio = bio_map_kern(q, buf, len, gfp_mask); + if (IS_ERR(bio)) { + len >>= 1; + if (len) + goto retry; + return PTR_ERR(bio); + } + + bio->bi_iter.bi_sector = sector >> SECTOR_SHIFT; + bio->bi_opf = op; + bio_set_dev(bio, bdev); + bio->bi_end_io = NULL; + bio->bi_private = NULL; + + if (parent) { + bio_chain(parent, bio); + submit_bio(parent); + } + parent = bio; + sector += len; + buf = (char *) buf + len; + } + ret = submit_bio_wait(bio); + bio_put(bio); + + return ret; +} + +static void *blk_alloc_buf(sector_t req_size, sector_t *alloc_size, gfp_t gfp_mask) +{ + int min_size = PAGE_SIZE; + void *buf; + + while (req_size >= min_size) { + buf = kvmalloc(req_size, gfp_mask); + if (buf) { + *alloc_size = req_size; + return buf; + } + /* retry half the requested size */ + req_size >>= 1; + } + + return NULL; +} + static inline int blk_copy_sanity_check(struct block_device *src_bdev, struct block_device *dst_bdev, struct range_entry 
*rlist, int nr) { @@ -298,6 +357,68 @@ static inline int blk_copy_sanity_check(struct block_device *src_bdev, return 0; } +/* returns the total copy length still need to be copied */ +static inline sector_t blk_copy_max_range(struct range_entry *rlist, int nr, sector_t *max_len) +{ + int i; + sector_t len = 0; + + *max_len = 0; + for (i = 0; i < nr; i++) { + *max_len = max(*max_len, rlist[i].len - rlist[i].comp_len); + len += (rlist[i].len - rlist[i].comp_len); + } + + return len; +} + +/* + * If native copy offload feature is absent, this function tries to emulate, + * by copying data from source to a temporary buffer and from buffer to + * destination device. + */ +static int blk_copy_emulate(struct block_device *src_bdev, int nr, + struct range_entry *rlist, struct block_device *dest_bdev, gfp_t gfp_mask) +{ + void *buf = NULL; + int ret, nr_i = 0; + sector_t src, dst, copy_len, buf_len, read_len, copied_len, + max_len = 0, remaining = 0, offset = 0; + + copy_len = blk_copy_max_range(rlist, nr, &max_len); + buf = blk_alloc_buf(max_len, &buf_len, gfp_mask); + if (!buf) + return -ENOMEM; + + for (copied_len = 0; copied_len < copy_len; copied_len += read_len) { + if (!remaining) { + offset = rlist[nr_i].comp_len; + src = rlist[nr_i].src + offset; + dst = rlist[nr_i].dst + offset; + remaining = rlist[nr_i++].len - offset; + } + + read_len = min_t(sector_t, remaining, buf_len); + if (!read_len) + continue; + ret = blk_submit_rw_buf(src_bdev, buf, read_len, src, REQ_OP_READ, gfp_mask); + if (ret) + goto out; + src += read_len; + remaining -= read_len; + ret = blk_submit_rw_buf(dest_bdev, buf, read_len, dst, REQ_OP_WRITE, + gfp_mask); + if (ret) + goto out; + else + rlist[nr_i - 1].comp_len += read_len; + dst += read_len; + } +out: + kvfree(buf); + return ret; +} + static inline bool blk_check_copy_offload(struct request_queue *src_q, struct request_queue *dest_q) { @@ -325,6 +446,7 @@ int blkdev_issue_copy(struct block_device *src_bdev, int nr, struct request_queue 
*src_q = bdev_get_queue(src_bdev); struct request_queue *dest_q = bdev_get_queue(dest_bdev); int ret = -EINVAL; + bool offload = false; if (!src_q || !dest_q) return -ENXIO; @@ -342,9 +464,13 @@ int blkdev_issue_copy(struct block_device *src_bdev, int nr, if (ret) return ret; - if (blk_check_copy_offload(src_q, dest_q)) + offload = blk_check_copy_offload(src_q, dest_q); + if (offload) ret = blk_copy_offload(src_bdev, nr, rlist, dest_bdev, gfp_mask); + if (ret || !offload) + ret = blk_copy_emulate(src_bdev, nr, rlist, dest_bdev, gfp_mask); + return ret; } EXPORT_SYMBOL_GPL(blkdev_issue_copy); diff --git a/block/blk-map.c b/block/blk-map.c index 7ffde64f9019..ca2ad2c21f42 100644 --- a/block/blk-map.c +++ b/block/blk-map.c @@ -340,7 +340,7 @@ static void bio_map_kern_endio(struct bio *bio) * Map the kernel address into a bio suitable for io to a block * device. Returns an error pointer in case of error. */ -static struct bio *bio_map_kern(struct request_queue *q, void *data, +struct bio *bio_map_kern(struct request_queue *q, void *data, unsigned int len, gfp_t gfp_mask) { unsigned long kaddr = (unsigned long)data; diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index c6cb3fe82ba2..ea1f3c8f8dad 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -1121,6 +1121,8 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector, sector_t nr_sects, gfp_t gfp_mask, struct bio **biop); int blkdev_issue_secure_erase(struct block_device *bdev, sector_t sector, sector_t nr_sects, gfp_t gfp); +struct bio *bio_map_kern(struct request_queue *q, void *data, unsigned int len, + gfp_t gfp_mask); int blkdev_issue_copy(struct block_device *src_bdev, int nr_srcs, struct range_entry *src_rlist, struct block_device *dest_bdev, gfp_t gfp_mask); From patchwork Tue Apr 26 10:12:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Nitesh Shetty X-Patchwork-Id: 12826986 Return-Path: 
From: Nitesh Shetty
Cc: chaitanyak@nvidia.com, linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
    dm-devel@redhat.com, linux-nvme@lists.infradead.org, linux-fsdevel@vger.kernel.org,
    axboe@kernel.dk, msnitzer@redhat.com, bvanassche@acm.org, martin.petersen@oracle.com,
    hare@suse.de, kbusch@kernel.org, hch@lst.de, Frederick.Knight@netapp.com,
    osandov@fb.com, lsf-pc@lists.linux-foundation.org, djwong@kernel.org,
    josef@toxicpanda.com, clm@fb.com, dsterba@suse.com, tytso@mit.edu, jack@suse.com,
    nitheshshetty@gmail.com, gost.dev@samsung.com, Nitesh Shetty, Kanchan Joshi,
    Javier González, Arnav Dawn, Alasdair Kergon, Mike Snitzer, Sagi Grimberg,
    James Smart, Chaitanya Kulkarni, Damien Le Moal, Naohiro Aota,
    Johannes Thumshirn, Alexander Viro, linux-kernel@vger.kernel.org
Subject: [PATCH v4 05/10] nvme: add copy offload
support
Date: Tue, 26 Apr 2022 15:42:33 +0530
Message-Id: <20220426101241.30100-6-nj.shetty@samsung.com>
In-Reply-To: <20220426101241.30100-1-nj.shetty@samsung.com>
References: <20220426101241.30100-1-nj.shetty@samsung.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

For a device supporting native copy, the nvme driver receives read and write
requests with the BLK_COPY op flag. For the read request the driver populates
the payload with the source information. For the write request the driver
converts it to an nvme copy command using the source information in the
payload and submits it to the device. The current design supports only a
single source range.
This design is courtesy Mikulas Patocka's token based copy trace event support for nvme_copy_cmd. Set the device copy limits to queue limits. Signed-off-by: Kanchan Joshi Signed-off-by: Nitesh Shetty Signed-off-by: Javier González Signed-off-by: Arnav Dawn Reported-by: kernel test robot --- drivers/nvme/host/core.c | 116 +++++++++++++++++++++++++++++++++++++- drivers/nvme/host/fc.c | 4 ++ drivers/nvme/host/nvme.h | 7 +++ drivers/nvme/host/pci.c | 25 ++++++++ drivers/nvme/host/rdma.c | 6 ++ drivers/nvme/host/tcp.c | 14 +++++ drivers/nvme/host/trace.c | 19 +++++++ include/linux/nvme.h | 43 +++++++++++++- 8 files changed, 229 insertions(+), 5 deletions(-) diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index b9b0fbde97c8..9cbc8faace78 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -724,6 +724,87 @@ static inline void nvme_setup_flush(struct nvme_ns *ns, cmnd->common.nsid = cpu_to_le32(ns->head->ns_id); } +static inline blk_status_t nvme_setup_copy_read(struct nvme_ns *ns, struct request *req) +{ + struct bio *bio = req->bio; + struct nvme_copy_token *token = bvec_kmap_local(&bio->bi_io_vec[0]); + + memcpy(token->subsys, "nvme", 4); + token->ns = ns; + token->src_sector = bio->bi_iter.bi_sector; + token->sectors = bio->bi_iter.bi_size >> 9; + + return BLK_STS_OK; +} + +static inline blk_status_t nvme_setup_copy_write(struct nvme_ns *ns, + struct request *req, struct nvme_command *cmnd) +{ + struct nvme_ctrl *ctrl = ns->ctrl; + struct nvme_copy_range *range = NULL; + struct bio *bio = req->bio; + struct nvme_copy_token *token = bvec_kmap_local(&bio->bi_io_vec[0]); + sector_t src_sector, dst_sector, n_sectors; + u64 src_lba, dst_lba, n_lba; + unsigned short nr_range = 1; + u16 control = 0; + u32 dsmgmt = 0; + + if (unlikely(memcmp(token->subsys, "nvme", 4))) + return BLK_STS_NOTSUPP; + if (unlikely(token->ns != ns)) + return BLK_STS_NOTSUPP; + + src_sector = token->src_sector; + dst_sector = bio->bi_iter.bi_sector; + n_sectors = 
token->sectors; + if (WARN_ON(n_sectors != bio->bi_iter.bi_size >> 9)) + return BLK_STS_NOTSUPP; + + src_lba = nvme_sect_to_lba(ns, src_sector); + dst_lba = nvme_sect_to_lba(ns, dst_sector); + n_lba = nvme_sect_to_lba(ns, n_sectors); + + if (unlikely(nvme_lba_to_sect(ns, src_lba) != src_sector) || + unlikely(nvme_lba_to_sect(ns, dst_lba) != dst_sector) || + unlikely(nvme_lba_to_sect(ns, n_lba) != n_sectors)) + return BLK_STS_NOTSUPP; + + if (WARN_ON(!n_lba)) + return BLK_STS_NOTSUPP; + + if (req->cmd_flags & REQ_FUA) + control |= NVME_RW_FUA; + + if (req->cmd_flags & REQ_FAILFAST_DEV) + control |= NVME_RW_LR; + + memset(cmnd, 0, sizeof(*cmnd)); + cmnd->copy.opcode = nvme_cmd_copy; + cmnd->copy.nsid = cpu_to_le32(ns->head->ns_id); + cmnd->copy.sdlba = cpu_to_le64(dst_lba); + + range = kmalloc_array(nr_range, sizeof(*range), + GFP_ATOMIC | __GFP_NOWARN); + if (!range) + return BLK_STS_RESOURCE; + + range[0].slba = cpu_to_le64(src_lba); + range[0].nlb = cpu_to_le16(n_lba - 1); + + cmnd->copy.nr_range = 0; + + req->special_vec.bv_page = virt_to_page(range); + req->special_vec.bv_offset = offset_in_page(range); + req->special_vec.bv_len = sizeof(*range) * nr_range; + req->rq_flags |= RQF_SPECIAL_PAYLOAD; + + cmnd->copy.control = cpu_to_le16(control); + cmnd->copy.dspec = cpu_to_le32(dsmgmt); + + return BLK_STS_OK; +} + static blk_status_t nvme_setup_discard(struct nvme_ns *ns, struct request *req, struct nvme_command *cmnd) { @@ -947,10 +1028,16 @@ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req) ret = nvme_setup_discard(ns, req, cmd); break; case REQ_OP_READ: - ret = nvme_setup_rw(ns, req, cmd, nvme_cmd_read); + if (unlikely(req->cmd_flags & REQ_COPY)) + ret = nvme_setup_copy_read(ns, req); + else + ret = nvme_setup_rw(ns, req, cmd, nvme_cmd_read); break; case REQ_OP_WRITE: - ret = nvme_setup_rw(ns, req, cmd, nvme_cmd_write); + if (unlikely(req->cmd_flags & REQ_COPY)) + ret = nvme_setup_copy_write(ns, req, cmd); + else + ret = nvme_setup_rw(ns, req, 
cmd, nvme_cmd_write); break; case REQ_OP_ZONE_APPEND: ret = nvme_setup_rw(ns, req, cmd, nvme_cmd_zone_append); @@ -1642,6 +1729,29 @@ static void nvme_config_discard(struct gendisk *disk, struct nvme_ns *ns) blk_queue_max_write_zeroes_sectors(queue, UINT_MAX); } +static void nvme_config_copy(struct gendisk *disk, struct nvme_ns *ns, + struct nvme_id_ns *id) +{ + struct nvme_ctrl *ctrl = ns->ctrl; + struct request_queue *q = disk->queue; + + if (!(ctrl->oncs & NVME_CTRL_ONCS_COPY)) { + blk_queue_max_copy_sectors(q, 0); + blk_queue_max_copy_range_sectors(q, 0); + blk_queue_max_copy_nr_ranges(q, 0); + blk_queue_flag_clear(QUEUE_FLAG_COPY, q); + return; + } + + /* setting copy limits */ + if (blk_queue_flag_test_and_set(QUEUE_FLAG_COPY, q)) + return; + + blk_queue_max_copy_sectors(q, nvme_lba_to_sect(ns, le32_to_cpu(id->mcl))); + blk_queue_max_copy_range_sectors(q, nvme_lba_to_sect(ns, le16_to_cpu(id->mssrl))); + blk_queue_max_copy_nr_ranges(q, id->msrc + 1); +} + static bool nvme_ns_ids_equal(struct nvme_ns_ids *a, struct nvme_ns_ids *b) { return uuid_equal(&a->uuid, &b->uuid) && @@ -1841,6 +1951,7 @@ static void nvme_update_disk_info(struct gendisk *disk, set_capacity_and_notify(disk, capacity); nvme_config_discard(disk, ns); + nvme_config_copy(disk, ns, id); blk_queue_max_write_zeroes_sectors(disk->queue, ns->ctrl->max_zeroes_sectors); } @@ -4833,6 +4944,7 @@ static inline void _nvme_check_size(void) BUILD_BUG_ON(sizeof(struct nvme_download_firmware) != 64); BUILD_BUG_ON(sizeof(struct nvme_format_cmd) != 64); BUILD_BUG_ON(sizeof(struct nvme_dsm_cmd) != 64); + BUILD_BUG_ON(sizeof(struct nvme_copy_command) != 64); BUILD_BUG_ON(sizeof(struct nvme_write_zeroes_cmd) != 64); BUILD_BUG_ON(sizeof(struct nvme_abort_cmd) != 64); BUILD_BUG_ON(sizeof(struct nvme_get_log_page_command) != 64); diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c index 080f85f4105f..0fea231b7ccb 100644 --- a/drivers/nvme/host/fc.c +++ b/drivers/nvme/host/fc.c @@ -2788,6 +2788,10 @@ 
nvme_fc_queue_rq(struct blk_mq_hw_ctx *hctx, if (ret) return ret; + if (unlikely((rq->cmd_flags & REQ_COPY) && (req_op(rq) == REQ_OP_READ))) { + blk_mq_end_request(rq, BLK_STS_OK); + return BLK_STS_OK; + } /* * nvme core doesn't quite treat the rq opaquely. Commands such * as WRITE ZEROES will return a non-zero rq payload_bytes yet diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h index a2b53ca63335..dc51fc647f23 100644 --- a/drivers/nvme/host/nvme.h +++ b/drivers/nvme/host/nvme.h @@ -482,6 +482,13 @@ struct nvme_ns { }; +struct nvme_copy_token { + char subsys[4]; + struct nvme_ns *ns; + u64 src_sector; + u64 sectors; +}; + /* NVMe ns supports metadata actions by the controller (generate/strip) */ static inline bool nvme_ns_has_pi(struct nvme_ns *ns) { diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index 3aacf1c0d5a5..b9081c983b6f 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -511,6 +511,14 @@ static inline void nvme_sq_copy_cmd(struct nvme_queue *nvmeq, nvmeq->sq_tail = 0; } +static void nvme_commit_sqdb(struct nvme_queue *nvmeq) +{ + spin_lock(&nvmeq->sq_lock); + if (nvmeq->sq_tail != nvmeq->last_sq_tail) + nvme_write_sq_db(nvmeq, true); + spin_unlock(&nvmeq->sq_lock); +} + static void nvme_commit_rqs(struct blk_mq_hw_ctx *hctx) { struct nvme_queue *nvmeq = hctx->driver_data; @@ -918,6 +926,11 @@ static blk_status_t nvme_prep_rq(struct nvme_dev *dev, struct request *req) if (ret) return ret; + if (unlikely((req->cmd_flags & REQ_COPY) && (req_op(req) == REQ_OP_READ))) { + blk_mq_start_request(req); + return BLK_STS_OK; + } + if (blk_rq_nr_phys_segments(req)) { ret = nvme_map_data(dev, req, &iod->cmd); if (ret) @@ -931,6 +944,7 @@ static blk_status_t nvme_prep_rq(struct nvme_dev *dev, struct request *req) } blk_mq_start_request(req); + return BLK_STS_OK; out_unmap_data: nvme_unmap_data(dev, req); @@ -964,6 +978,17 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx, ret = nvme_prep_rq(dev, req); if 
(unlikely(ret)) return ret; + if (unlikely((req->cmd_flags & REQ_COPY) && (req_op(req) == REQ_OP_READ))) { + blk_mq_set_request_complete(req); + blk_mq_end_request(req, BLK_STS_OK); + /* Commit the sq if copy read was the last req in the list, + * as copy read deoesn't update sq db + */ + if (bd->last) + nvme_commit_sqdb(nvmeq); + return ret; + } + spin_lock(&nvmeq->sq_lock); nvme_sq_copy_cmd(nvmeq, &iod->cmd); nvme_write_sq_db(nvmeq, bd->last); diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c index 5a69a45c5bd6..78af337c51bb 100644 --- a/drivers/nvme/host/rdma.c +++ b/drivers/nvme/host/rdma.c @@ -2087,6 +2087,12 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx, if (ret) goto unmap_qe; + if (unlikely((rq->cmd_flags & REQ_COPY) && (req_op(rq) == REQ_OP_READ))) { + blk_mq_end_request(rq, BLK_STS_OK); + ret = BLK_STS_OK; + goto unmap_qe; + } + blk_mq_start_request(rq); if (IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY) && diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c index ad3a2bf2f1e9..4e4cdcf8210a 100644 --- a/drivers/nvme/host/tcp.c +++ b/drivers/nvme/host/tcp.c @@ -2394,6 +2394,11 @@ static blk_status_t nvme_tcp_setup_cmd_pdu(struct nvme_ns *ns, if (ret) return ret; + if (unlikely((rq->cmd_flags & REQ_COPY) && (req_op(rq) == REQ_OP_READ))) { + blk_mq_start_request(req); + return BLK_STS_OK; + } + req->state = NVME_TCP_SEND_CMD_PDU; req->status = cpu_to_le16(NVME_SC_SUCCESS); req->offset = 0; @@ -2462,6 +2467,15 @@ static blk_status_t nvme_tcp_queue_rq(struct blk_mq_hw_ctx *hctx, blk_mq_start_request(rq); + if (unlikely((rq->cmd_flags & REQ_COPY) && (req_op(rq) == REQ_OP_READ))) { + blk_mq_set_request_complete(rq); + blk_mq_end_request(rq, BLK_STS_OK); + /* if copy read is the last req queue tcp reqs */ + if (bd->last && nvme_tcp_queue_more(queue)) + queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work); + return ret; + } + nvme_tcp_queue_request(req, true, bd->last); return BLK_STS_OK; diff --git 
a/drivers/nvme/host/trace.c b/drivers/nvme/host/trace.c index 2a89c5aa0790..ab72bf546a13 100644 --- a/drivers/nvme/host/trace.c +++ b/drivers/nvme/host/trace.c @@ -150,6 +150,23 @@ static const char *nvme_trace_read_write(struct trace_seq *p, u8 *cdw10) return ret; } +static const char *nvme_trace_copy(struct trace_seq *p, u8 *cdw10) +{ + const char *ret = trace_seq_buffer_ptr(p); + u64 slba = get_unaligned_le64(cdw10); + u8 nr_range = get_unaligned_le16(cdw10 + 8); + u16 control = get_unaligned_le16(cdw10 + 10); + u32 dsmgmt = get_unaligned_le32(cdw10 + 12); + u32 reftag = get_unaligned_le32(cdw10 + 16); + + trace_seq_printf(p, + "slba=%llu, nr_range=%u, ctrl=0x%x, dsmgmt=%u, reftag=%u", + slba, nr_range, control, dsmgmt, reftag); + trace_seq_putc(p, 0); + + return ret; +} + static const char *nvme_trace_dsm(struct trace_seq *p, u8 *cdw10) { const char *ret = trace_seq_buffer_ptr(p); @@ -243,6 +260,8 @@ const char *nvme_trace_parse_nvm_cmd(struct trace_seq *p, return nvme_trace_zone_mgmt_send(p, cdw10); case nvme_cmd_zone_mgmt_recv: return nvme_trace_zone_mgmt_recv(p, cdw10); + case nvme_cmd_copy: + return nvme_trace_copy(p, cdw10); default: return nvme_trace_common(p, cdw10); } diff --git a/include/linux/nvme.h b/include/linux/nvme.h index f626a445d1a8..ec12492b3063 100644 --- a/include/linux/nvme.h +++ b/include/linux/nvme.h @@ -316,7 +316,7 @@ struct nvme_id_ctrl { __u8 nvscc; __u8 nwpc; __le16 acwu; - __u8 rsvd534[2]; + __le16 ocfs; __le32 sgls; __le32 mnan; __u8 rsvd544[224]; @@ -344,6 +344,7 @@ enum { NVME_CTRL_ONCS_WRITE_ZEROES = 1 << 3, NVME_CTRL_ONCS_RESERVATIONS = 1 << 5, NVME_CTRL_ONCS_TIMESTAMP = 1 << 6, + NVME_CTRL_ONCS_COPY = 1 << 8, NVME_CTRL_VWC_PRESENT = 1 << 0, NVME_CTRL_OACS_SEC_SUPP = 1 << 0, NVME_CTRL_OACS_NS_MNGT_SUPP = 1 << 3, @@ -393,7 +394,10 @@ struct nvme_id_ns { __le16 npdg; __le16 npda; __le16 nows; - __u8 rsvd74[18]; + __le16 mssrl; + __le32 mcl; + __u8 msrc; + __u8 rsvd91[11]; __le32 anagrpid; __u8 rsvd96[3]; __u8 nsattr; @@ -750,6 
+754,7 @@ enum nvme_opcode { nvme_cmd_resv_report = 0x0e, nvme_cmd_resv_acquire = 0x11, nvme_cmd_resv_release = 0x15, + nvme_cmd_copy = 0x19, nvme_cmd_zone_mgmt_send = 0x79, nvme_cmd_zone_mgmt_recv = 0x7a, nvme_cmd_zone_append = 0x7d, @@ -771,7 +776,8 @@ enum nvme_opcode { nvme_opcode_name(nvme_cmd_resv_release), \ nvme_opcode_name(nvme_cmd_zone_mgmt_send), \ nvme_opcode_name(nvme_cmd_zone_mgmt_recv), \ - nvme_opcode_name(nvme_cmd_zone_append)) + nvme_opcode_name(nvme_cmd_zone_append), \ + nvme_opcode_name(nvme_cmd_copy)) @@ -945,6 +951,36 @@ struct nvme_dsm_range { __le64 slba; }; +struct nvme_copy_command { + __u8 opcode; + __u8 flags; + __u16 command_id; + __le32 nsid; + __u64 rsvd2; + __le64 metadata; + union nvme_data_ptr dptr; + __le64 sdlba; + __u8 nr_range; + __u8 rsvd12; + __le16 control; + __le16 rsvd13; + __le16 dspec; + __le32 ilbrt; + __le16 lbat; + __le16 lbatm; +}; + +struct nvme_copy_range { + __le64 rsvd0; + __le64 slba; + __le16 nlb; + __le16 rsvd18; + __le32 rsvd20; + __le32 eilbrt; + __le16 elbat; + __le16 elbatm; +}; + struct nvme_write_zeroes_cmd { __u8 opcode; __u8 flags; @@ -1499,6 +1535,7 @@ struct nvme_command { struct nvme_download_firmware dlfw; struct nvme_format_cmd format; struct nvme_dsm_cmd dsm; + struct nvme_copy_command copy; struct nvme_write_zeroes_cmd write_zeroes; struct nvme_zone_mgmt_send_cmd zms; struct nvme_zone_mgmt_recv_cmd zmr; From patchwork Tue Apr 26 10:12:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nitesh Shetty X-Patchwork-Id: 12826987 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 69A46C433EF for ; Tue, 26 Apr 2022 12:16:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1349874AbiDZMTR (ORCPT ); Tue, 26 Apr 2022 
From: Nitesh Shetty
Subject: [PATCH v4 06/10] nvmet: add copy command support for bdev and file ns
Date: Tue, 26 Apr 2022 15:42:34 +0530
Message-Id: <20220426101241.30100-7-nj.shetty@samsung.com>
In-Reply-To: <20220426101241.30100-1-nj.shetty@samsung.com>
References: <20220426101241.30100-1-nj.shetty@samsung.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

From: Arnav Dawn

Add support for handling the copy command on the target. For a bdev-backed
namespace we call into blkdev_issue_copy(), which the block layer completes
either with an offloaded copy request to the backend bdev or by emulating the
request. For a file-backed namespace we call vfs_copy_file_range() to service
the request. Currently the target always advertises copy capability by setting
NVME_CTRL_ONCS_COPY in the controller ONCS field.
Signed-off-by: Arnav Dawn Signed-off-by: Nitesh Shetty Reported-by: kernel test robot --- drivers/nvme/host/tcp.c | 2 +- drivers/nvme/target/admin-cmd.c | 8 +++- drivers/nvme/target/io-cmd-bdev.c | 65 +++++++++++++++++++++++++++++++ drivers/nvme/target/io-cmd-file.c | 49 +++++++++++++++++++++++ 4 files changed, 121 insertions(+), 3 deletions(-) diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c index 4e4cdcf8210a..2c77e5b596bb 100644 --- a/drivers/nvme/host/tcp.c +++ b/drivers/nvme/host/tcp.c @@ -2395,7 +2395,7 @@ static blk_status_t nvme_tcp_setup_cmd_pdu(struct nvme_ns *ns, return ret; if (unlikely((rq->cmd_flags & REQ_COPY) && (req_op(rq) == REQ_OP_READ))) { - blk_mq_start_request(req); + blk_mq_start_request(rq); return BLK_STS_OK; } diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c index 397daaf51f1b..db32debdb528 100644 --- a/drivers/nvme/target/admin-cmd.c +++ b/drivers/nvme/target/admin-cmd.c @@ -431,8 +431,7 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req) id->nn = cpu_to_le32(NVMET_MAX_NAMESPACES); id->mnan = cpu_to_le32(NVMET_MAX_NAMESPACES); id->oncs = cpu_to_le16(NVME_CTRL_ONCS_DSM | - NVME_CTRL_ONCS_WRITE_ZEROES); - + NVME_CTRL_ONCS_WRITE_ZEROES | NVME_CTRL_ONCS_COPY); /* XXX: don't report vwc if the underlying device is write through */ id->vwc = NVME_CTRL_VWC_PRESENT; @@ -534,6 +533,11 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req) if (req->ns->bdev) nvmet_bdev_set_limits(req->ns->bdev, id); + else { + id->msrc = to0based(BIO_MAX_VECS); + id->mssrl = cpu_to_le16(BIO_MAX_VECS << (PAGE_SHIFT - SECTOR_SHIFT)); + id->mcl = cpu_to_le32(le16_to_cpu(id->mssrl) * BIO_MAX_VECS); + } /* * We just provide a single LBA format that matches what the diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c index 27a72504d31c..18666d36423f 100644 --- a/drivers/nvme/target/io-cmd-bdev.c +++ b/drivers/nvme/target/io-cmd-bdev.c @@ -47,6 +47,30 @@ void 
nvmet_bdev_set_limits(struct block_device *bdev, struct nvme_id_ns *id)
 	id->npda = id->npdg;
 	/* NOWS = Namespace Optimal Write Size */
 	id->nows = to0based(ql->io_opt / ql->logical_block_size);
+
+	/* Copy limits */
+	if (ql->max_copy_sectors) {
+		id->mcl = cpu_to_le32((ql->max_copy_sectors << 9) /
+				ql->logical_block_size);
+		id->mssrl = cpu_to_le16((ql->max_copy_range_sectors << 9) /
+				ql->logical_block_size);
+		id->msrc = to0based(ql->max_copy_nr_ranges);
+	} else {
+		if (ql->zoned == BLK_ZONED_NONE) {
+			id->msrc = to0based(BIO_MAX_VECS);
+			id->mssrl = cpu_to_le16(
+				(BIO_MAX_VECS << PAGE_SHIFT) / ql->logical_block_size);
+			id->mcl = cpu_to_le32(le16_to_cpu(id->mssrl) * BIO_MAX_VECS);
+#ifdef CONFIG_BLK_DEV_ZONED
+		} else {
+			/* TODO: get right values for zoned device */
+			id->msrc = to0based(BIO_MAX_VECS);
+			id->mssrl = cpu_to_le16(min((BIO_MAX_VECS << PAGE_SHIFT),
+					ql->chunk_sectors) / ql->logical_block_size);
+			id->mcl = cpu_to_le32(min(le16_to_cpu(id->mssrl) * BIO_MAX_VECS,
+					ql->chunk_sectors));
+#endif
+		}
+	}
 }
 
 void nvmet_bdev_ns_disable(struct nvmet_ns *ns)
@@ -442,6 +466,43 @@ static void nvmet_bdev_execute_write_zeroes(struct nvmet_req *req)
 	}
 }
 
+static void nvmet_bdev_execute_copy(struct nvmet_req *req)
+{
+	struct nvme_copy_range range;
+	struct range_entry *rlist;
+	struct nvme_command *cmnd = req->cmd;
+	sector_t dest, dest_off = 0;
+	int ret, id, nr_range;
+
+	nr_range = cmnd->copy.nr_range + 1;
+	dest = le64_to_cpu(cmnd->copy.sdlba) << req->ns->blksize_shift;
+	rlist = kmalloc_array(nr_range, sizeof(*rlist), GFP_KERNEL);
+
+	for (id = 0 ; id < nr_range; id++) {
+		ret = nvmet_copy_from_sgl(req, id * sizeof(range), &range,
+					sizeof(range));
+		if (ret)
+			goto out;
+
+		rlist[id].dst = dest + dest_off;
+		rlist[id].src = le64_to_cpu(range.slba) << req->ns->blksize_shift;
+		rlist[id].len = (le16_to_cpu(range.nlb) + 1) << req->ns->blksize_shift;
+		rlist[id].comp_len = 0;
+		dest_off += rlist[id].len;
+	}
+	ret = blkdev_issue_copy(req->ns->bdev, nr_range, rlist,
+				req->ns->bdev, GFP_KERNEL);
+	if (ret) {
+		for (id = 0 ; id < nr_range; id++) {
+			if (rlist[id].len != rlist[id].comp_len) {
+				req->cqe->result.u32 = cpu_to_le32(id);
+				break;
+			}
+		}
+	}
+out:
+	kfree(rlist);
+	nvmet_req_complete(req, errno_to_nvme_status(req, ret));
+}
+
 u16 nvmet_bdev_parse_io_cmd(struct nvmet_req *req)
 {
 	switch (req->cmd->common.opcode) {
@@ -460,6 +521,10 @@ u16 nvmet_bdev_parse_io_cmd(struct nvmet_req *req)
 	case nvme_cmd_write_zeroes:
 		req->execute = nvmet_bdev_execute_write_zeroes;
 		return 0;
+	case nvme_cmd_copy:
+		req->execute = nvmet_bdev_execute_copy;
+		return 0;
+
 	default:
 		return nvmet_report_invalid_opcode(req);
 	}
diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c
index f3d58abf11e0..fe26a9120436 100644
--- a/drivers/nvme/target/io-cmd-file.c
+++ b/drivers/nvme/target/io-cmd-file.c
@@ -338,6 +338,46 @@ static void nvmet_file_dsm_work(struct work_struct *w)
 	}
 }
 
+static void nvmet_file_copy_work(struct work_struct *w)
+{
+	struct nvmet_req *req = container_of(w, struct nvmet_req, f.work);
+	int nr_range;
+	loff_t pos;
+	struct nvme_command *cmnd = req->cmd;
+	int ret = 0, len = 0, src, id;
+
+	nr_range = cmnd->copy.nr_range + 1;
+	pos = le64_to_cpu(req->cmd->copy.sdlba) << req->ns->blksize_shift;
+	if (unlikely(pos + req->transfer_len > req->ns->size)) {
+		nvmet_req_complete(req, errno_to_nvme_status(req, -ENOSPC));
+		return;
+	}
+
+	for (id = 0 ; id < nr_range; id++) {
+		struct nvme_copy_range range;
+
+		ret = nvmet_copy_from_sgl(req, id * sizeof(range), &range,
+					sizeof(range));
+		if (ret)
+			goto out;
+
+		len = (le16_to_cpu(range.nlb) + 1) << (req->ns->blksize_shift);
+		src = (le64_to_cpu(range.slba) << (req->ns->blksize_shift));
+		ret = vfs_copy_file_range(req->ns->file, src, req->ns->file,
+					pos, len, 0);
+out:
+		if (ret != len) {
+			pos += ret;
+			req->cqe->result.u32 = cpu_to_le32(id);
+			nvmet_req_complete(req, ret < 0 ?
+					errno_to_nvme_status(req, ret) :
+					errno_to_nvme_status(req, -EIO));
+			return;
+
+		} else
+			pos += len;
+	}
+
+	nvmet_req_complete(req, ret);
+
+}
 static void nvmet_file_execute_dsm(struct nvmet_req *req)
 {
 	if (!nvmet_check_data_len_lte(req, nvmet_dsm_len(req)))
@@ -346,6 +386,12 @@ static void nvmet_file_execute_dsm(struct nvmet_req *req)
 	queue_work(nvmet_wq, &req->f.work);
 }
 
+static void nvmet_file_execute_copy(struct nvmet_req *req)
+{
+	INIT_WORK(&req->f.work, nvmet_file_copy_work);
+	schedule_work(&req->f.work);
+}
+
 static void nvmet_file_write_zeroes_work(struct work_struct *w)
 {
 	struct nvmet_req *req = container_of(w, struct nvmet_req, f.work);
@@ -392,6 +438,9 @@ u16 nvmet_file_parse_io_cmd(struct nvmet_req *req)
 	case nvme_cmd_write_zeroes:
 		req->execute = nvmet_file_execute_write_zeroes;
 		return 0;
+	case nvme_cmd_copy:
+		req->execute = nvmet_file_execute_copy;
+		return 0;
 	default:
 		return nvmet_report_invalid_opcode(req);
 	}

From patchwork Tue Apr 26 10:12:35 2022
X-Patchwork-Submitter: Nitesh Shetty
X-Patchwork-Id: 12826992
From: Nitesh Shetty
Cc: chaitanyak@nvidia.com, linux-block@vger.kernel.org,
 linux-scsi@vger.kernel.org, dm-devel@redhat.com,
 linux-nvme@lists.infradead.org, linux-fsdevel@vger.kernel.org,
 axboe@kernel.dk, msnitzer@redhat.com, bvanassche@acm.org,
 martin.petersen@oracle.com, hare@suse.de, kbusch@kernel.org, hch@lst.de,
 Frederick.Knight@netapp.com, osandov@fb.com,
 lsf-pc@lists.linux-foundation.org, djwong@kernel.org,
 josef@toxicpanda.com, clm@fb.com, dsterba@suse.com, tytso@mit.edu,
 jack@suse.com, nitheshshetty@gmail.com, gost.dev@samsung.com,
 Nitesh Shetty, Alasdair Kergon, Mike Snitzer, Sagi Grimberg,
 James Smart, Chaitanya Kulkarni, Damien Le Moal, Naohiro Aota,
 Johannes Thumshirn, Alexander Viro, linux-kernel@vger.kernel.org
Subject: [PATCH v4 07/10] dm: Add support for copy offload.
Date: Tue, 26 Apr 2022 15:42:35 +0530
Message-Id: <20220426101241.30100-8-nj.shetty@samsung.com>
In-Reply-To: <20220426101241.30100-1-nj.shetty@samsung.com>

Before enabling copy for a dm target, check that both the underlying
devices and the dm target support copy. Avoid splits inside the dm
target: fail early if the request would need a split, since splitting
a copy request is currently not supported.
Signed-off-by: Nitesh Shetty
Reported-by: kernel test robot
---
 drivers/md/dm-table.c         | 45 +++++++++++++++++++++++++++++++++++
 drivers/md/dm.c               |  6 +++++
 include/linux/device-mapper.h |  5 ++++
 3 files changed, 56 insertions(+)

diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index a37c7b763643..b7574f179ed6 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1893,6 +1893,38 @@ static bool dm_table_supports_nowait(struct dm_table *t)
 	return true;
 }
 
+static int device_not_copy_capable(struct dm_target *ti, struct dm_dev *dev,
+		sector_t start, sector_t len, void *data)
+{
+	struct request_queue *q = bdev_get_queue(dev->bdev);
+
+	return !blk_queue_copy(q);
+}
+
+static bool dm_table_supports_copy(struct dm_table *t)
+{
+	struct dm_target *ti;
+	unsigned int i;
+
+	for (i = 0; i < dm_table_get_num_targets(t); i++) {
+		ti = dm_table_get_target(t, i);
+
+		if (!ti->copy_offload_supported)
+			return false;
+
+		/*
+		 * target provides copy support (as implied by setting
+		 * 'copy_offload_supported') and it relies on _all_ data
+		 * devices having copy support.
+		 */
+		if (ti->copy_offload_supported &&
+		    (!ti->type->iterate_devices ||
+		     ti->type->iterate_devices(ti, device_not_copy_capable, NULL)))
+			return false;
+	}
+
+	return true;
+}
+
 static int device_not_discard_capable(struct dm_target *ti, struct dm_dev *dev,
 				      sector_t start, sector_t len, void *data)
 {
@@ -1981,6 +2013,19 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 		q->limits.discard_misaligned = 0;
 	}
 
+	if (!dm_table_supports_copy(t)) {
+		blk_queue_flag_clear(QUEUE_FLAG_COPY, q);
+		/* Must also clear copy limits... */
+		q->limits.max_copy_sectors = 0;
+		q->limits.max_hw_copy_sectors = 0;
+		q->limits.max_copy_range_sectors = 0;
+		q->limits.max_hw_copy_range_sectors = 0;
+		q->limits.max_copy_nr_ranges = 0;
+		q->limits.max_hw_copy_nr_ranges = 0;
+	} else {
+		blk_queue_flag_set(QUEUE_FLAG_COPY, q);
+	}
+
 	if (!dm_table_supports_secure_erase(t))
 		q->limits.max_secure_erase_sectors = 0;
 
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 7e3b5bdcf520..b995de127093 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1595,6 +1595,12 @@ static blk_status_t __split_and_process_bio(struct clone_info *ci)
 	else if (unlikely(ci->is_abnormal_io))
 		return __process_abnormal_io(ci, ti);
 
+	if ((unlikely(op_is_copy(ci->bio->bi_opf)) &&
+			max_io_len(ti, ci->sector) < ci->sector_count)) {
+		DMERR("%s: Error IO size(%u) is greater than maximum target size(%llu)\n",
+			__func__, ci->sector_count, max_io_len(ti, ci->sector));
+		return -EIO;
+	}
 	/*
 	 * Only support bio polling for normal IO, and the target io is
 	 * exactly inside the dm_io instance (verified in dm_poll_dm_io)
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index c2a3758c4aaa..9304e640c9b9 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -362,6 +362,11 @@ struct dm_target {
 	 * after returning DM_MAPIO_SUBMITTED from its map function.
 	 */
 	bool accounts_remapped_io:1;
+
+	/*
+	 * copy offload is supported
+	 */
+	bool copy_offload_supported:1;
 };
 
 void *dm_per_bio_data(struct bio *bio, size_t data_size);

From patchwork Tue Apr 26 10:12:36 2022
X-Patchwork-Submitter: Nitesh Shetty
X-Patchwork-Id: 12826988
From: Nitesh Shetty
Subject: [PATCH v4 08/10] dm: Enable copy offload for dm-linear target
Date: Tue, 26 Apr 2022 15:42:36 +0530
Message-Id: <20220426101241.30100-9-nj.shetty@samsung.com>
In-Reply-To: <20220426101241.30100-1-nj.shetty@samsung.com>

Set the copy_offload_supported flag to enable copy offload.
Signed-off-by: Nitesh Shetty
---
 drivers/md/dm-linear.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/md/dm-linear.c b/drivers/md/dm-linear.c
index 0a6abbbe3745..3b8de6d5ca9c 100644
--- a/drivers/md/dm-linear.c
+++ b/drivers/md/dm-linear.c
@@ -61,6 +61,7 @@ static int linear_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	ti->num_discard_bios = 1;
 	ti->num_secure_erase_bios = 1;
 	ti->num_write_zeroes_bios = 1;
+	ti->copy_offload_supported = 1;
 	ti->private = lc;
 
 	return 0;

From patchwork Tue Apr 26 10:12:37 2022
X-Patchwork-Submitter: Nitesh Shetty
X-Patchwork-Id: 12826989
From: Nitesh Shetty
Subject: [PATCH v4 09/10] dm kcopyd: use copy offload support
Date: Tue, 26 Apr 2022 15:42:37 +0530
Message-Id: <20220426101241.30100-10-nj.shetty@samsung.com>
In-Reply-To: <20220426101241.30100-1-nj.shetty@samsung.com>

From: SelvaKumar S

Introduce copy_jobs to use copy offload if it is supported by the
underlying devices; otherwise fall back to the existing method.

run_copy_job() calls the block layer copy offload API when the source
and destination request queues are the same and support copy offload.
On successful completion, the destination region's copied count is set
to zero; failed regions are processed via the existing method.

Signed-off-by: SelvaKumar S
Signed-off-by: Arnav Dawn
Signed-off-by: Nitesh Shetty
---
 drivers/md/dm-kcopyd.c | 55 +++++++++++++++++++++++++++++++++++++-----
 1 file changed, 49 insertions(+), 6 deletions(-)

diff --git a/drivers/md/dm-kcopyd.c b/drivers/md/dm-kcopyd.c
index 37b03ab7e5c9..214fadd6d71f 100644
--- a/drivers/md/dm-kcopyd.c
+++ b/drivers/md/dm-kcopyd.c
@@ -74,18 +74,20 @@ struct dm_kcopyd_client {
 	atomic_t nr_jobs;
 
 	/*
-	 * We maintain four lists of jobs:
+	 * We maintain five lists of jobs:
 	 *
-	 * i) jobs waiting for pages
-	 * ii) jobs that have pages, and are waiting for the io to be issued.
-	 * iii) jobs that don't need to do any IO and just run a callback
-	 * iv) jobs that have completed.
+	 * i) jobs waiting to try copy offload
+	 * ii) jobs waiting for pages
+	 * iii) jobs that have pages, and are waiting for the io to be issued.
+	 * iv) jobs that don't need to do any IO and just run a callback
+	 * v) jobs that have completed.
 	 *
-	 * All four of these are protected by job_lock.
+	 * All five of these are protected by job_lock.
*/ spinlock_t job_lock; struct list_head callback_jobs; struct list_head complete_jobs; + struct list_head copy_jobs; struct list_head io_jobs; struct list_head pages_jobs; }; @@ -579,6 +581,42 @@ static int run_io_job(struct kcopyd_job *job) return r; } +static int run_copy_job(struct kcopyd_job *job) +{ + int r, i, count = 0; + struct range_entry range; + + struct request_queue *src_q, *dest_q; + + for (i = 0; i < job->num_dests; i++) { + range.dst = job->dests[i].sector << SECTOR_SHIFT; + range.src = job->source.sector << SECTOR_SHIFT; + range.len = job->source.count << SECTOR_SHIFT; + + src_q = bdev_get_queue(job->source.bdev); + dest_q = bdev_get_queue(job->dests[i].bdev); + + if (src_q != dest_q || !blk_queue_copy(src_q)) + break; + + r = blkdev_issue_copy(job->source.bdev, 1, &range, job->dests[i].bdev, GFP_KERNEL); + if (r) + break; + + job->dests[i].count = 0; + count++; + } + + if (count == job->num_dests) { + push(&job->kc->complete_jobs, job); + } else { + push(&job->kc->pages_jobs, job); + r = 0; + } + + return r; +} + static int run_pages_job(struct kcopyd_job *job) { int r; @@ -659,6 +697,7 @@ static void do_work(struct work_struct *work) spin_unlock_irq(&kc->job_lock); blk_start_plug(&plug); + process_jobs(&kc->copy_jobs, kc, run_copy_job); process_jobs(&kc->complete_jobs, kc, run_complete_job); process_jobs(&kc->pages_jobs, kc, run_pages_job); process_jobs(&kc->io_jobs, kc, run_io_job); @@ -676,6 +715,8 @@ static void dispatch_job(struct kcopyd_job *job) atomic_inc(&kc->nr_jobs); if (unlikely(!job->source.count)) push(&kc->callback_jobs, job); + else if (job->source.bdev->bd_disk == job->dests[0].bdev->bd_disk) + push(&kc->copy_jobs, job); else if (job->pages == &zero_page_list) push(&kc->io_jobs, job); else @@ -916,6 +957,7 @@ struct dm_kcopyd_client *dm_kcopyd_client_create(struct dm_kcopyd_throttle *thro spin_lock_init(&kc->job_lock); INIT_LIST_HEAD(&kc->callback_jobs); INIT_LIST_HEAD(&kc->complete_jobs); + INIT_LIST_HEAD(&kc->copy_jobs); 
 	INIT_LIST_HEAD(&kc->io_jobs);
 	INIT_LIST_HEAD(&kc->pages_jobs);

 	kc->throttle = throttle;
@@ -971,6 +1013,7 @@ void dm_kcopyd_client_destroy(struct dm_kcopyd_client *kc)
 	BUG_ON(!list_empty(&kc->callback_jobs));
 	BUG_ON(!list_empty(&kc->complete_jobs));
+	WARN_ON(!list_empty(&kc->copy_jobs));
 	BUG_ON(!list_empty(&kc->io_jobs));
 	BUG_ON(!list_empty(&kc->pages_jobs));
 	destroy_workqueue(kc->kcopyd_wq);

From patchwork Tue Apr 26 10:12:38 2022
X-Patchwork-Submitter: Nitesh Shetty
X-Patchwork-Id: 12826990
From: Nitesh Shetty
Subject: [PATCH v4 10/10] fs: add support for copy file range in zonefs
Date: Tue, 26 Apr 2022 15:42:38 +0530
Message-Id: <20220426101241.30100-11-nj.shetty@samsung.com>
In-Reply-To: <20220426101241.30100-1-nj.shetty@samsung.com>
From: Arnav Dawn

copy_file_range is implemented using copy offload, and copy offloading
to the device is enabled by default. To disable copy offloading, mount
with the "no_copy_offload" mount option. At present copy offload is
only used if the source and destination files are on the same block
device; otherwise the copy is completed by generic copy file range.

copy_file_range is implemented as follows:
 - write out pending writes on the src and dest files
 - drop the page cache for the dest file if it is a conv zone
 - copy the range using offload
 - update dest file info

For all failure cases we fall back to generic copy file range.
At present this implementation does not support conv zone aggregation.

Signed-off-by: Arnav Dawn
---
 fs/zonefs/super.c  | 178 ++++++++++++++++++++++++++++++++++++++++++++-
 fs/zonefs/zonefs.h |   1 +
 2 files changed, 178 insertions(+), 1 deletion(-)

diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
index b3b0b71fdf6c..60563b592bf2 100644
--- a/fs/zonefs/super.c
+++ b/fs/zonefs/super.c
@@ -901,6 +901,7 @@ static ssize_t zonefs_file_dio_write(struct kiocb *iocb, struct iov_iter *from)
 	else
 		ret = iomap_dio_rw(iocb, from, &zonefs_iomap_ops,
 				   &zonefs_write_dio_ops, 0, 0);
+
 	if (zi->i_ztype == ZONEFS_ZTYPE_SEQ &&
 	    (ret > 0 || ret == -EIOCBQUEUED)) {
 		if (ret > 0)
@@ -1189,6 +1190,171 @@ static int zonefs_file_release(struct inode *inode, struct file *file)
 	return 0;
 }

+static int zonefs_is_file_size_ok(struct inode *src_inode, struct inode *dst_inode,
+				  loff_t src_off, loff_t dst_off, size_t len)
+{
+	loff_t size, endoff;
+
+	size = i_size_read(src_inode);
+	/* Don't copy beyond source file EOF. */
+	if (src_off + len > size) {
+		zonefs_err(src_inode->i_sb, "Copy beyond EOF (%llu + %zu > %llu)\n",
+			   src_off, len, size);
+		return -EOPNOTSUPP;
+	}
+
+	endoff = dst_off + len;
+	if (inode_newsize_ok(dst_inode, endoff))
+		return -EOPNOTSUPP;
+
+	return 0;
+}
+
+static ssize_t __zonefs_send_copy(struct zonefs_inode_info *src_zi, loff_t src_off,
+				  struct zonefs_inode_info *dst_zi, loff_t dst_off,
+				  size_t len)
+{
+	struct block_device *src_bdev = src_zi->i_vnode.i_sb->s_bdev;
+	struct block_device *dst_bdev = dst_zi->i_vnode.i_sb->s_bdev;
+	struct range_entry *rlist;
+	int ret = -EIO;
+
+	rlist = kmalloc(sizeof(*rlist), GFP_KERNEL);
+	rlist[0].dst = (dst_zi->i_zsector << SECTOR_SHIFT) + dst_off;
+	rlist[0].src = (src_zi->i_zsector << SECTOR_SHIFT) + src_off;
+	rlist[0].len = len;
+	rlist[0].comp_len = 0;
+	ret = blkdev_issue_copy(src_bdev, 1, rlist, dst_bdev, GFP_KERNEL);
+	if (ret) {
+		if (rlist[0].comp_len != len) {
+			ret = rlist[0].comp_len;
+			kfree(rlist);
+			return ret;
+		}
+	}
+	kfree(rlist);
+	return len;
+}
+
+static ssize_t __zonefs_copy_file_range(struct file *src_file, loff_t src_off,
+					struct file *dst_file, loff_t dst_off,
+					size_t len, unsigned int flags)
+{
+	struct inode *src_inode = file_inode(src_file);
+	struct inode *dst_inode = file_inode(dst_file);
+	struct zonefs_inode_info *src_zi = ZONEFS_I(src_inode);
+	struct zonefs_inode_info *dst_zi = ZONEFS_I(dst_inode);
+	struct block_device *src_bdev = src_inode->i_sb->s_bdev;
+	struct block_device *dst_bdev = dst_inode->i_sb->s_bdev;
+	struct super_block *src_sb = src_inode->i_sb;
+	struct zonefs_sb_info *src_sbi = ZONEFS_SB(src_sb);
+	struct super_block *dst_sb = dst_inode->i_sb;
+	struct zonefs_sb_info *dst_sbi = ZONEFS_SB(dst_sb);
+	ssize_t ret = -EIO, bytes;
+
+	if (src_bdev != dst_bdev) {
+		zonefs_err(src_sb, "Copying files across two devices\n");
+		return -EXDEV;
+	}
+
+	/*
+	 * Some of the checks below will return -EOPNOTSUPP,
+	 * which will force a generic copy
+	 */
+	if (!(src_sbi->s_mount_opts & ZONEFS_MNTOPT_COPY_FILE) ||
+	    !(dst_sbi->s_mount_opts & ZONEFS_MNTOPT_COPY_FILE))
+		return -EOPNOTSUPP;
+
+	/* Start by sync'ing the source and destination files for conv zones */
+	if (src_zi->i_ztype == ZONEFS_ZTYPE_CNV) {
+		ret = file_write_and_wait_range(src_file, src_off, (src_off + len));
+		if (ret < 0) {
+			zonefs_err(src_sb, "failed to write source file (%zd)\n", ret);
+			goto out;
+		}
+	}
+	if (dst_zi->i_ztype == ZONEFS_ZTYPE_CNV) {
+		ret = file_write_and_wait_range(dst_file, dst_off, (dst_off + len));
+		if (ret < 0) {
+			zonefs_err(dst_sb, "failed to write destination file (%zd)\n", ret);
+			goto out;
+		}
+	}
+
+	mutex_lock(&dst_zi->i_truncate_mutex);
+	if (len > dst_zi->i_max_size - dst_zi->i_wpoffset) {
+		/* Adjust length */
+		len -= dst_zi->i_max_size - dst_zi->i_wpoffset;
+		if (len <= 0) {
+			mutex_unlock(&dst_zi->i_truncate_mutex);
+			return -EOPNOTSUPP;
+		}
+	}
+	if (dst_off != dst_zi->i_wpoffset) {
+		mutex_unlock(&dst_zi->i_truncate_mutex);
+		return -EOPNOTSUPP; /* copy not at zone write ptr */
+	}
+
+	mutex_lock(&src_zi->i_truncate_mutex);
+	ret = zonefs_is_file_size_ok(src_inode, dst_inode, src_off, dst_off, len);
+	if (ret < 0) {
+		mutex_unlock(&src_zi->i_truncate_mutex);
+		mutex_unlock(&dst_zi->i_truncate_mutex);
+		goto out;
+	}
+	mutex_unlock(&src_zi->i_truncate_mutex);
+
+	/* Drop dst file cached pages for a conv zone */
+	if (dst_zi->i_ztype == ZONEFS_ZTYPE_CNV) {
+		ret = invalidate_inode_pages2_range(dst_inode->i_mapping,
+						    dst_off >> PAGE_SHIFT,
+						    (dst_off + len) >> PAGE_SHIFT);
+		if (ret < 0) {
+			zonefs_err(dst_sb, "Failed to invalidate inode pages (%zd)\n", ret);
+			ret = 0;
+		}
+	}
+
+	bytes = __zonefs_send_copy(src_zi, src_off, dst_zi, dst_off, len);
+	ret += bytes;
+
+	file_update_time(dst_file);
+	zonefs_update_stats(dst_inode, dst_off + bytes);
+	zonefs_i_size_write(dst_inode, dst_off + bytes);
+	dst_zi->i_wpoffset += bytes;
+	mutex_unlock(&dst_zi->i_truncate_mutex);
+
+	/*
+	 * if we still have some bytes left, do splice copy
+	 */
+	if (bytes && (bytes < len)) {
+		zonefs_info(src_sb, "Final partial copy of %zu bytes\n", len);
+		bytes = do_splice_direct(src_file, &src_off, dst_file,
+					 &dst_off, len, flags);
+		if (bytes > 0)
+			ret += bytes;
+		else
+			zonefs_info(src_sb, "Failed partial copy (%zd)\n", bytes);
+	}
+
+out:
+	return ret;
+}
+
+static ssize_t zonefs_copy_file_range(struct file *src_file, loff_t src_off,
+				      struct file *dst_file, loff_t dst_off,
+				      size_t len, unsigned int flags)
+{
+	ssize_t ret;
+
+	ret = __zonefs_copy_file_range(src_file, src_off, dst_file, dst_off,
+				       len, flags);
+	if (ret == -EOPNOTSUPP || ret == -EXDEV)
+		ret = generic_copy_file_range(src_file, src_off, dst_file,
+					      dst_off, len, flags);
+	return ret;
+}
+
 static const struct file_operations zonefs_file_operations = {
 	.open		= zonefs_file_open,
 	.release	= zonefs_file_release,
@@ -1200,6 +1366,7 @@ static const struct file_operations zonefs_file_operations = {
 	.splice_read	= generic_file_splice_read,
 	.splice_write	= iter_file_splice_write,
 	.iopoll		= iocb_bio_iopoll,
+	.copy_file_range = zonefs_copy_file_range,
 };

 static struct kmem_cache *zonefs_inode_cachep;
@@ -1262,7 +1429,7 @@ static int zonefs_statfs(struct dentry *dentry, struct kstatfs *buf)

 enum {
 	Opt_errors_ro, Opt_errors_zro, Opt_errors_zol, Opt_errors_repair,
-	Opt_explicit_open, Opt_err,
+	Opt_explicit_open, Opt_no_copy_offload, Opt_err,
 };

 static const match_table_t tokens = {
@@ -1271,6 +1438,7 @@ static const match_table_t tokens = {
 	{ Opt_errors_zol,	"errors=zone-offline"},
 	{ Opt_errors_repair,	"errors=repair"},
 	{ Opt_explicit_open,	"explicit-open" },
+	{ Opt_no_copy_offload,	"no_copy_offload" },
 	{ Opt_err,		NULL}
 };

@@ -1280,6 +1448,7 @@ static int zonefs_parse_options(struct super_block *sb, char *options)
 	substring_t args[MAX_OPT_ARGS];
 	char *p;

+	sbi->s_mount_opts |= ZONEFS_MNTOPT_COPY_FILE;
 	if (!options)
 		return 0;

@@ -1310,6 +1479,9 @@ static int zonefs_parse_options(struct super_block *sb, char *options)
 		case Opt_explicit_open:
 			sbi->s_mount_opts |= ZONEFS_MNTOPT_EXPLICIT_OPEN;
 			break;
+		case Opt_no_copy_offload:
+			sbi->s_mount_opts &= ~ZONEFS_MNTOPT_COPY_FILE;
+			break;
 		default:
 			return -EINVAL;
 		}
@@ -1330,6 +1502,8 @@ static int zonefs_show_options(struct seq_file *seq, struct dentry *root)
 		seq_puts(seq, ",errors=zone-offline");
 	if (sbi->s_mount_opts & ZONEFS_MNTOPT_ERRORS_REPAIR)
 		seq_puts(seq, ",errors=repair");
+	if (sbi->s_mount_opts & ZONEFS_MNTOPT_COPY_FILE)
+		seq_puts(seq, ",copy_offload");

 	return 0;
 }
@@ -1769,6 +1943,8 @@ static int zonefs_fill_super(struct super_block *sb, void *data, int silent)
 	atomic_set(&sbi->s_active_seq_files, 0);
 	sbi->s_max_active_seq_files = bdev_max_active_zones(sb->s_bdev);

+	/* set copy support by default */
+	sbi->s_mount_opts |= ZONEFS_MNTOPT_COPY_FILE;
 	ret = zonefs_read_super(sb);
 	if (ret)
 		return ret;
diff --git a/fs/zonefs/zonefs.h b/fs/zonefs/zonefs.h
index 4b3de66c3233..efa6632c4b6a 100644
--- a/fs/zonefs/zonefs.h
+++ b/fs/zonefs/zonefs.h
@@ -162,6 +162,7 @@ enum zonefs_features {
 	(ZONEFS_MNTOPT_ERRORS_RO | ZONEFS_MNTOPT_ERRORS_ZRO | \
 	 ZONEFS_MNTOPT_ERRORS_ZOL | ZONEFS_MNTOPT_ERRORS_REPAIR)
 #define ZONEFS_MNTOPT_EXPLICIT_OPEN	(1 << 4) /* Explicit open/close of zones on open/close */
+#define ZONEFS_MNTOPT_COPY_FILE		(1 << 5) /* enable copy file range offload to kernel */

 /*
  * In-memory Super block information.