From patchwork Tue Apr 11 08:10:32 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Anuj Gupta
X-Patchwork-Id: 13207177
From: Anuj Gupta
To: Jens Axboe, Alasdair Kergon, Mike Snitzer, dm-devel@redhat.com,
	Keith Busch, Christoph Hellwig, Sagi Grimberg, James Smart,
	Chaitanya Kulkarni, Alexander Viro, Christian Brauner
Cc: bvanassche@acm.org, hare@suse.de, ming.lei@redhat.com,
	dlemoal@kernel.org, anuj20.g@samsung.com, joshi.k@samsung.com,
	nitheshshetty@gmail.com, gost.dev@samsung.com, Nitesh Shetty,
	Javier González, linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH v9 5/9] nvme: add copy offload support
Date: Tue, 11 Apr 2023 13:40:32 +0530
Message-Id: <20230411081041.5328-6-anuj20.g@samsung.com>
X-Mailer: git-send-email 2.35.1.500.gb896f729e2
In-Reply-To: <20230411081041.5328-1-anuj20.g@samsung.com>
References: <20230411081041.5328-1-anuj20.g@samsung.com>
Precedence: bulk
X-Mailing-List: linux-fsdevel@vger.kernel.org

From: Nitesh Shetty

For devices supporting native copy, the nvme driver receives read and
write requests with the REQ_COPY operation flag set. For the read
request, the driver populates the payload with the source information.
For the write request, the driver converts it to an NVMe Copy command
using the source information carried in the payload and submits the
command to the device. The current design supports only a single source
range. This design is courtesy of Mikulas Patocka's token-based copy.

Trace event support is added for the copy command (nvme_cmd_copy), and
the device copy limits are set in the request queue limits.

Signed-off-by: Kanchan Joshi
Signed-off-by: Nitesh Shetty
Signed-off-by: Javier González
Signed-off-by: Anuj Gupta
---
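Reviewer note, not part of the patch: the NVMe Copy command encodes NR
(number of ranges) and NLB (number of logical blocks) as 0's-based
values, which is why the core.c hunk below sets cmnd->copy.nr_range = 0
and range[0].nlb = n_lba - 1 for a single range of n_lba blocks. Below
is a minimal sketch of how those fields would be filled once multiple
source ranges are supported; example_fill_ranges() is a hypothetical
helper, not something this patch adds:

static void example_fill_ranges(struct nvme_command *cmnd,
		struct nvme_copy_range *range, const u64 *slba,
		const u64 *nlb, unsigned short nr_range)
{
	unsigned short i;

	for (i = 0; i < nr_range; i++) {
		/* source start LBA of range i */
		range[i].slba = cpu_to_le64(slba[i]);
		/* NLB holds "number of blocks - 1" (0's-based) */
		range[i].nlb = cpu_to_le16(nlb[i] - 1);
	}
	/* NR likewise holds "number of ranges - 1" */
	cmnd->copy.nr_range = nr_range - 1;
}
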
 drivers/nvme/host/constants.c |   1 +
 drivers/nvme/host/core.c      | 106 +++++++++++++++++++++++++++++++++-
 drivers/nvme/host/fc.c        |   5 ++
 drivers/nvme/host/nvme.h      |   7 +++
 drivers/nvme/host/pci.c       |  27 ++++-
 drivers/nvme/host/rdma.c      |   7 +++
 drivers/nvme/host/tcp.c       |  16 +++++
 drivers/nvme/host/trace.c     |  19 ++++++
 include/linux/nvme.h          |  43 +++++++++++++-
 9 files changed, 223 insertions(+), 8 deletions(-)

diff --git a/drivers/nvme/host/constants.c b/drivers/nvme/host/constants.c
index bc523ca02254..01be882b726f 100644
--- a/drivers/nvme/host/constants.c
+++ b/drivers/nvme/host/constants.c
@@ -19,6 +19,7 @@ static const char * const nvme_ops[] = {
 	[nvme_cmd_resv_report] = "Reservation Report",
 	[nvme_cmd_resv_acquire] = "Reservation Acquire",
 	[nvme_cmd_resv_release] = "Reservation Release",
+	[nvme_cmd_copy] = "Copy Offload",
 	[nvme_cmd_zone_mgmt_send] = "Zone Management Send",
 	[nvme_cmd_zone_mgmt_recv] = "Zone Management Receive",
 	[nvme_cmd_zone_append] = "Zone Management Append",
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 3ffdc80ebb6c..745c512c324b 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -754,6 +754,80 @@ static inline void nvme_setup_flush(struct nvme_ns *ns,
 	cmnd->common.nsid = cpu_to_le32(ns->head->ns_id);
 }
 
+static inline blk_status_t nvme_setup_copy_read(struct nvme_ns *ns,
+		struct request *req)
+{
+	struct bio *bio = req->bio;
+	struct nvme_copy_token *token = bvec_kmap_local(&bio->bi_io_vec[0]);
+
+	memcpy(token->subsys, "nvme", 4);
+	token->ns = ns;
+	token->src_sector = bio->bi_iter.bi_sector;
+	token->sectors = bio->bi_iter.bi_size >> 9;
+
+	return BLK_STS_OK;
+}
+
+static inline blk_status_t nvme_setup_copy_write(struct nvme_ns *ns,
+		struct request *req, struct nvme_command *cmnd)
+{
+	struct nvme_copy_range *range = NULL;
+	struct bio *bio = req->bio;
+	struct nvme_copy_token *token = bvec_kmap_local(&bio->bi_io_vec[0]);
+	sector_t src_sector, dst_sector, n_sectors;
+	u64 src_lba, dst_lba, n_lba;
+	unsigned short nr_range = 1;
+	u16 control = 0;
+
+	if (unlikely(memcmp(token->subsys, "nvme", 4)))
+		return BLK_STS_NOTSUPP;
+	if (unlikely(token->ns != ns))
+		return BLK_STS_NOTSUPP;
+
+	src_sector = token->src_sector;
+	dst_sector = bio->bi_iter.bi_sector;
+	n_sectors = token->sectors;
+	if (WARN_ON(n_sectors != bio->bi_iter.bi_size >> 9))
+		return BLK_STS_NOTSUPP;
+
+	src_lba = nvme_sect_to_lba(ns, src_sector);
+	dst_lba = nvme_sect_to_lba(ns, dst_sector);
+	n_lba = nvme_sect_to_lba(ns, n_sectors);
+
+	if (WARN_ON(!n_lba))
+		return BLK_STS_NOTSUPP;
+
+	if (req->cmd_flags & REQ_FUA)
+		control |= NVME_RW_FUA;
+
+	if (req->cmd_flags & REQ_FAILFAST_DEV)
+		control |= NVME_RW_LR;
+
+	memset(cmnd, 0, sizeof(*cmnd));
+	cmnd->copy.opcode = nvme_cmd_copy;
+	cmnd->copy.nsid = cpu_to_le32(ns->head->ns_id);
+	cmnd->copy.sdlba = cpu_to_le64(dst_lba);
+
+	range = kmalloc_array(nr_range, sizeof(*range),
+			GFP_ATOMIC | __GFP_NOWARN);
+	if (!range)
+		return BLK_STS_RESOURCE;
+
+	range[0].slba = cpu_to_le64(src_lba);
+	range[0].nlb = cpu_to_le16(n_lba - 1);
+
+	cmnd->copy.nr_range = 0;
+
+	req->special_vec.bv_page = virt_to_page(range);
+	req->special_vec.bv_offset = offset_in_page(range);
+	req->special_vec.bv_len = sizeof(*range) * nr_range;
+	req->rq_flags |= RQF_SPECIAL_PAYLOAD;
+
+	cmnd->copy.control = cpu_to_le16(control);
+
+	return BLK_STS_OK;
+}
+
 static blk_status_t nvme_setup_discard(struct nvme_ns *ns, struct request *req,
 		struct nvme_command *cmnd)
 {
@@ -988,10 +1062,16 @@ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req)
 		ret = nvme_setup_discard(ns, req, cmd);
 		break;
 	case REQ_OP_READ:
-		ret = nvme_setup_rw(ns, req, cmd, nvme_cmd_read);
+		if (unlikely(req->cmd_flags & REQ_COPY))
+			ret = nvme_setup_copy_read(ns, req);
+		else
+			ret = nvme_setup_rw(ns, req, cmd, nvme_cmd_read);
 		break;
 	case REQ_OP_WRITE:
-		ret = nvme_setup_rw(ns, req, cmd, nvme_cmd_write);
+		if (unlikely(req->cmd_flags & REQ_COPY))
+			ret = nvme_setup_copy_write(ns, req, cmd);
+		else
+			ret = nvme_setup_rw(ns, req, cmd, nvme_cmd_write);
 		break;
 	case REQ_OP_ZONE_APPEND:
 		ret = nvme_setup_rw(ns, req, cmd, nvme_cmd_zone_append);
@@ -1698,6 +1778,26 @@ static void nvme_config_discard(struct gendisk *disk, struct nvme_ns *ns)
 	blk_queue_max_write_zeroes_sectors(queue, UINT_MAX);
 }
 
+static void nvme_config_copy(struct gendisk *disk, struct nvme_ns *ns,
+		struct nvme_id_ns *id)
+{
+	struct nvme_ctrl *ctrl = ns->ctrl;
+	struct request_queue *q = disk->queue;
+
+	if (!(ctrl->oncs & NVME_CTRL_ONCS_COPY)) {
+		blk_queue_max_copy_sectors_hw(q, 0);
+		blk_queue_flag_clear(QUEUE_FLAG_COPY, q);
+		return;
+	}
+
+	/* setting copy limits */
+	if (blk_queue_flag_test_and_set(QUEUE_FLAG_COPY, q))
+		return;
+
+	blk_queue_max_copy_sectors_hw(q,
+		nvme_lba_to_sect(ns, le16_to_cpu(id->mssrl)));
+}
+
 static bool nvme_ns_ids_equal(struct nvme_ns_ids *a, struct nvme_ns_ids *b)
 {
 	return uuid_equal(&a->uuid, &b->uuid) &&
@@ -1897,6 +1997,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
 	set_capacity_and_notify(disk, capacity);
 
 	nvme_config_discard(disk, ns);
+	nvme_config_copy(disk, ns, id);
 	blk_queue_max_write_zeroes_sectors(disk->queue,
 					   ns->ctrl->max_zeroes_sectors);
 }
@@ -5346,6 +5447,7 @@ static inline void _nvme_check_size(void)
 	BUILD_BUG_ON(sizeof(struct nvme_download_firmware) != 64);
 	BUILD_BUG_ON(sizeof(struct nvme_format_cmd) != 64);
 	BUILD_BUG_ON(sizeof(struct nvme_dsm_cmd) != 64);
+	BUILD_BUG_ON(sizeof(struct nvme_copy_command) != 64);
 	BUILD_BUG_ON(sizeof(struct nvme_write_zeroes_cmd) != 64);
 	BUILD_BUG_ON(sizeof(struct nvme_abort_cmd) != 64);
 	BUILD_BUG_ON(sizeof(struct nvme_get_log_page_command) != 64);
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 2ed75923507d..db2e22b4ca7f 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -2807,6 +2807,11 @@ nvme_fc_queue_rq(struct blk_mq_hw_ctx *hctx,
 	if (ret)
 		return ret;
 
+	if (unlikely((rq->cmd_flags & REQ_COPY) &&
+				(req_op(rq) == REQ_OP_READ))) {
+		blk_mq_end_request(rq, BLK_STS_OK);
+		return BLK_STS_OK;
+	}
 	/*
 	 * nvme core doesn't quite treat the rq opaquely. Commands such
 	 * as WRITE ZEROES will return a non-zero rq payload_bytes yet
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index bf46f122e9e1..257f91ee1f2d 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -500,6 +500,13 @@ struct nvme_ns {
 
 };
 
+struct nvme_copy_token {
+	char subsys[4];
+	struct nvme_ns *ns;
+	u64 src_sector;
+	u64 sectors;
+};
+
 /* NVMe ns supports metadata actions by the controller (generate/strip) */
 static inline bool nvme_ns_has_pi(struct nvme_ns *ns)
 {
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 282d808400c5..9fdf56256cdb 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -496,16 +496,19 @@ static inline void nvme_sq_copy_cmd(struct nvme_queue *nvmeq,
 		nvmeq->sq_tail = 0;
 }
 
-static void nvme_commit_rqs(struct blk_mq_hw_ctx *hctx)
+static inline void nvme_commit_sq_db(struct nvme_queue *nvmeq)
 {
-	struct nvme_queue *nvmeq = hctx->driver_data;
-
 	spin_lock(&nvmeq->sq_lock);
 	if (nvmeq->sq_tail != nvmeq->last_sq_tail)
 		nvme_write_sq_db(nvmeq, true);
 	spin_unlock(&nvmeq->sq_lock);
 }
 
+static void nvme_commit_rqs(struct blk_mq_hw_ctx *hctx)
+{
+	nvme_commit_sq_db(hctx->driver_data);
+}
+
 static inline bool nvme_pci_use_sgls(struct nvme_dev *dev, struct request *req,
 				     int nseg)
 {
@@ -849,6 +852,12 @@ static blk_status_t nvme_prep_rq(struct nvme_dev *dev, struct request *req)
 	if (ret)
 		return ret;
 
+	if (unlikely((req->cmd_flags & REQ_COPY) &&
+				(req_op(req) == REQ_OP_READ))) {
+		blk_mq_start_request(req);
+		return BLK_STS_OK;
+	}
+
 	if (blk_rq_nr_phys_segments(req)) {
 		ret = nvme_map_data(dev, req, &iod->cmd);
 		if (ret)
@@ -895,6 +904,18 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	ret = nvme_prep_rq(dev, req);
 	if (unlikely(ret))
 		return ret;
+	if (unlikely((req->cmd_flags & REQ_COPY) &&
+			(req_op(req) == REQ_OP_READ))) {
+		blk_mq_set_request_complete(req);
+		blk_mq_end_request(req, BLK_STS_OK);
+		/* Commit the sq if copy read was the last req in the list,
+		 * as copy read doesn't update sq db
+		 */
+		if (bd->last)
+			nvme_commit_sq_db(nvmeq);
+		return ret;
+	}
+
 	spin_lock(&nvmeq->sq_lock);
 	nvme_sq_copy_cmd(nvmeq, &iod->cmd);
 	nvme_write_sq_db(nvmeq, bd->last);
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index bbad26b82b56..a8bf2a87f42a 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -2043,6 +2043,13 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 
 	nvme_start_request(rq);
 
+	if (unlikely((rq->cmd_flags & REQ_COPY) &&
+				(req_op(rq) == REQ_OP_READ))) {
+		blk_mq_end_request(rq, BLK_STS_OK);
+		ret = BLK_STS_OK;
+		goto unmap_qe;
+	}
+
 	if (IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY) &&
 	    queue->pi_support &&
 	    (c->common.opcode == nvme_cmd_write ||
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 49c9e7bc9116..aaeb761c56a1 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -2370,6 +2370,11 @@ static blk_status_t nvme_tcp_setup_cmd_pdu(struct nvme_ns *ns,
 	if (ret)
 		return ret;
 
+	if (unlikely((rq->cmd_flags & REQ_COPY) &&
+				(req_op(rq) == REQ_OP_READ))) {
+		return BLK_STS_OK;
+	}
+
 	req->state = NVME_TCP_SEND_CMD_PDU;
 	req->status = cpu_to_le16(NVME_SC_SUCCESS);
 	req->offset = 0;
@@ -2438,6 +2443,17 @@ static blk_status_t nvme_tcp_queue_rq(struct blk_mq_hw_ctx *hctx,
 
 	nvme_start_request(rq);
 
+	if (unlikely((rq->cmd_flags & REQ_COPY) &&
+				(req_op(rq) == REQ_OP_READ))) {
+		blk_mq_set_request_complete(rq);
+		blk_mq_end_request(rq, BLK_STS_OK);
+		/* if copy read is the last req queue tcp reqs */
+		if (bd->last && nvme_tcp_queue_more(queue))
+			queue_work_on(queue->io_cpu, nvme_tcp_wq,
+					&queue->io_work);
+		return ret;
+	}
+
 	nvme_tcp_queue_request(req, true, bd->last);
 
 	return BLK_STS_OK;
diff --git a/drivers/nvme/host/trace.c b/drivers/nvme/host/trace.c
index 1c36fcedea20..da4a7494e5a7 100644
--- a/drivers/nvme/host/trace.c
+++ b/drivers/nvme/host/trace.c
@@ -150,6 +150,23 @@ static const char *nvme_trace_read_write(struct trace_seq *p, u8 *cdw10)
 	return ret;
 }
 
+static const char *nvme_trace_copy(struct trace_seq *p, u8 *cdw10)
+{
+	const char *ret = trace_seq_buffer_ptr(p);
+	u64 slba = get_unaligned_le64(cdw10);
+	u8 nr_range = get_unaligned_le16(cdw10 + 8);
+	u16 control = get_unaligned_le16(cdw10 + 10);
+	u32 dsmgmt = get_unaligned_le32(cdw10 + 12);
+	u32 reftag = get_unaligned_le32(cdw10 + 16);
+
+	trace_seq_printf(p,
+		"slba=%llu, nr_range=%u, ctrl=0x%x, dsmgmt=%u, reftag=%u",
+		slba, nr_range, control, dsmgmt, reftag);
+	trace_seq_putc(p, 0);
+
+	return ret;
+}
+
 static const char *nvme_trace_dsm(struct trace_seq *p, u8 *cdw10)
 {
 	const char *ret = trace_seq_buffer_ptr(p);
@@ -243,6 +260,8 @@ const char *nvme_trace_parse_nvm_cmd(struct trace_seq *p,
 		return nvme_trace_zone_mgmt_send(p, cdw10);
 	case nvme_cmd_zone_mgmt_recv:
 		return nvme_trace_zone_mgmt_recv(p, cdw10);
+	case nvme_cmd_copy:
+		return nvme_trace_copy(p, cdw10);
 	default:
 		return nvme_trace_common(p, cdw10);
 	}
diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index 779507ac750b..6582b26e532c 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -337,7 +337,7 @@ struct nvme_id_ctrl {
 	__u8			nvscc;
 	__u8			nwpc;
 	__le16			acwu;
-	__u8			rsvd534[2];
+	__le16			ocfs;
 	__le32			sgls;
 	__le32			mnan;
 	__u8			rsvd544[224];
@@ -365,6 +365,7 @@ enum {
 	NVME_CTRL_ONCS_WRITE_ZEROES		= 1 << 3,
 	NVME_CTRL_ONCS_RESERVATIONS		= 1 << 5,
 	NVME_CTRL_ONCS_TIMESTAMP		= 1 << 6,
+	NVME_CTRL_ONCS_COPY			= 1 << 8,
 	NVME_CTRL_VWC_PRESENT			= 1 << 0,
 	NVME_CTRL_OACS_SEC_SUPP			= 1 << 0,
 	NVME_CTRL_OACS_NS_MNGT_SUPP		= 1 << 3,
@@ -414,7 +415,10 @@ struct nvme_id_ns {
 	__le16			npdg;
 	__le16			npda;
 	__le16			nows;
-	__u8			rsvd74[18];
+	__le16			mssrl;
+	__le32			mcl;
+	__u8			msrc;
+	__u8			rsvd91[11];
 	__le32			anagrpid;
 	__u8			rsvd96[3];
 	__u8			nsattr;
@@ -796,6 +800,7 @@ enum nvme_opcode {
 	nvme_cmd_resv_report	= 0x0e,
 	nvme_cmd_resv_acquire	= 0x11,
 	nvme_cmd_resv_release	= 0x15,
+	nvme_cmd_copy		= 0x19,
 	nvme_cmd_zone_mgmt_send	= 0x79,
 	nvme_cmd_zone_mgmt_recv	= 0x7a,
 	nvme_cmd_zone_append	= 0x7d,
@@ -819,7 +824,8 @@ enum nvme_opcode {
 		nvme_opcode_name(nvme_cmd_resv_release),	\
 		nvme_opcode_name(nvme_cmd_zone_mgmt_send),	\
 		nvme_opcode_name(nvme_cmd_zone_mgmt_recv),	\
-		nvme_opcode_name(nvme_cmd_zone_append))
+		nvme_opcode_name(nvme_cmd_zone_append),		\
+		nvme_opcode_name(nvme_cmd_copy))
 
 
 
@@ -996,6 +1002,36 @@ struct nvme_dsm_range {
 	__le64			slba;
 };
 
+struct nvme_copy_command {
+	__u8			opcode;
+	__u8			flags;
+	__u16			command_id;
+	__le32			nsid;
+	__u64			rsvd2;
+	__le64			metadata;
+	union nvme_data_ptr	dptr;
+	__le64			sdlba;
+	__u8			nr_range;
+	__u8			rsvd12;
+	__le16			control;
+	__le16			rsvd13;
+	__le16			dspec;
+	__le32			ilbrt;
+	__le16			lbat;
+	__le16			lbatm;
+};
+
+struct nvme_copy_range {
+	__le64			rsvd0;
+	__le64			slba;
+	__le16			nlb;
+	__le16			rsvd18;
+	__le32			rsvd20;
+	__le32			eilbrt;
+	__le16			elbat;
+	__le16			elbatm;
+};
+
 struct nvme_write_zeroes_cmd {
 	__u8			opcode;
 	__u8			flags;
@@ -1757,6 +1793,7 @@ struct nvme_command {
 		struct nvme_download_firmware dlfw;
 		struct nvme_format_cmd format;
 		struct nvme_dsm_cmd dsm;
+		struct nvme_copy_command copy;
 		struct nvme_write_zeroes_cmd write_zeroes;
 		struct nvme_zone_mgmt_send_cmd zms;
 		struct nvme_zone_mgmt_recv_cmd zmr;