From patchwork Sun Dec 13 05:50:12 2020
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 11970481
From: Chaitanya Kulkarni
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: sagi@grimberg.me, hch@lst.de, damien.lemoal@wdc.com, Chaitanya Kulkarni
Subject: [PATCH V6 1/6] block: export bio_add_hw_pages()
Date: Sat, 12 Dec 2020 21:50:12 -0800
Message-Id: <20201213055017.7141-2-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20201213055017.7141-1-chaitanya.kulkarni@wdc.com>
References: <20201213055017.7141-1-chaitanya.kulkarni@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

To implement the NVMe Zone Append command on the NVMeOF target side for
generic zoned block devices exposed through the NVMe Zoned Namespaces
interface, we need to build bios within the hardware limits, i.e. use
bio_add_hw_page() with queue_max_zone_append_sectors() instead of
bio_add_page(). Without this export, the NVMeOF target would have to go
through bio_add_hw_page()'s caller bio_iov_iter_get_pages(), which
results in extra, inefficient work.

Export the API so that the NVMeOF ZBD over ZNS backend can use it to
build Zone Append bios.

Signed-off-by: Chaitanya Kulkarni
---
 block/bio.c            | 1 +
 block/blk.h            | 4 ----
 include/linux/blkdev.h | 4 ++++
 3 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index fa01bef35bb1..eafd97c6c7fd 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -826,6 +826,7 @@ int bio_add_hw_page(struct request_queue *q, struct bio *bio,
 	bio->bi_iter.bi_size += len;
 	return len;
 }
+EXPORT_SYMBOL(bio_add_hw_page);
 
 /**
  * bio_add_pc_page - attempt to add page to passthrough bio
diff --git a/block/blk.h b/block/blk.h
index e05507a8d1e3..1fdb8d5d8590 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -428,8 +428,4 @@ static inline void part_nr_sects_write(struct hd_struct *part, sector_t size)
 #endif
 }
 
-int bio_add_hw_page(struct request_queue *q, struct bio *bio,
-		struct page *page, unsigned int len, unsigned int offset,
-		unsigned int max_sectors, bool *same_page);
-
 #endif /* BLK_INTERNAL_H */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 05b346a68c2e..2bdaa7cacfa3 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -2023,4 +2023,8 @@ int fsync_bdev(struct block_device *bdev);
 struct super_block *freeze_bdev(struct block_device *bdev);
 int thaw_bdev(struct block_device *bdev, struct super_block *sb);
 
+int bio_add_hw_page(struct request_queue *q, struct bio *bio,
+		struct page *page, unsigned int len, unsigned int offset,
+		unsigned int max_sectors, bool *same_page);
+
 #endif /* _LINUX_BLKDEV_H */
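For context, a minimal sketch of how a zone-append path is expected to
consume the exported helper (illustrative only; the wrapper name and its
error handling are made up here, while bio_add_hw_page() and
queue_max_zone_append_sectors() are the real interfaces named above):

	/* Sketch: add one page to a zone-append bio within the HW limit. */
	static int example_zone_append_add_page(struct request_queue *q,
			struct bio *bio, struct page *page,
			unsigned int len, unsigned int off)
	{
		bool same_page = false;

		/*
		 * bio_add_hw_page() returns the number of bytes added;
		 * a short return means the zone-append limit was hit.
		 */
		if (bio_add_hw_page(q, bio, page, len, off,
				    queue_max_zone_append_sectors(q),
				    &same_page) != len)
			return -EINVAL;
		if (same_page)	/* page already covered by the last bvec */
			put_page(page);
		return 0;
	}

This mirrors the usage that patch 4 of this series adds in the ZNS
backend's zone-append handler.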
From patchwork Sun Dec 13 05:50:13 2020
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 11970487
From: Chaitanya Kulkarni
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: sagi@grimberg.me, hch@lst.de, damien.lemoal@wdc.com, Chaitanya Kulkarni
Subject: [PATCH V6 2/6] nvmet: add lba to sect conversion helpers
Date: Sat, 12 Dec 2020 21:50:13 -0800
Message-Id: <20201213055017.7141-3-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20201213055017.7141-1-chaitanya.kulkarni@wdc.com>
References: <20201213055017.7141-1-chaitanya.kulkarni@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

In this preparation patch we add helpers to convert LBAs to sectors and
sectors to LBAs. This is needed to eliminate code duplication in the
ZBD backend. Use these helpers in the block device backend.
Signed-off-by: Chaitanya Kulkarni
---
 drivers/nvme/target/io-cmd-bdev.c |  8 +++-----
 drivers/nvme/target/nvmet.h       | 10 ++++++++++
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 125dde3f410e..23095bdfce06 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -256,8 +256,7 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
 	if (is_pci_p2pdma_page(sg_page(req->sg)))
 		op |= REQ_NOMERGE;
 
-	sector = le64_to_cpu(req->cmd->rw.slba);
-	sector <<= (req->ns->blksize_shift - 9);
+	sector = nvmet_lba_to_sect(req->ns, req->cmd->rw.slba);
 
 	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
 		bio = &req->b.inline_bio;
@@ -345,7 +344,7 @@ static u16 nvmet_bdev_discard_range(struct nvmet_req *req,
 	int ret;
 
 	ret = __blkdev_issue_discard(ns->bdev,
-			le64_to_cpu(range->slba) << (ns->blksize_shift - 9),
+			nvmet_lba_to_sect(ns, range->slba),
 			le32_to_cpu(range->nlb) << (ns->blksize_shift - 9),
 			GFP_KERNEL, 0, bio);
 	if (ret && ret != -EOPNOTSUPP) {
@@ -414,8 +413,7 @@ static void nvmet_bdev_execute_write_zeroes(struct nvmet_req *req)
 	if (!nvmet_check_transfer_len(req, 0))
 		return;
 
-	sector = le64_to_cpu(write_zeroes->slba) <<
-			(req->ns->blksize_shift - 9);
+	sector = nvmet_lba_to_sect(req->ns, write_zeroes->slba);
 	nr_sector = (((sector_t)le16_to_cpu(write_zeroes->length) + 1) <<
 			(req->ns->blksize_shift - 9));
 
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 592763732065..8776dd1a0490 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -603,4 +603,14 @@ static inline bool nvmet_ns_has_pi(struct nvmet_ns *ns)
 	return ns->pi_type && ns->metadata_size == sizeof(struct t10_pi_tuple);
 }
 
+static inline __le64 nvmet_sect_to_lba(struct nvmet_ns *ns, sector_t sect)
+{
+	return cpu_to_le64(sect >> (ns->blksize_shift - SECTOR_SHIFT));
+}
+
+static inline sector_t nvmet_lba_to_sect(struct nvmet_ns *ns, __le64 lba)
+{
+	return le64_to_cpu(lba) << (ns->blksize_shift - SECTOR_SHIFT);
+}
+
 #endif /* _NVMET_H */
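As a quick sanity check of the shift arithmetic in these helpers
(illustrative numbers, not from the patch): for a namespace with a
4096-byte logical block size, blksize_shift is 12 and SECTOR_SHIFT is 9,
so one LBA covers 2^(12 - 9) = 8 sectors of 512 bytes:

	/* Worked example, assuming ns->blksize_shift == 12 (4 KiB blocks) */
	nvmet_lba_to_sect(ns, cpu_to_le64(16));	/* 16 << 3 == sector 128 */
	nvmet_sect_to_lba(ns, 128);		/* 128 >> 3 == LBA 16    */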
From patchwork Sun Dec 13 05:50:14 2020
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 11970483
From: Chaitanya Kulkarni
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: sagi@grimberg.me, hch@lst.de, damien.lemoal@wdc.com, Chaitanya Kulkarni
Subject: [PATCH V6 3/6] nvmet: add NVM command set identifier support
Date: Sat, 12 Dec 2020 21:50:14 -0800
Message-Id: <20201213055017.7141-4-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20201213055017.7141-1-chaitanya.kulkarni@wdc.com>
References: <20201213055017.7141-1-chaitanya.kulkarni@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

NVMe TP 4056 allows a controller to support different command sets. The
NVMeoF target currently only supports namespaces that contain
traditional logical blocks which may be randomly read and written. In
some applications there is value in exposing namespaces that contain
logical blocks with special access rules (e.g. a sequential-write-
required namespace such as a Zoned Namespace (ZNS)).

In order to support the Zoned Block Device (ZBD) backend, the
controller needs to support the ZNS Command Set Identifier (CSI). In
this preparation patch we adjust the code so that it can support
different command sets. We update the namespace data structure to store
the CSI value, which defaults to NVME_CSI_NVM, representing the
traditional logical block namespace type.

The CSI support is required to implement the ZBD backend over the NVMe
ZNS interface, since ZNS commands belong to a different command set
than the default one.
Signed-off-by: Chaitanya Kulkarni
---
 drivers/nvme/target/admin-cmd.c | 33 ++++++++++++++++++++-------------
 drivers/nvme/target/core.c      | 13 ++++++++++++-
 drivers/nvme/target/nvmet.h     |  1 +
 3 files changed, 33 insertions(+), 14 deletions(-)

diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 74620240ac47..f4c0f3aca485 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -176,19 +176,26 @@ static void nvmet_execute_get_log_cmd_effects_ns(struct nvmet_req *req)
 	if (!log)
 		goto out;
 
-	log->acs[nvme_admin_get_log_page] = cpu_to_le32(1 << 0);
-	log->acs[nvme_admin_identify] = cpu_to_le32(1 << 0);
-	log->acs[nvme_admin_abort_cmd] = cpu_to_le32(1 << 0);
-	log->acs[nvme_admin_set_features] = cpu_to_le32(1 << 0);
-	log->acs[nvme_admin_get_features] = cpu_to_le32(1 << 0);
-	log->acs[nvme_admin_async_event] = cpu_to_le32(1 << 0);
-	log->acs[nvme_admin_keep_alive] = cpu_to_le32(1 << 0);
-
-	log->iocs[nvme_cmd_read] = cpu_to_le32(1 << 0);
-	log->iocs[nvme_cmd_write] = cpu_to_le32(1 << 0);
-	log->iocs[nvme_cmd_flush] = cpu_to_le32(1 << 0);
-	log->iocs[nvme_cmd_dsm] = cpu_to_le32(1 << 0);
-	log->iocs[nvme_cmd_write_zeroes] = cpu_to_le32(1 << 0);
+	switch (req->cmd->get_log_page.csi) {
+	case NVME_CSI_NVM:
+		log->acs[nvme_admin_get_log_page] = cpu_to_le32(1 << 0);
+		log->acs[nvme_admin_identify] = cpu_to_le32(1 << 0);
+		log->acs[nvme_admin_abort_cmd] = cpu_to_le32(1 << 0);
+		log->acs[nvme_admin_set_features] = cpu_to_le32(1 << 0);
+		log->acs[nvme_admin_get_features] = cpu_to_le32(1 << 0);
+		log->acs[nvme_admin_async_event] = cpu_to_le32(1 << 0);
+		log->acs[nvme_admin_keep_alive] = cpu_to_le32(1 << 0);
+
+		log->iocs[nvme_cmd_read] = cpu_to_le32(1 << 0);
+		log->iocs[nvme_cmd_write] = cpu_to_le32(1 << 0);
+		log->iocs[nvme_cmd_flush] = cpu_to_le32(1 << 0);
+		log->iocs[nvme_cmd_dsm] = cpu_to_le32(1 << 0);
+		log->iocs[nvme_cmd_write_zeroes] = cpu_to_le32(1 << 0);
+		break;
+	default:
+		status = NVME_SC_INVALID_LOG_PAGE;
+		break;
+	}
 
 	status = nvmet_copy_to_sgl(req, 0, log, sizeof(*log));
 
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 8ce4d59cc9e7..672e4009f8d6 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -681,6 +681,7 @@ struct nvmet_ns *nvmet_ns_alloc(struct nvmet_subsys *subsys, u32 nsid)
 
 	uuid_gen(&ns->uuid);
 	ns->buffered_io = false;
+	ns->csi = NVME_CSI_NVM;
 
 	return ns;
 }
@@ -1103,6 +1104,16 @@ static inline u8 nvmet_cc_iocqes(u32 cc)
 	return (cc >> NVME_CC_IOCQES_SHIFT) & 0xf;
 }
 
+static inline bool nvmet_cc_css_check(u8 cc_css)
+{
+	switch (cc_css <<= NVME_CC_CSS_SHIFT) {
+	case NVME_CC_CSS_NVM:
+		return true;
+	default:
+		return false;
+	}
+}
+
 static void nvmet_start_ctrl(struct nvmet_ctrl *ctrl)
 {
 	lockdep_assert_held(&ctrl->lock);
@@ -1111,7 +1122,7 @@ static void nvmet_start_ctrl(struct nvmet_ctrl *ctrl)
 	    nvmet_cc_iocqes(ctrl->cc) != NVME_NVM_IOCQES ||
 	    nvmet_cc_mps(ctrl->cc) != 0 ||
 	    nvmet_cc_ams(ctrl->cc) != 0 ||
-	    nvmet_cc_css(ctrl->cc) != 0) {
+	    !nvmet_cc_css_check(nvmet_cc_css(ctrl->cc))) {
 		ctrl->csts = NVME_CSTS_CFS;
 		return;
 	}
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 8776dd1a0490..476b3cd91c65 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -81,6 +81,7 @@ struct nvmet_ns {
 	struct pci_dev		*p2p_dev;
 	int			pi_type;
 	int			metadata_size;
+	u8			csi;
 };
 
 static inline struct nvmet_ns *to_nvmet_ns(struct config_item *item)
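To make the shift in nvmet_cc_css_check() concrete (the constant values
below are my reading of the NVMe spec and the kernel's nvme.h, so treat
them as illustrative): CC.CSS occupies bits 06:04 of the controller
configuration register, nvmet_cc_css() extracts the raw 3-bit field, and
the check shifts it back into register position before comparing:

	/*
	 * Illustrative only, assuming NVME_CC_CSS_SHIFT == 4:
	 *   CC.CSS == 000b -> 0 << 4 == NVME_CC_CSS_NVM -> accepted
	 *   CC.CSS == 110b -> 6 << 4 == NVME_CC_CSS_CSI -> rejected here,
	 *                     accepted once the next patch adds that case
	 *   any other value -> controller start fails with CSTS.CFS set
	 */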
From patchwork Sun Dec 13 05:50:15 2020
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 11970485
From: Chaitanya Kulkarni
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: sagi@grimberg.me, hch@lst.de, damien.lemoal@wdc.com, Chaitanya Kulkarni
Subject: [PATCH V6 4/6] nvmet: add ZBD over ZNS backend support
Date: Sat, 12 Dec 2020 21:50:15 -0800
Message-Id: <20201213055017.7141-5-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20201213055017.7141-1-chaitanya.kulkarni@wdc.com>
References: <20201213055017.7141-1-chaitanya.kulkarni@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

NVMe TP 4053 – Zoned Namespaces (ZNS) allows host software to
communicate with a non-volatile memory subsystem using zones for
NVMe-protocol-based controllers. The NVMeOF target already supports ZNS
NVMe-protocol-compliant devices in passthru mode, but generic zoned
block devices such as Shingled Magnetic Recording (SMR) HDDs are not
based on the NVMe protocol.

This patch adds a ZNS backend to support ZBDs for the NVMeOF target.
The support includes implementing the new command set NVME_CSI_ZNS and
adding command handlers for the ZNS command set: NVMe Identify
Controller, NVMe Identify Namespace, NVMe Zone Append, NVMe Zone
Management Send and NVMe Zone Management Receive. With the new command
set identifier we also update the target command effects logs to
reflect the ZNS-compliant commands.

Signed-off-by: Chaitanya Kulkarni
---
 drivers/nvme/target/Makefile      |   1 +
 drivers/nvme/target/admin-cmd.c   |  26 +++
 drivers/nvme/target/core.c        |   1 +
 drivers/nvme/target/io-cmd-bdev.c |  33 ++-
 drivers/nvme/target/nvmet.h       |  38 ++++
 drivers/nvme/target/zns.c         | 357 ++++++++++++++++++++++++++++++
 6 files changed, 448 insertions(+), 8 deletions(-)
 create mode 100644 drivers/nvme/target/zns.c
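Before the diff, a worked example of the ZASL encoding implemented by
nvmet_zasl() in the new zns.c below (the device numbers are
illustrative; the 4 KiB unit follows from the MPSMIN = 0 comment in the
file):

	/* Sketch: queue_max_zone_append_sectors(q) == 1024 (512 KiB) */
	zasl = ilog2((1024 << 9) >> NVMET_MPSMIN_SHIFT); /* ilog2(128) == 7 */
	/* reported limit: 2^7 units of 2^12 bytes == 512 KiB */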
diff --git a/drivers/nvme/target/Makefile b/drivers/nvme/target/Makefile
index ebf91fc4c72e..9837e580fa7e 100644
--- a/drivers/nvme/target/Makefile
+++ b/drivers/nvme/target/Makefile
@@ -12,6 +12,7 @@ obj-$(CONFIG_NVME_TARGET_TCP)		+= nvmet-tcp.o
 nvmet-y		+= core.o configfs.o admin-cmd.o fabrics-cmd.o \
 			discovery.o io-cmd-file.o io-cmd-bdev.o
 nvmet-$(CONFIG_NVME_TARGET_PASSTHRU)	+= passthru.o
+nvmet-$(CONFIG_BLK_DEV_ZONED)		+= zns.o
 nvme-loop-y	+= loop.o
 nvmet-rdma-y	+= rdma.o
 nvmet-fc-y	+= fc.o
diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index f4c0f3aca485..6f5279b15aa6 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -192,6 +192,15 @@ static void nvmet_execute_get_log_cmd_effects_ns(struct nvmet_req *req)
 		log->iocs[nvme_cmd_dsm] = cpu_to_le32(1 << 0);
 		log->iocs[nvme_cmd_write_zeroes] = cpu_to_le32(1 << 0);
 		break;
+	case NVME_CSI_ZNS:
+		if (IS_ENABLED(CONFIG_BLK_DEV_ZONED)) {
+			u32 *iocs = log->iocs;
+
+			iocs[nvme_cmd_zone_append] = cpu_to_le32(1 << 0);
+			iocs[nvme_cmd_zone_mgmt_send] = cpu_to_le32(1 << 0);
+			iocs[nvme_cmd_zone_mgmt_recv] = cpu_to_le32(1 << 0);
+		}
+		break;
 	default:
 		status = NVME_SC_INVALID_LOG_PAGE;
 		break;
@@ -614,6 +623,7 @@ static u16 nvmet_copy_ns_identifier(struct nvmet_req *req, u8 type, u8 len,
 
 static void nvmet_execute_identify_desclist(struct nvmet_req *req)
 {
+	u16 nvme_cis_zns = NVME_CSI_ZNS;
 	u16 status = 0;
 	off_t off = 0;
 
@@ -638,6 +648,14 @@ static void nvmet_execute_identify_desclist(struct nvmet_req *req)
 		if (status)
 			goto out;
 	}
+	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED)) {
+		if (req->ns->csi == NVME_CSI_ZNS)
+			status = nvmet_copy_ns_identifier(req, NVME_NIDT_CSI,
+							  NVME_NIDT_CSI_LEN,
+							  &nvme_cis_zns, &off);
+		if (status)
+			goto out;
+	}
 
 	if (sg_zero_buffer(req->sg, req->sg_cnt, NVME_IDENTIFY_DATA_SIZE - off,
 			off) != NVME_IDENTIFY_DATA_SIZE - off)
@@ -655,8 +673,16 @@ static void nvmet_execute_identify(struct nvmet_req *req)
 	switch (req->cmd->identify.cns) {
 	case NVME_ID_CNS_NS:
 		return nvmet_execute_identify_ns(req);
+	case NVME_ID_CNS_CS_NS:
+		if (req->cmd->identify.csi == NVME_CSI_ZNS)
+			return nvmet_execute_identify_cns_cs_ns(req);
+		break;
 	case NVME_ID_CNS_CTRL:
 		return nvmet_execute_identify_ctrl(req);
+	case NVME_ID_CNS_CS_CTRL:
+		if (req->cmd->identify.csi == NVME_CSI_ZNS)
+			return nvmet_execute_identify_cns_cs_ctrl(req);
+		break;
 	case NVME_ID_CNS_NS_ACTIVE_LIST:
 		return nvmet_execute_identify_nslist(req);
 	case NVME_ID_CNS_NS_DESC_LIST:
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 672e4009f8d6..17a99c7134dc 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -1107,6 +1107,7 @@ static inline u8 nvmet_cc_iocqes(u32 cc)
 static inline bool nvmet_cc_css_check(u8 cc_css)
 {
 	switch (cc_css <<= NVME_CC_CSS_SHIFT) {
+	case NVME_CC_CSS_CSI:
 	case NVME_CC_CSS_NVM:
 		return true;
 	default:
diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 23095bdfce06..6178ef643962 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -63,6 +63,14 @@ static void nvmet_bdev_ns_enable_integrity(struct nvmet_ns *ns)
 	}
 }
 
+void nvmet_bdev_ns_disable(struct nvmet_ns *ns)
+{
+	if (ns->bdev) {
+		blkdev_put(ns->bdev, FMODE_WRITE | FMODE_READ);
+		ns->bdev = NULL;
+	}
+}
+
 int nvmet_bdev_ns_enable(struct nvmet_ns *ns)
 {
 	int ret;
@@ -86,15 +94,15 @@ int nvmet_bdev_ns_enable(struct nvmet_ns *ns)
 	if (IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY_T10))
 		nvmet_bdev_ns_enable_integrity(ns);
 
-	return 0;
-}
-
-void nvmet_bdev_ns_disable(struct nvmet_ns *ns)
-{
-	if (ns->bdev) {
-		blkdev_put(ns->bdev, FMODE_WRITE | FMODE_READ);
-		ns->bdev = NULL;
+	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) && bdev_is_zoned(ns->bdev)) {
+		if (!nvmet_bdev_zns_enable(ns)) {
+			nvmet_bdev_ns_disable(ns);
+			return -EINVAL;
+		}
+		ns->csi = NVME_CSI_ZNS;
 	}
+
+	return 0;
 }
 
 void nvmet_bdev_ns_revalidate(struct nvmet_ns *ns)
@@ -448,6 +456,15 @@ u16 nvmet_bdev_parse_io_cmd(struct nvmet_req *req)
 	case nvme_cmd_write_zeroes:
 		req->execute = nvmet_bdev_execute_write_zeroes;
 		return 0;
+	case nvme_cmd_zone_append:
+		req->execute = nvmet_bdev_execute_zone_append;
+		return 0;
+	case nvme_cmd_zone_mgmt_recv:
+		req->execute = nvmet_bdev_execute_zone_mgmt_recv;
+		return 0;
+	case nvme_cmd_zone_mgmt_send:
+		req->execute = nvmet_bdev_execute_zone_mgmt_send;
+		return 0;
 	default:
 		pr_err("unhandled cmd %d on qid %d\n", cmd->common.opcode,
 		       req->sq->qid);
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 476b3cd91c65..7361665585a2 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -252,6 +252,10 @@ struct nvmet_subsys {
 	unsigned int		admin_timeout;
 	unsigned int		io_timeout;
 #endif /* CONFIG_NVME_TARGET_PASSTHRU */
+
+#ifdef CONFIG_BLK_DEV_ZONED
+	u8			zasl;
+#endif /* CONFIG_BLK_DEV_ZONED */
 };
 
 static inline struct nvmet_subsys *to_subsys(struct config_item *item)
@@ -614,4 +618,38 @@ static inline sector_t nvmet_lba_to_sect(struct nvmet_ns *ns, __le64 lba)
 	return le64_to_cpu(lba) << (ns->blksize_shift - SECTOR_SHIFT);
 }
 
+#ifdef CONFIG_BLK_DEV_ZONED
+bool nvmet_bdev_zns_enable(struct nvmet_ns *ns);
+void nvmet_execute_identify_cns_cs_ctrl(struct nvmet_req *req);
+void nvmet_execute_identify_cns_cs_ns(struct nvmet_req *req);
+void nvmet_bdev_execute_zone_mgmt_recv(struct nvmet_req *req);
+void nvmet_bdev_execute_zone_mgmt_send(struct nvmet_req *req);
+void nvmet_bdev_execute_zone_append(struct nvmet_req *req);
+#else /* CONFIG_BLK_DEV_ZONED */
+static inline bool
+nvmet_bdev_zns_enable(struct nvmet_ns *ns)
+{
+	return false;
+}
+static inline void
+nvmet_execute_identify_cns_cs_ctrl(struct nvmet_req *req)
+{
+}
+static inline void
+nvmet_execute_identify_cns_cs_ns(struct nvmet_req *req)
+{
+}
+static inline void
+nvmet_bdev_execute_zone_mgmt_recv(struct nvmet_req *req)
+{
+}
+static inline void
+nvmet_bdev_execute_zone_mgmt_send(struct nvmet_req *req)
+{
+}
+static inline void
+nvmet_bdev_execute_zone_append(struct nvmet_req *req)
+{
+}
+#endif /* CONFIG_BLK_DEV_ZONED */
+
 #endif /* _NVMET_H */
diff --git a/drivers/nvme/target/zns.c b/drivers/nvme/target/zns.c
new file mode 100644
index 000000000000..3f9a5ac6a6c5
--- /dev/null
+++ b/drivers/nvme/target/zns.c
@@ -0,0 +1,357 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * NVMe ZNS-ZBD command implementation.
+ * Copyright (c) 2020-2021 HGST, a Western Digital Company.
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/nvme.h>
+#include <linux/blkdev.h>
+#include "nvmet.h"
+
+/*
+ * We set the Memory Page Size Minimum (MPSMIN) for target controller to 0
+ * which gets added by 12 in the nvme_enable_ctrl() which results in 2^12 = 4k
+ * as page_shift value. When calculating the ZASL use shift by 12.
+ */
+#define NVMET_MPSMIN_SHIFT	12
+
+static u16 nvmet_bdev_zns_checks(struct nvmet_req *req)
+{
+	u16 status = 0;
+
+	if (!bdev_is_zoned(req->ns->bdev)) {
+		status = NVME_SC_INVALID_NS | NVME_SC_DNR;
+		goto out;
+	}
+
+	if (req->cmd->zmr.zra != NVME_ZRA_ZONE_REPORT) {
+		status = NVME_SC_INVALID_FIELD;
+		goto out;
+	}
+
+	if (req->cmd->zmr.zrasf != NVME_ZRASF_ZONE_REPORT_ALL) {
+		status = NVME_SC_INVALID_FIELD;
+		goto out;
+	}
+
+	if (req->cmd->zmr.pr != NVME_REPORT_ZONE_PARTIAL)
+		status = NVME_SC_INVALID_FIELD;
+
+out:
+	return status;
+}
+
+/*
+ * ZNS related command implementation and helpers.
+ */
+static inline u8 nvmet_zasl(unsigned int zone_append_sects)
+{
+	/*
+	 * Zone Append Size Limit is the value expressed in the units
+	 * of minimum memory page size (i.e. 12) and is reported power of 2.
+	 */
+	return ilog2((zone_append_sects << 9) >> NVMET_MPSMIN_SHIFT);
+}
+
+static inline bool nvmet_zns_update_zasl(struct nvmet_ns *ns)
+{
+	struct request_queue *q = ns->bdev->bd_disk->queue;
+	u8 zasl = nvmet_zasl(queue_max_zone_append_sectors(q));
+
+	if (ns->subsys->zasl)
+		return ns->subsys->zasl < zasl ? false : true;
+
+	ns->subsys->zasl = zasl;
+	return true;
+}
+
+static int nvmet_bdev_validate_zns_zones_cb(struct blk_zone *z,
+					    unsigned int idx, void *data)
+{
+	struct blk_zone *zone = data;
+
+	memcpy(zone, z, sizeof(struct blk_zone));
+
+	return 0;
+}
+
+static bool nvmet_bdev_has_conv_zones(struct block_device *bdev)
+{
+	struct blk_zone zone;
+	int reported_zones;
+	unsigned int zno;
+
+	if (bdev->bd_disk->queue->conv_zones_bitmap)
+		return false;
+
+	for (zno = 0; zno < blk_queue_nr_zones(bdev->bd_disk->queue); zno++) {
+		reported_zones = blkdev_report_zones(bdev,
+					zno * bdev_zone_sectors(bdev), 1,
+					nvmet_bdev_validate_zns_zones_cb,
+					&zone);
+
+		if (reported_zones != 1)
+			return true;
+
+		if (zone.type == BLK_ZONE_TYPE_CONVENTIONAL)
+			return true;
+	}
+
+	return false;
+}
+
+bool nvmet_bdev_zns_enable(struct nvmet_ns *ns)
+{
+	if (nvmet_bdev_has_conv_zones(ns->bdev))
+		return false;
+
+	/*
+	 * For ZBC and ZAC devices, writes into sequential zones must be
+	 * aligned to the device physical block size. So use this value as
+	 * the logical block size to avoid errors.
+	 */
+	ns->blksize_shift = blksize_bits(bdev_physical_block_size(ns->bdev));
+
+	if (!nvmet_zns_update_zasl(ns))
+		return false;
+
+	return !(get_capacity(ns->bdev->bd_disk) &
+			(bdev_zone_sectors(ns->bdev) - 1));
+}
+
+/*
+ * ZNS related Admin and I/O command handlers.
+ */
+void nvmet_execute_identify_cns_cs_ctrl(struct nvmet_req *req)
+{
+	u8 zasl = req->sq->ctrl->subsys->zasl;
+	struct nvmet_ctrl *ctrl = req->sq->ctrl;
+	struct nvme_id_ctrl_zns *id;
+	u16 status;
+
+	id = kzalloc(sizeof(*id), GFP_KERNEL);
+	if (!id) {
+		status = NVME_SC_INTERNAL;
+		goto out;
+	}
+
+	if (ctrl->ops->get_mdts)
+		id->zasl = min_t(u8, ctrl->ops->get_mdts(ctrl), zasl);
+	else
+		id->zasl = zasl;
+
+	status = nvmet_copy_to_sgl(req, 0, id, sizeof(*id));
+
+	kfree(id);
+out:
+	nvmet_req_complete(req, status);
+}
+
+void nvmet_execute_identify_cns_cs_ns(struct nvmet_req *req)
+{
+	struct nvme_id_ns_zns *id_zns;
+	u16 status = 0;
+	u64 zsze;
+
+	if (le32_to_cpu(req->cmd->identify.nsid) == NVME_NSID_ALL) {
+		req->error_loc = offsetof(struct nvme_identify, nsid);
+		status = NVME_SC_INVALID_NS | NVME_SC_DNR;
+		goto out;
+	}
+
+	id_zns = kzalloc(sizeof(*id_zns), GFP_KERNEL);
+	if (!id_zns) {
+		status = NVME_SC_INTERNAL;
+		goto out;
+	}
+
+	req->ns = nvmet_find_namespace(req->sq->ctrl, req->cmd->identify.nsid);
+	if (!req->ns) {
+		status = NVME_SC_INTERNAL;
+		goto done;
+	}
+
+	if (!bdev_is_zoned(req->ns->bdev)) {
+		req->error_loc = offsetof(struct nvme_identify, nsid);
+		status = NVME_SC_INVALID_NS | NVME_SC_DNR;
+		goto done;
+	}
+
+	nvmet_ns_revalidate(req->ns);
+	zsze = (bdev_zone_sectors(req->ns->bdev) << 9) >>
+					req->ns->blksize_shift;
+	id_zns->lbafe[0].zsze = cpu_to_le64(zsze);
+	id_zns->mor = cpu_to_le32(bdev_max_open_zones(req->ns->bdev));
+	id_zns->mar = cpu_to_le32(bdev_max_active_zones(req->ns->bdev));
+
+done:
+	status = nvmet_copy_to_sgl(req, 0, id_zns, sizeof(*id_zns));
+	kfree(id_zns);
+out:
+	nvmet_req_complete(req, status);
+}
+
+struct nvmet_report_zone_data {
+	struct nvmet_ns *ns;
+	struct nvme_zone_report *rz;
+};
+
+static int nvmet_bdev_report_zone_cb(struct blk_zone *z, unsigned int idx,
+				     void *data)
+{
+	struct nvmet_report_zone_data *report_zone_data = data;
+	struct nvme_zone_descriptor *entries = report_zone_data->rz->entries;
+	struct nvmet_ns *ns = report_zone_data->ns;
+
+	entries[idx].zcap = nvmet_sect_to_lba(ns, z->capacity);
+	entries[idx].zslba = nvmet_sect_to_lba(ns, z->start);
+	entries[idx].wp = nvmet_sect_to_lba(ns, z->wp);
+	entries[idx].za = z->reset ? 1 << 2 : 0;
+	entries[idx].zt = z->type;
+	entries[idx].zs = z->cond << 4;
+
+	return 0;
+}
+
+void nvmet_bdev_execute_zone_mgmt_recv(struct nvmet_req *req)
+{
+	sector_t sect = nvmet_lba_to_sect(req->ns, req->cmd->zmr.slba);
+	u32 bufsize = (le32_to_cpu(req->cmd->zmr.numd) + 1) << 2;
+	struct nvmet_report_zone_data data = { .ns = req->ns };
+	unsigned int nr_zones;
+	int reported_zones;
+	u16 status;
+
+	nr_zones = (bufsize - sizeof(struct nvme_zone_report)) /
+			sizeof(struct nvme_zone_descriptor);
+
+	status = nvmet_bdev_zns_checks(req);
+	if (status)
+		goto out;
+
+	data.rz = __vmalloc(bufsize, GFP_KERNEL | __GFP_NORETRY | __GFP_ZERO);
+	if (!data.rz) {
+		status = NVME_SC_INTERNAL;
+		goto out;
+	}
+
+	reported_zones = blkdev_report_zones(req->ns->bdev, sect, nr_zones,
+					     nvmet_bdev_report_zone_cb,
+					     &data);
+	if (reported_zones < 0) {
+		status = NVME_SC_INTERNAL;
+		goto out_free_report_zones;
+	}
+
+	data.rz->nr_zones = cpu_to_le64(reported_zones);
+
+	status = nvmet_copy_to_sgl(req, 0, data.rz, bufsize);
+
+out_free_report_zones:
+	kvfree(data.rz);
+out:
+	nvmet_req_complete(req, status);
+}
+
+void nvmet_bdev_execute_zone_mgmt_send(struct nvmet_req *req)
+{
+	sector_t sect = nvmet_lba_to_sect(req->ns, req->cmd->zms.slba);
+	sector_t nr_sect = bdev_zone_sectors(req->ns->bdev);
+	enum req_opf op = REQ_OP_LAST;
+	u16 status = NVME_SC_SUCCESS;
+	int ret;
+
+	if (req->cmd->zms.select_all)
+		nr_sect = get_capacity(req->ns->bdev->bd_disk);
+
+	switch (req->cmd->zms.zsa) {
+	case NVME_ZONE_OPEN:
+		op = REQ_OP_ZONE_OPEN;
+		break;
+	case NVME_ZONE_CLOSE:
+		op = REQ_OP_ZONE_CLOSE;
+		break;
+	case NVME_ZONE_FINISH:
+		op = REQ_OP_ZONE_FINISH;
+		break;
+	case NVME_ZONE_RESET:
+		op = REQ_OP_ZONE_RESET;
+		break;
+	default:
+		status = NVME_SC_INVALID_FIELD;
+		goto out;
+	}
+
+	ret = blkdev_zone_mgmt(req->ns->bdev, op, sect, nr_sect, GFP_KERNEL);
+	if (ret)
+		status = NVME_SC_INTERNAL;
+out:
+	nvmet_req_complete(req, status);
+}
+
+void nvmet_bdev_execute_zone_append(struct nvmet_req *req)
+{
+	sector_t sect = nvmet_lba_to_sect(req->ns, req->cmd->rw.slba);
+	struct request_queue *q = req->ns->bdev->bd_disk->queue;
+	unsigned int max_sects = queue_max_zone_append_sectors(q);
+	u16 status = NVME_SC_SUCCESS;
+	unsigned int total_len = 0;
+	struct scatterlist *sg;
+	int ret = 0, sg_cnt;
+	struct bio *bio;
+
+	if (!nvmet_check_transfer_len(req, nvmet_rw_data_len(req)))
+		return;
+
+	if (!req->sg_cnt) {
+		nvmet_req_complete(req, 0);
+		return;
+	}
+
+	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
+		bio = &req->b.inline_bio;
+		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
+	} else {
+		bio = bio_alloc(GFP_KERNEL, req->sg_cnt);
+	}
+
+	bio_set_dev(bio, req->ns->bdev);
+	bio->bi_iter.bi_sector = sect;
+	bio->bi_opf = REQ_OP_ZONE_APPEND | REQ_SYNC | REQ_IDLE;
+	if (req->cmd->rw.control & cpu_to_le16(NVME_RW_FUA))
+		bio->bi_opf |= REQ_FUA;
+
+	for_each_sg(req->sg, sg, req->sg_cnt, sg_cnt) {
+		struct page *p = sg_page(sg);
+		unsigned int l = sg->length;
+		unsigned int o = sg->offset;
+		bool same_page = false;
+
+		ret = bio_add_hw_page(q, bio, p, l, o, max_sects, &same_page);
+		if (ret != sg->length) {
+			status = NVME_SC_INTERNAL;
+			goto out_bio_put;
+		}
+		if (same_page)
+			put_page(p);
+
+		total_len += sg->length;
+	}
+
+	if (total_len != nvmet_rw_data_len(req)) {
+		status = NVME_SC_INTERNAL | NVME_SC_DNR;
+		goto out_bio_put;
+	}
+
+	ret = submit_bio_wait(bio);
+	status = ret < 0 ? NVME_SC_INTERNAL : status;
+
+	req->cqe->result.u64 = nvmet_sect_to_lba(req->ns,
+						 bio->bi_iter.bi_sector);
+
+out_bio_put:
+	if (bio != &req->b.inline_bio)
+		bio_put(bio);
+	nvmet_req_complete(req, status);
+}
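One check in nvmet_bdev_zns_enable() above is worth making concrete: the
closing expression only enables the namespace when the disk capacity is
a whole multiple of the zone size, since ZNS has no way to describe a
runt last zone. A worked example with illustrative numbers (not from the
patch):

	/* zones of 524288 sectors (256 MiB), disk of 4194304 sectors: */
	get_capacity(disk) & (bdev_zone_sectors(bdev) - 1)
		== 4194304 & 524287 == 0	/* exactly 8 zones: enable */
	/* any capacity that is not a zone multiple leaves a nonzero
	 * remainder here and the ZNS backend refuses the namespace */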
From patchwork Sun Dec 13 05:50:16 2020
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 11970491
From: Chaitanya Kulkarni
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: sagi@grimberg.me, hch@lst.de, damien.lemoal@wdc.com, Chaitanya Kulkarni
Subject: [PATCH V6 5/6] nvmet: add bio get helper for different backends
Date: Sat, 12 Dec 2020 21:50:16 -0800
Message-Id: <20201213055017.7141-6-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20201213055017.7141-1-chaitanya.kulkarni@wdc.com>
References: <20201213055017.7141-1-chaitanya.kulkarni@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

With the addition of the ZNS backend we now have three different
backends with the inline bio optimization, which leads to duplicate
code for allocating and initializing the bio in all three backends:
generic bdev, passthru, and generic zns.

Add a helper function to reduce the duplication. The helper accepts a
bi_end_io callback, which is set for the non-inline bio_alloc() case.
This parameter is needed for the special case of the passthru backend's
non-inline bio allocation, where we set bio->bi_end_io = bio_put;
passing it avoids an extra branch in the passthru fast path. For the
rest of the backends we set the same bi_end_io callback for the inline
and non-inline cases: nvmet_bio_done() for generic bdev and NULL for
generic zns.

Signed-off-by: Chaitanya Kulkarni
---
 drivers/nvme/target/io-cmd-bdev.c |  7 +------
 drivers/nvme/target/nvmet.h       | 16 ++++++++++++++++
 drivers/nvme/target/passthru.c    |  8 +-------
 drivers/nvme/target/zns.c         |  8 +-------
 4 files changed, 19 insertions(+), 20 deletions(-)

diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 6178ef643962..72746e29cb0d 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -266,12 +266,7 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
 
 	sector = nvmet_lba_to_sect(req->ns, req->cmd->rw.slba);
 
-	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
-		bio = &req->b.inline_bio;
-		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
-	} else {
-		bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
-	}
+	bio = nvmet_req_bio_get(req, NULL);
 	bio_set_dev(bio, req->ns->bdev);
 	bio->bi_iter.bi_sector = sector;
 	bio->bi_private = req;
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 7361665585a2..3fc84f79cce1 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -652,4 +652,20 @@ nvmet_bdev_execute_zone_append(struct nvmet_req *req)
 }
 #endif /* CONFIG_BLK_DEV_ZONED */
 
+static inline struct bio *nvmet_req_bio_get(struct nvmet_req *req,
+					    bio_end_io_t *bi_end_io)
+{
+	struct bio *bio;
+
+	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
+		bio = &req->b.inline_bio;
+		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
+		return bio;
+	}
+
+	bio = bio_alloc(GFP_KERNEL, req->sg_cnt);
+	bio->bi_end_io = bi_end_io;
+	return bio;
+}
+
 #endif /* _NVMET_H */
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index b9776fc8f08f..54f765b566ee 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -194,13 +194,7 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 	if (req->sg_cnt > BIO_MAX_PAGES)
 		return -EINVAL;
 
-	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
-		bio = &req->p.inline_bio;
-		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
-	} else {
-		bio = bio_alloc(GFP_KERNEL, min(req->sg_cnt, BIO_MAX_PAGES));
-		bio->bi_end_io = bio_put;
-	}
+	bio = nvmet_req_bio_get(req, bio_put);
 	bio->bi_opf = req_op(rq);
 
 	for_each_sg(req->sg, sg, req->sg_cnt, i) {
diff --git a/drivers/nvme/target/zns.c b/drivers/nvme/target/zns.c
index 3f9a5ac6a6c5..058da4a2012b 100644
--- a/drivers/nvme/target/zns.c
+++ b/drivers/nvme/target/zns.c
@@ -309,13 +309,7 @@ void nvmet_bdev_execute_zone_append(struct nvmet_req *req)
 		return;
 	}
 
-	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
-		bio = &req->b.inline_bio;
-		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
-	} else {
-		bio = bio_alloc(GFP_KERNEL, req->sg_cnt);
-	}
-
+	bio = nvmet_req_bio_get(req, NULL);
 	bio_set_dev(bio, req->ns->bdev);
 	bio->bi_iter.bi_sector = sect;
 	bio->bi_opf = REQ_OP_ZONE_APPEND | REQ_SYNC | REQ_IDLE;
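A condensed view of how the three call sites end up using the helper
after this patch (sketch only, drawn from the hunks above; the bdev
bi_end_io note follows the commit message rather than a visible hunk
line):

	bio = nvmet_req_bio_get(req, NULL);	/* bdev R/W path:
						 * nvmet_bio_done() is
						 * assigned by the caller */
	bio = nvmet_req_bio_get(req, bio_put);	/* passthru: non-inline
						 * bios free themselves */
	bio = nvmet_req_bio_get(req, NULL);	/* zone append: waited on
						 * synchronously, no end_io */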
From patchwork Sun Dec 13 05:50:17 2020
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 11970489
From: Chaitanya Kulkarni
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: sagi@grimberg.me, hch@lst.de, damien.lemoal@wdc.com, Chaitanya Kulkarni
Subject: [PATCH V6 6/6] nvmet: add bio put helper for different backends
Date: Sat, 12 Dec 2020 21:50:17 -0800
Message-Id: <20201213055017.7141-7-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20201213055017.7141-1-chaitanya.kulkarni@wdc.com>
References: <20201213055017.7141-1-chaitanya.kulkarni@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

With the addition of the ZNS backend we now have three different
backends with the inline bio optimization, which leads to duplicate
code for freeing the bio in all three backends: generic bdev, passthru
and generic zns. Add a helper function to avoid that duplication and
update the respective backends.

Signed-off-by: Chaitanya Kulkarni
---
 drivers/nvme/target/io-cmd-bdev.c | 3 +--
 drivers/nvme/target/nvmet.h       | 6 ++++++
 drivers/nvme/target/passthru.c    | 3 +--
 drivers/nvme/target/zns.c         | 3 +--
 4 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 72746e29cb0d..6ffd84a620e7 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -172,8 +172,7 @@ static void nvmet_bio_done(struct bio *bio)
 	struct nvmet_req *req = bio->bi_private;
 
 	nvmet_req_complete(req, blk_to_nvme_status(req, bio->bi_status));
-	if (bio != &req->b.inline_bio)
-		bio_put(bio);
+	nvmet_req_bio_put(req, bio);
 }
 
 #ifdef CONFIG_BLK_DEV_INTEGRITY
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 3fc84f79cce1..e770086b5890 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -668,4 +668,10 @@ static inline struct bio *nvmet_req_bio_get(struct nvmet_req *req,
 	return bio;
 }
 
+static inline void nvmet_req_bio_put(struct nvmet_req *req, struct bio *bio)
+{
+	if (bio != &req->b.inline_bio)
+		bio_put(bio);
+}
+
 #endif /* _NVMET_H */
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 54f765b566ee..a4a73d64c603 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -200,8 +200,7 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 	for_each_sg(req->sg, sg, req->sg_cnt, i) {
 		if (bio_add_pc_page(rq->q, bio, sg_page(sg), sg->length,
 				    sg->offset) < sg->length) {
-			if (bio != &req->p.inline_bio)
-				bio_put(bio);
+			nvmet_req_bio_put(req, bio);
 			return -EINVAL;
 		}
 	}
diff --git a/drivers/nvme/target/zns.c b/drivers/nvme/target/zns.c
index 058da4a2012b..2b5c04e56097 100644
--- a/drivers/nvme/target/zns.c
+++ b/drivers/nvme/target/zns.c
@@ -345,7 +345,6 @@ void nvmet_bdev_execute_zone_append(struct nvmet_req *req)
 					 bio->bi_iter.bi_sector);
 
 out_bio_put:
-	if (bio != &req->b.inline_bio)
-		bio_put(bio);
+	nvmet_req_bio_put(req, bio);
 	nvmet_req_complete(req, status);
 }
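Taken together, patches 5 and 6 leave every backend with the same
symmetric pattern (sketch only; the error-path shape is illustrative):

	bio = nvmet_req_bio_get(req, NULL);	/* inline or allocated */
	/* ... build and submit the bio ... */
	if (error)
		nvmet_req_bio_put(req, bio);	/* bio_put() only when the
						 * bio is not the inline one */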