Message ID: 20210112042623.6316-1-chaitanya.kulkarni@wdc.com (mailing list archive)
Series: nvmet: add ZBD backend support
Damien,

On 1/11/21 20:26, Chaitanya Kulkarni wrote:
> Hi,
>
> The NVMeOF host is capable of handling NVMe protocol based Zoned Block
> Devices (ZBD) in Zoned Namespaces (ZNS) mode with the passthru backend.
> There is no support for a generic block device backend to handle ZBDs
> which are not NVMe protocol compliant.
>
> This adds support to export ZBDs (which are not NVMe drives) from the
> target to the host via NVMeOF using the host side ZNS interface.
>
> The patch series is generated in a bottom-up manner: it first adds prep
> patches and ZNS command-specific handlers on top of genblk and updates
> the data structures, then one by one it wires up the admin commands in
> the order the host calls them in the namespace initialization sequence.
> Once everything is ready, it wires up the I/O command handlers. See
> below for the patch-series overview.
>
> All the ZoneFS test cases pass both for a ZBD exported over NVMeOF
> backed by a null_blk ZBD and for a null_blk ZBD without NVMeOF. Test
> results are added below.
>
> Note: this patch series is based on the earlier posted series:
>
> [PATCH V2 0/4] nvmet: admin-cmd related cleanups and a fix
> http://lists.infradead.org/pipermail/linux-nvme/2021-January/021729.html
>
> -ck

Thanks a lot for your comments, I'll send a V10 with fixes for them.
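A minimal sketch of the kind of genblk-level check the "ZNS command-specific handlers" above depend on, using the in-kernel zoned block device API; the helper name nvmet_bdev_zns_capable is illustrative here, not taken from the patches:

#include <linux/blkdev.h>

/*
 * Sketch only: a generic (non-NVMe) block device can be exported
 * through the ZNS interface only if it is zoned. Host-managed
 * (BLK_ZONED_HM) devices forbid random writes inside sequential
 * zones, which matches the ZNS write model; host-aware devices
 * would need extra care.
 */
static bool nvmet_bdev_zns_capable(struct block_device *bdev)
{
	if (!bdev_is_zoned(bdev))
		return false;

	return blk_queue_zoned_model(bdev_get_queue(bdev)) == BLK_ZONED_HM;
}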
Christoph/Damien,

On 1/11/21 8:26 PM, Chaitanya Kulkarni wrote:
> Hi,
>
> [full cover letter snipped; it is quoted in the previous reply]
>
> Changes from V8:-
>
> 1. Rebase and retest on latest nvme-5.11.
> 2. Export ctrl->cap CSI support only if CONFIG_BLK_DEV_ZONED is set.
> 3. Add a fix to the admin ns-desc list handler for handling the
>    default CSI.

I can see that Damien's granularity series is in the linux-block tree.
I'm planning to send v10 of this series; given that it also has a block
layer patch [1], should I use linux-block/for-next or
linux-nvme/nvme-5.12?

[1] [PATCH V9 1/9] block: export bio_add_hw_pages()
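Change 2 above amounts to a compile-time gate; a hedged sketch of that pattern follows (the helper name nvmet_zns_enabled is made up for illustration, not from the patches):

#include <linux/kconfig.h>

/*
 * Sketch only: advertise the ZNS command set identifier (CSI) in the
 * controller capabilities only when the kernel itself is built with
 * zoned block device support; otherwise report the NVM command set
 * alone.
 */
static inline bool nvmet_zns_enabled(void)
{
	return IS_ENABLED(CONFIG_BLK_DEV_ZONED);
}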
On Wed, Feb 10, 2021 at 10:42:42PM +0000, Chaitanya Kulkarni wrote:
> I can see that Damien's granularity series is in the linux-block tree.
> I'm planning to send v10 of this series; given that it also has a block
> layer patch [1], should I use linux-block/for-next or
> linux-nvme/nvme-5.12?

I'd just wait for -rc1.