[v8,00/10] Introduce VDUSE - vDPA Device in Userspace

Message ID 20210615141331.407-1-xieyongji@bytedance.com (mailing list archive)

Message

Yongji Xie June 15, 2021, 2:13 p.m. UTC
This series introduces a framework that makes it possible to implement
software-emulated vDPA devices in userspace. To keep it simple, the
emulated vDPA device's control path is handled in the kernel and only the
data path is implemented in userspace.

Since the emulated vDPA device's control path is handled in the kernel,
a message mechanism is introduced to make userspace aware of data path
related changes. Userspace can use read()/write() to receive and reply to
the control messages.

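As a rough illustration (not the exact uAPI), a userspace message loop could
look like the sketch below; the structure layouts and the /dev/vduse/<name>
path are assumptions standing in for the definitions in the series'
include/uapi/linux/vduse.h:

  /* Hypothetical sketch of a VDUSE daemon's control-message loop.
   * The struct layouts below are placeholders; the real definitions
   * live in include/uapi/linux/vduse.h from this series.
   */
  #include <stdint.h>
  #include <unistd.h>

  struct vduse_dev_request {          /* assumed layout */
          uint32_t type;              /* e.g. virtqueue setup or IOTLB update */
          uint32_t request_id;
          uint32_t reserved[4];
          uint8_t  payload[64];
  };

  struct vduse_dev_response {         /* assumed layout */
          uint32_t request_id;
          uint32_t result;            /* 0 on success */
          uint32_t reserved[4];
          uint8_t  payload[64];
  };

  /* dev_fd: the per-device fd, e.g. opened from /dev/vduse/<name> */
  static void handle_control_messages(int dev_fd)
  {
          struct vduse_dev_request req;
          struct vduse_dev_response resp = { 0 };

          /* Each read() delivers one control message from the kernel. */
          while (read(dev_fd, &req, sizeof(req)) == sizeof(req)) {
                  /* ...apply the requested data path change here... */
                  resp.request_id = req.request_id;
                  resp.result = 0;
                  /* Each write() sends the reply for that message. */
                  if (write(dev_fd, &resp, sizeof(resp)) != sizeof(resp))
                          break;
          }
  }
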
In the data path, the core idea is mapping the DMA buffer into the VDUSE
daemon's address space, which can be implemented in different ways depending
on the vdpa bus to which the vDPA device is attached.

In the virtio-vdpa case, we implement an MMU-based on-chip IOMMU driver with
a bounce-buffering mechanism to achieve that. In the vhost-vdpa case, the DMA
buffer resides in a userspace memory region which can be shared with the
VDUSE userspace process via transferring the shmfd.

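To give a feel for the daemon side of that mapping, here is a minimal sketch
that asks the kernel for a file descriptor backing a given IOVA range and
mmap()s it; the VDUSE_IOTLB_GET_FD name, its ioctl number and the
vduse_iotlb_entry layout are assumptions modelled on the series' uAPI rather
than a stable ABI:

  /* Sketch only: map the region covering a given IOVA into the daemon.
   * The ioctl number and structure layout are assumed, not authoritative.
   */
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <unistd.h>

  struct vduse_iotlb_entry {        /* assumed layout */
          uint64_t offset;          /* offset into the returned fd */
          uint64_t start;           /* first IOVA covered by the region */
          uint64_t last;            /* last IOVA covered by the region */
          uint8_t  perm;            /* access permission of the region */
  };

  #define VDUSE_IOTLB_GET_FD _IOWR(0x81, 0x10, struct vduse_iotlb_entry)

  static void *map_iova_region(int dev_fd, uint64_t iova, uint64_t *size)
  {
          struct vduse_iotlb_entry entry = { .start = iova, .last = iova };
          void *addr;
          int fd;

          /* The kernel hands back an fd for the region: bounce pages in the
           * virtio-vdpa case, the shared userspace memory in the vhost-vdpa
           * case. */
          fd = ioctl(dev_fd, VDUSE_IOTLB_GET_FD, &entry);
          if (fd < 0)
                  return MAP_FAILED;

          *size = entry.last - entry.start + 1;
          addr = mmap(NULL, *size, PROT_READ | PROT_WRITE, MAP_SHARED,
                      fd, entry.offset);
          close(fd);                /* the mapping keeps the region alive */
          return addr;
  }
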
The details and our use case are shown below:

------------------------    -------------------------   ----------------------------------------------
|            Container |    |              QEMU(VM) |   |                               VDUSE daemon |
|       ---------      |    |  -------------------  |   | ------------------------- ---------------- |
|       |dev/vdx|      |    |  |/dev/vhost-vdpa-x|  |   | | vDPA device emulation | | block driver | |
------------+-----------     -----------+------------   -------------+----------------------+---------
            |                           |                            |                      |
            |                           |                            |                      |
------------+---------------------------+----------------------------+----------------------+---------
|    | block device |           |  vhost device |            | vduse driver |          | TCP/IP |    |
|    -------+--------           --------+--------            -------+--------          -----+----    |
|           |                           |                           |                       |        |
| ----------+----------       ----------+-----------         -------+-------                |        |
| | virtio-blk driver |       |  vhost-vdpa driver |         | vdpa device |                |        |
| ----------+----------       ----------+-----------         -------+-------                |        |
|           |      virtio bus           |                           |                       |        |
|   --------+----+-----------           |                           |                       |        |
|                |                      |                           |                       |        |
|      ----------+----------            |                           |                       |        |
|      | virtio-blk device |            |                           |                       |        |
|      ----------+----------            |                           |                       |        |
|                |                      |                           |                       |        |
|     -----------+-----------           |                           |                       |        |
|     |  virtio-vdpa driver |           |                           |                       |        |
|     -----------+-----------           |                           |                       |        |
|                |                      |                           |    vdpa bus           |        |
|     -----------+----------------------+---------------------------+------------           |        |
|                                                                                        ---+---     |
-----------------------------------------------------------------------------------------| NIC |------
                                                                                         ---+---
                                                                                            |
                                                                                   ---------+---------
                                                                                   | Remote Storages |
                                                                                   -------------------

We make use of it to implement a block device connecting to
our distributed storage, which can be used both in containers and in
VMs. Thus, we can have a unified technology stack in these two cases.

To test it with null-blk:

  $ qemu-storage-daemon \
      --chardev socket,id=charmonitor,path=/tmp/qmp.sock,server,nowait \
      --monitor chardev=charmonitor \
      --blockdev driver=host_device,cache.direct=on,aio=native,filename=/dev/nullb0,node-name=disk0 \
      --export type=vduse-blk,id=test,node-name=disk0,writable=on,name=vduse-null,num-queues=16,queue-size=128

The qemu-storage-daemon can be found at https://github.com/bytedance/qemu/tree/vduse

To make userspace VDUSE processes such as qemu-storage-daemon runnable by
an unprivileged user, we did some work on the virtio drivers to avoid
trusting the device, including:

  - validating the used length: 

    * https://lore.kernel.org/lkml/20210531135852.113-1-xieyongji@bytedance.com/
    * https://lore.kernel.org/lkml/20210525125622.1203-1-xieyongji@bytedance.com/

  - validating the device config:
    
    * https://lore.kernel.org/lkml/20210615104810.151-1-xieyongji@bytedance.com/

  - validating the device response:

    * https://lore.kernel.org/lkml/20210615105218.214-1-xieyongji@bytedance.com/

Since I'm not sure whether I missed something during the audit, especially in
some virtio device drivers that I'm not familiar with, we currently limit the
supported device type to the virtio block device. Support for other device
types can be added after the security issues of the corresponding device
drivers are clarified or fixed in the future.

Future work:
  - Improve performance
  - Userspace library (find a way to reuse device emulation code in qemu/rust-vmm)
  - Support more device types

V7 to V8:
- Rebased to the latest kernel tree
- Rework VDUSE driver to handle the device's control path in kernel
- Limit the supported device type to virtio block device
- Export free_iova_fast()
- Remove the virtio-blk and virtio-scsi patches (will send them alone)
- Remove all module parameters
- Use the same MAJOR for both control device and VDUSE devices
- Avoid eventfd cleanup in vduse_dev_release()

V6 to V7:
- Export alloc_iova_fast()
- Add get_config_size() callback
- Add some patches to avoid trusting virtio devices
- Add limited device emulation
- Add some documents
- Use workqueue to inject config irq
- Add a parameter for vq irq injection
- Rename vduse_domain_get_mapping_page() to vduse_domain_get_coherent_page()
- Add WARN_ON() to catch message failure
- Add some padding/reserved fields to uAPI structure
- Fix some bugs
- Rebase to vhost.git

V5 to V6:
- Export receive_fd() instead of __receive_fd()
- Factor out the unmapping logic of pa and va separately
- Remove the logic of bounce page allocation in page fault handler
- Use PAGE_SIZE as IOVA allocation granule
- Add EPOLLOUT support
- Enable setting API version in userspace
- Fix some bugs

V4 to V5:
- Remove the patch for irq binding
- Use a single IOTLB for all types of mapping
- Factor out vhost_vdpa_pa_map()
- Add some sample codes in document
- Use receive_fd_user() to pass the file descriptor
- Fix some bugs

V3 to V4:
- Rebase to vhost.git
- Split some patches
- Add some documents
- Use ioctl to inject interrupt rather than eventfd
- Enable config interrupt support
- Support binding irq to the specified cpu
- Add two module parameters to limit bounce/iova size
- Create char device rather than anon inode per vduse
- Reuse vhost IOTLB for iova domain
- Rework the message mechanism in the control path

V2 to V3:
- Rework the MMU-based IOMMU driver
- Use the iova domain as iova allocator instead of genpool
- Support transferring vma->vm_file in vhost-vdpa
- Add SVA support in vhost-vdpa
- Remove the patches on bounce pages reclaim

V1 to V2:
- Add vhost-vdpa support
- Add some documents
- Based on the vdpa management tool
- Introduce a workqueue for irq injection
- Replace interval tree with array map to store the iova_map

Xie Yongji (10):
  iova: Export alloc_iova_fast() and free_iova_fast();
  file: Export receive_fd() to modules
  eventfd: Increase the recursion depth of eventfd_signal()
  vhost-iotlb: Add an opaque pointer for vhost IOTLB
  vdpa: Add an opaque pointer for vdpa_config_ops.dma_map()
  vdpa: factor out vhost_vdpa_pa_map() and vhost_vdpa_pa_unmap()
  vdpa: Support transferring virtual addressing during DMA mapping
  vduse: Implement an MMU-based IOMMU driver
  vduse: Introduce VDUSE - vDPA Device in Userspace
  Documentation: Add documentation for VDUSE

 Documentation/userspace-api/index.rst              |    1 +
 Documentation/userspace-api/ioctl/ioctl-number.rst |    1 +
 Documentation/userspace-api/vduse.rst              |  222 +++
 drivers/iommu/iova.c                               |    2 +
 drivers/vdpa/Kconfig                               |   10 +
 drivers/vdpa/Makefile                              |    1 +
 drivers/vdpa/ifcvf/ifcvf_main.c                    |    2 +-
 drivers/vdpa/mlx5/net/mlx5_vnet.c                  |    2 +-
 drivers/vdpa/vdpa.c                                |    9 +-
 drivers/vdpa/vdpa_sim/vdpa_sim.c                   |    8 +-
 drivers/vdpa/vdpa_user/Makefile                    |    5 +
 drivers/vdpa/vdpa_user/iova_domain.c               |  545 ++++++++
 drivers/vdpa/vdpa_user/iova_domain.h               |   73 +
 drivers/vdpa/vdpa_user/vduse_dev.c                 | 1453 ++++++++++++++++++++
 drivers/vdpa/virtio_pci/vp_vdpa.c                  |    2 +-
 drivers/vhost/iotlb.c                              |   20 +-
 drivers/vhost/vdpa.c                               |  148 +-
 fs/eventfd.c                                       |    2 +-
 fs/file.c                                          |    6 +
 include/linux/eventfd.h                            |    5 +-
 include/linux/file.h                               |    7 +-
 include/linux/vdpa.h                               |   21 +-
 include/linux/vhost_iotlb.h                        |    3 +
 include/uapi/linux/vduse.h                         |  143 ++
 24 files changed, 2641 insertions(+), 50 deletions(-)
 create mode 100644 Documentation/userspace-api/vduse.rst
 create mode 100644 drivers/vdpa/vdpa_user/Makefile
 create mode 100644 drivers/vdpa/vdpa_user/iova_domain.c
 create mode 100644 drivers/vdpa/vdpa_user/iova_domain.h
 create mode 100644 drivers/vdpa/vdpa_user/vduse_dev.c
 create mode 100644 include/uapi/linux/vduse.h

Comments

Stefan Hajnoczi June 24, 2021, 3:12 p.m. UTC | #1
On Tue, Jun 15, 2021 at 10:13:21PM +0800, Xie Yongji wrote:
> This series introduces a framework that makes it possible to implement
> software-emulated vDPA devices in userspace. And to make it simple, the
> emulated vDPA device's control path is handled in the kernel and only the
> data path is implemented in the userspace.

This looks interesting. Unfortunately I don't have enough time to do a
full review, but I looked at the documentation and uapi header file to
give feedback on the userspace ABI.

Stefan
Jason Wang June 28, 2021, 4:35 a.m. UTC | #2
On 2021/6/28 6:33 PM, Liu Xiaodong wrote:
> On Tue, Jun 15, 2021 at 10:13:21PM +0800, Xie Yongji wrote:
>> This series introduces a framework that makes it possible to implement
>> software-emulated vDPA devices in userspace. And to make it simple, the
>> emulated vDPA device's control path is handled in the kernel and only the
>> data path is implemented in the userspace.
>> [...]
> Hi, Yongji
>
> Great work! Your method of implementing a software IOMMU so that the data
> path gets processed efficiently by the userspace application is really wise.
> Sorry, I've only just noticed your work and patches.
>
>
> I was working on a similar thing aiming to get the vhost-user-blk device
> from the SPDK vhost-target exported as a local host kernel block device.
> Its diagram is like this:
>
>
>                                  -----------------------------
> ------------------------        |    -----------------      |    ---------------------------------------
> |   <RunC Container>   |     <<<<<<<<| Shared-Memory |>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>        |
> |       ---------      |     v  |    -----------------      |    |                            v        |
> |       |dev/vdx|      |     v  |   <virtio-local-agent>    |    |      <Vhost-user Target>   v        |
> ------------+-----------     v  | ------------------------  |    |  --------------------------v------  |
>              |                v  | |/dev/virtio-local-ctrl|  |    |  | unix socket |   |block driver |  |
>              |                v  ------------+----------------    --------+--------------------v---------
>              |                v              |                            |                    v
> ------------+----------------v--------------+----------------------------+--------------------v--------|
> |    | block device |        v      |  Misc device |                     |                    v        |
> |    -------+--------        v      --------+-------                     |                    v        |
> |           |                v              |                            |                    v        |
> | ----------+----------      v              |                            |                    v        |
> | | virtio-blk driver |      v              |                            |                    v        |
> | ----------+----------      v              |                            |                    v        |
> |           | virtio bus     v              |                            |                    v        |
> |   --------+---+-------     v              |                            |                    v        |
> |               |            v              |                            |                    v        |
> |               |            v              |                            |                    v        |
> |     ----------+----------  v     ---------+-----------                 |                    v        |
> |     | virtio-blk device |--<----| virtio-local driver |----------------<                    v        |
> |     ----------+----------       ----------+-----------                                      v        |
> |                                                                                    ---------+--------|
> -------------------------------------------------------------------------------------| RNIC |--| PCIe |-
>                                                                                       ----+---  | NVMe |
>                                                                                           |     --------
>                                                                                  ---------+---------
>                                                                                  | Remote Storages |
>                                                                                  -------------------
>
>
> I just drafted an initial proof-of-concept version. When seeing your RFC mail,
> I thought that the SPDK target may depend on your work, so I could
> directly drop mine.
> But after a glance at the RFC patches, it seems it is not so easy or
> efficient to get vduse leveraged by SPDK.
> (Please correct me if I have a wrong understanding of vduse. :) )
>
> The large barrier is bounce-buffer mapping: SPDK requires hugepages
> for NVMe over PCIe and RDMA, so taking some preallocated hugepages to
> map as the bounce buffer is necessary. Otherwise it's hard to avoid an extra
> memcpy from the bounce buffer to hugepages.
> If you can add an option to map hugepages as the bounce buffer,
> then SPDK could also be a potential user of vduse.


Several issues:

- VDUSE needs to limit the total size of the bounce buffers (64MB if I
was not wrong). Does it work for SPDK?
- VDUSE can use hugepages but I'm not sure we can mandate hugepages (or
we need to introduce new flags to support this)

Thanks


>
> It would be better if the SPDK vhost-target could leverage the datapath of
> vduse directly and efficiently. Even though the control path is vdpa based,
> we may work out a daemon as an agent to bridge the SPDK vhost-target with vduse.
> Then users who already deployed the SPDK vhost-target can smoothly run
> such an agent daemon without code modification on the SPDK vhost-target itself.
> (It is only nice-to-have for the SPDK vhost-target app, not mandatory for SPDK) :)
> At least, one small barrier is there that blocks a vhost-target from using the
> vduse datapath efficiently:
> - The current IO completion irq of vduse is ioctl based. If an option is added
> to make it eventfd based, then the vhost-target can directly notify IO
> completion via a negotiated eventfd.
>
>
> Thanks
>  From Xiaodong
Liu Xiaodong June 28, 2021, 5:54 a.m. UTC | #3
>-----Original Message-----
>From: Jason Wang <jasowang@redhat.com>
>Sent: Monday, June 28, 2021 12:35 PM
>To: Liu, Xiaodong <xiaodong.liu@intel.com>; Xie Yongji
><xieyongji@bytedance.com>; mst@redhat.com; stefanha@redhat.com;
>sgarzare@redhat.com; parav@nvidia.com; hch@infradead.org;
>christian.brauner@canonical.com; rdunlap@infradead.org; willy@infradead.org;
>viro@zeniv.linux.org.uk; axboe@kernel.dk; bcrl@kvack.org; corbet@lwn.net;
>mika.penttila@nextfour.com; dan.carpenter@oracle.com; joro@8bytes.org;
>gregkh@linuxfoundation.org
>Cc: songmuchun@bytedance.com; virtualization@lists.linux-foundation.org;
>netdev@vger.kernel.org; kvm@vger.kernel.org; linux-fsdevel@vger.kernel.org;
>iommu@lists.linux-foundation.org; linux-kernel@vger.kernel.org
>Subject: Re: [PATCH v8 00/10] Introduce VDUSE - vDPA Device in Userspace
>
>
>On 2021/6/28 6:33 PM, Liu Xiaodong wrote:
>> On Tue, Jun 15, 2021 at 10:13:21PM +0800, Xie Yongji wrote:
>>> [...]
>> [...]
>> The large barrier is bounce-buffer mapping: SPDK requires hugepages
>> for NVMe over PCIe and RDMA, so take some preallcoated hugepages to
>> map as bounce buffer is necessary. Or it's hard to avoid an extra
>> memcpy from bounce-buffer to hugepage.
>> If you can add an option to map hugepages as bounce-buffer, then SPDK
>> could also be a potential user of vduse.
>
>
>Several issues:
>
>- VDUSE needs to limit the total size of the bounce buffers (64M if I was not
>wrong). Does it work for SPDK?

Yes, Jason. It is enough and works for SPDK.
Since it's a kind of bounce buffer mainly for in-flight IO, a limited size like
64MB is enough.

>- VDUSE can use hugepages but I'm not sure we can mandate hugepages (or we
>need introduce new flags for supporting this)

I share your worry; I'm afraid too that it is hard for a kernel module
to directly preallocate hugepages internally.
What I tried is this (a rough sketch of the userspace side is shown below):
1. A simple agent daemon (representing one device) preallocates and maps
    dozens of 2MB hugepages (like 64MB) for one device.
2. The daemon passes its mapping addr & len and the hugepage fd to the kernel
    module through a created ioctl.
3. The kernel module remaps the hugepages inside the kernel.
4. The vhost-user target gets and maps the hugepage fd from the kernel module
    in a vhost-user msg through a Unix domain socket cmsg.
Then the kernel module and the target map the same hugepage-based
bounce buffer for in-flight IO.

If there is an option in VDUSE to map userspace-preallocated memory, then
VDUSE should be able to mandate it even if it is hugepage based.

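For reference, a minimal sketch of step 1 and the fd transfer used in step 4,
relying only on standard Linux APIs (memfd_create() with MFD_HUGETLB and an
SCM_RIGHTS cmsg); the names and sizes are illustrative and none of this is
VDUSE uAPI:

  /* Illustrative only: preallocate a hugepage-backed region and pass its
   * fd to another process over a Unix domain socket. */
  #define _GNU_SOURCE
  #include <string.h>
  #include <sys/mman.h>
  #include <sys/socket.h>
  #include <sys/uio.h>
  #include <unistd.h>

  #define BOUNCE_SIZE (64UL << 20)            /* e.g. 64MB of 2MB hugepages */

  static int alloc_hugepage_bounce(void **addr)
  {
          int fd = memfd_create("bounce", MFD_HUGETLB);

          if (fd < 0 || ftruncate(fd, BOUNCE_SIZE) < 0)
                  return -1;
          *addr = mmap(NULL, BOUNCE_SIZE, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
          return *addr == MAP_FAILED ? -1 : fd;
  }

  static int send_fd(int sock, int fd)        /* SCM_RIGHTS cmsg transfer */
  {
          char byte = 0;
          struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
          union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } u;
          struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                                .msg_control = u.buf,
                                .msg_controllen = sizeof(u.buf) };
          struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

          cmsg->cmsg_level = SOL_SOCKET;
          cmsg->cmsg_type = SCM_RIGHTS;
          cmsg->cmsg_len = CMSG_LEN(sizeof(int));
          memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));
          return sendmsg(sock, &msg, 0);      /* receiver gets its own copy of fd */
  }
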
>Thanks
>
>
>> [...]
Yongji Xie June 28, 2021, 10:32 a.m. UTC | #4
On Mon, 28 Jun 2021 at 10:55, Liu Xiaodong <xiaodong.liu@intel.com> wrote:
>
> On Tue, Jun 15, 2021 at 10:13:21PM +0800, Xie Yongji wrote:
> >
> > [...]
>
> Hi, Yongji
>
> Great work! Your method is really wise: it implements a software IOMMU
> so that the data path gets processed efficiently by a userspace application.
> Sorry, I've only just noticed your work and patches.
>
>
> I was working on a similar thing, aiming to get the vhost-user-blk device
> from the SPDK vhost-target exported as a local host kernel block device.
> Its diagram is like this:
>
>
>                                 -----------------------------
> ------------------------        |    -----------------      |    ---------------------------------------
> |   <RunC Container>   |     <<<<<<<<| Shared-Memory |>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>        |
> |       ---------      |     v  |    -----------------      |    |                            v        |
> |       |dev/vdx|      |     v  |   <virtio-local-agent>    |    |      <Vhost-user Target>   v        |
> ------------+-----------     v  | ------------------------  |    |  --------------------------v------  |
>             |                v  | |/dev/virtio-local-ctrl|  |    |  | unix socket |   |block driver |  |
>             |                v  ------------+----------------    --------+--------------------v---------
>             |                v              |                            |                    v
> ------------+----------------v--------------+----------------------------+--------------------v--------|
> |    | block device |        v      |  Misc device |                     |                    v        |
> |    -------+--------        v      --------+-------                     |                    v        |
> |           |                v              |                            |                    v        |
> | ----------+----------      v              |                            |                    v        |
> | | virtio-blk driver |      v              |                            |                    v        |
> | ----------+----------      v              |                            |                    v        |
> |           | virtio bus     v              |                            |                    v        |
> |   --------+---+-------     v              |                            |                    v        |
> |               |            v              |                            |                    v        |
> |               |            v              |                            |                    v        |
> |     ----------+----------  v     ---------+-----------                 |                    v        |
> |     | virtio-blk device |--<----| virtio-local driver |----------------<                    v        |
> |     ----------+----------       ----------+-----------                                      v        |
> |                                                                                    ---------+--------|
> -------------------------------------------------------------------------------------| RNIC |--| PCIe |-
>                                                                                      ----+---  | NVMe |
>                                                                                          |     --------
>                                                                                 ---------+---------
>                                                                                 | Remote Storages |
>                                                                                 -------------------
>

Oh, yes, this design is similar to VDUSE.

>
> I just drafted an initial proof-of-concept version. When I saw your RFC mail,
> I thought the SPDK target might depend on your work, so I could
> directly drop mine.

Great to hear that! I think we can extend VDUSE to meet your needs.
But I prefer to do that after this initial version is merged.

> But after a glance at the RFC patches, it seems it is not so easy or
> efficient for SPDK to leverage vduse.
> (Please correct me if I have a wrong understanding of vduse. :) )
>
> The large barrier is bounce-buffer mapping: SPDK requires hugepages
> for NVMe over PCIe and RDMA, so taking some preallocated hugepages to
> map as the bounce buffer is necessary. Otherwise it's hard to avoid an extra
> memcpy from the bounce buffer to the hugepages.
> If you can add an option to map hugepages as the bounce buffer,
> then SPDK could also be a potential user of vduse.
>

I think we can support registering user space memory for bounce-buffer
use like XDP does. But this needs to pin the pages, so I didn't
consider it in this initial version.
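
For illustration, a minimal sketch of what such an XDP-like registration could
look like from the daemon side is below. VDUSE_REG_UMEM, its ioctl number, and
struct vduse_umem_reg are assumed names invented for this sketch; nothing like
this exists in the series as posted.

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>

    /* Hypothetical uapi: describe a preallocated userspace region to the kernel. */
    struct vduse_umem_reg {
            uint64_t uaddr;         /* start of the userspace buffer */
            uint64_t size;          /* length in bytes */
            uint64_t reserved[2];
    };
    #define VDUSE_REG_UMEM _IOW(0x81, 0x20, struct vduse_umem_reg) /* illustrative number */

    static int register_bounce_umem(int dev_fd, size_t size)
    {
            /* Back the bounce buffer with hugepages, as SPDK would want. */
            void *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
                             MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
            if (buf == MAP_FAILED)
                    return -1;

            struct vduse_umem_reg reg = {
                    .uaddr = (uintptr_t)buf,
                    .size  = size,
            };
            /* The kernel side would have to pin these pages before using them. */
            return ioctl(dev_fd, VDUSE_REG_UMEM, &reg);
    }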

> It would be better if the SPDK vhost-target could leverage the datapath of
> vduse directly and efficiently. Even though the control path is vdpa based,
> we may work out a daemon as an agent to bridge the SPDK vhost-target with vduse.
> Then users who have already deployed the SPDK vhost-target can smoothly run
> such an agent daemon without code modification to the SPDK vhost-target itself.

That's a good idea!

> (It is only nice-to-have for the SPDK vhost-target app, not mandatory for SPDK.) :)
> At least, one small barrier remains that blocks a vhost-target from using the vduse
> datapath efficiently:
> - The current IO completion irq of vduse is ioctl based. If an option is added
> to make it eventfd based, then the vhost-target can directly notify IO
> completion via a negotiated eventfd.
>

Makes sense. Actually, we did use the eventfd mechanism for this purpose
in the old version. But using an ioctl is simpler, so we chose it
in this initial version.
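
For readers following along, the two notification models under discussion differ
roughly as in the sketch below. The ioctl name and number here are assumptions
made for the sketch (the ioctl actually used by the series is defined in its
include/uapi/linux/vduse.h), and the per-vq eventfd is the suggested alternative,
not an existing interface.

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    /* Assumed ioctl definition for the sketch; see the series' uapi header for the real one. */
    #define VDUSE_INJECT_VQ_IRQ _IO(0x81, 0x21)

    /* Model in this series: the VDUSE daemon asks the vduse driver to inject the vq irq. */
    static void notify_via_ioctl(int dev_fd, unsigned int vq_index)
    {
            ioctl(dev_fd, VDUSE_INJECT_VQ_IRQ, vq_index);
    }

    /* Suggested option: signal a pre-negotiated per-vq eventfd, which an external
     * vhost-user target could do without knowing any VDUSE-specific interface. */
    static void notify_via_eventfd(int vq_irq_efd)
    {
            uint64_t one = 1;
            (void)write(vq_irq_efd, &one, sizeof(one));
    }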

Thanks,
Yongji
Liu Xiaodong June 28, 2021, 10:33 a.m. UTC | #5
On Tue, Jun 15, 2021 at 10:13:21PM +0800, Xie Yongji wrote:
> 
> This series introduces a framework that makes it possible to implement
> software-emulated vDPA devices in userspace. And to make it simple, the
> emulated vDPA device's control path is handled in the kernel and only the
> data path is implemented in the userspace.
> 
> Since the emuldated vDPA device's control path is handled in the kernel,
> a message mechnism is introduced to make userspace be aware of the data
> path related changes. Userspace can use read()/write() to receive/reply
> the control messages.
> 
> In the data path, the core is mapping dma buffer into VDUSE daemon's
> address space, which can be implemented in different ways depending on
> the vdpa bus to which the vDPA device is attached.
> 
> In virtio-vdpa case, we implements a MMU-based on-chip IOMMU driver with
> bounce-buffering mechanism to achieve that. And in vhost-vdpa case, the dma
> buffer is reside in a userspace memory region which can be shared to the
> VDUSE userspace processs via transferring the shmfd.
> 
> The details and our user case is shown below:
> 
> ------------------------    -------------------------   ----------------------------------------------
> |            Container |    |              QEMU(VM) |   |                               VDUSE daemon |
> |       ---------      |    |  -------------------  |   | ------------------------- ---------------- |
> |       |dev/vdx|      |    |  |/dev/vhost-vdpa-x|  |   | | vDPA device emulation | | block driver | |
> ------------+-----------     -----------+------------   -------------+----------------------+---------
>             |                           |                            |                      |
>             |                           |                            |                      |
> ------------+---------------------------+----------------------------+----------------------+---------
> |    | block device |           |  vhost device |            | vduse driver |          | TCP/IP |    |
> |    -------+--------           --------+--------            -------+--------          -----+----    |
> |           |                           |                           |                       |        |
> | ----------+----------       ----------+-----------         -------+-------                |        |
> | | virtio-blk driver |       |  vhost-vdpa driver |         | vdpa device |                |        |
> | ----------+----------       ----------+-----------         -------+-------                |        |
> |           |      virtio bus           |                           |                       |        |
> |   --------+----+-----------           |                           |                       |        |
> |                |                      |                           |                       |        |
> |      ----------+----------            |                           |                       |        |
> |      | virtio-blk device |            |                           |                       |        |
> |      ----------+----------            |                           |                       |        |
> |                |                      |                           |                       |        |
> |     -----------+-----------           |                           |                       |        |
> |     |  virtio-vdpa driver |           |                           |                       |        |
> |     -----------+-----------           |                           |                       |        |
> |                |                      |                           |    vdpa bus           |        |
> |     -----------+----------------------+---------------------------+------------           |        |
> |                                                                                        ---+---     |
> -----------------------------------------------------------------------------------------| NIC |------
>                                                                                          ---+---
>                                                                                             |
>                                                                                    ---------+---------
>                                                                                    | Remote Storages |
>                                                                                    -------------------
> 
> We make use of it to implement a block device connecting to
> our distributed storage, which can be used both in containers and
> VMs. Thus, we can have a unified technology stack in these two cases.
> 
> To test it with null-blk:
> 
>   $ qemu-storage-daemon \
>       --chardev socket,id=charmonitor,path=/tmp/qmp.sock,server,nowait \
>       --monitor chardev=charmonitor \
>       --blockdev driver=host_device,cache.direct=on,aio=native,filename=/dev/nullb0,node-name=disk0 \
>       --export type=vduse-blk,id=test,node-name=disk0,writable=on,name=vduse-null,num-queues=16,queue-size=128
> 
> The qemu-storage-daemon can be found at https://github.com/bytedance/qemu/tree/vduse
> 
> To make userspace VDUSE processes such as qemu-storage-daemon able to
> be run by an unprivileged user, we did some work on the virtio drivers to avoid
> trusting the device, including:
> 
>   - validating the used length:
> 
>     * https://lore.kernel.org/lkml/20210531135852.113-1-xieyongji@bytedance.com/
>     * https://lore.kernel.org/lkml/20210525125622.1203-1-xieyongji@bytedance.com/
> 
>   - validating the device config:
> 
>     * https://lore.kernel.org/lkml/20210615104810.151-1-xieyongji@bytedance.com/
> 
>   - validating the device response:
> 
>     * https://lore.kernel.org/lkml/20210615105218.214-1-xieyongji@bytedance.com/
> 
> Since I'm not sure whether I missed something during auditing, especially in some
> virtio device drivers that I'm not familiar with, we currently limit the supported
> device type to the virtio block device. Support for other device types can be
> added after the security issues of the corresponding device drivers are clarified or
> fixed in the future.
> 
> Future work:
>   - Improve performance
>   - Userspace library (find a way to reuse device emulation code in qemu/rust-vmm)
>   - Support more device types
> 
> V7 to V8:
> - Rebased to newest kernel tree
> - Rework VDUSE driver to handle the device's control path in kernel
> - Limit the supported device type to virtio block device
> - Export free_iova_fast()
> - Remove the virtio-blk and virtio-scsi patches (will send them alone)
> - Remove all module parameters
> - Use the same MAJOR for both control device and VDUSE devices
> - Avoid eventfd cleanup in vduse_dev_release()
> 
> V6 to V7:
> - Export alloc_iova_fast()
> - Add get_config_size() callback
> - Add some patches to avoid trusting virtio devices
> - Add limited device emulation
> - Add some documents
> - Use workqueue to inject config irq
> - Add parameter on vq irq injecting
> - Rename vduse_domain_get_mapping_page() to vduse_domain_get_coherent_page()
> - Add WARN_ON() to catch message failure
> - Add some padding/reserved fields to uAPI structure
> - Fix some bugs
> - Rebase to vhost.git
> 
> V5 to V6:
> - Export receive_fd() instead of __receive_fd()
> - Factor out the unmapping logic of pa and va separately
> - Remove the logic of bounce page allocation in page fault handler
> - Use PAGE_SIZE as IOVA allocation granule
> - Add EPOLLOUT support
> - Enable setting API version in userspace
> - Fix some bugs
> 
> V4 to V5:
> - Remove the patch for irq binding
> - Use a single IOTLB for all types of mapping
> - Factor out vhost_vdpa_pa_map()
> - Add some sample codes in document
> - Use receive_fd_user() to pass file descriptor
> - Fix some bugs
> 
> V3 to V4:
> - Rebase to vhost.git
> - Split some patches
> - Add some documents
> - Use ioctl to inject interrupt rather than eventfd
> - Enable config interrupt support
> - Support binding irq to the specified cpu
> - Add two module parameters to limit bounce/iova size
> - Create char device rather than anon inode per vduse
> - Reuse vhost IOTLB for iova domain
> - Rework the message mechanism in control path
> 
> V2 to V3:
> - Rework the MMU-based IOMMU driver
> - Use the iova domain as iova allocator instead of genpool
> - Support transferring vma->vm_file in vhost-vdpa
> - Add SVA support in vhost-vdpa
> - Remove the patches on bounce pages reclaim
> 
> V1 to V2:
> - Add vhost-vdpa support
> - Add some documents
> - Based on the vdpa management tool
> - Introduce a workqueue for irq injection
> - Replace interval tree with array map to store the iova_map
> 
> Xie Yongji (10):
>   iova: Export alloc_iova_fast() and free_iova_fast();
>   file: Export receive_fd() to modules
>   eventfd: Increase the recursion depth of eventfd_signal()
>   vhost-iotlb: Add an opaque pointer for vhost IOTLB
>   vdpa: Add an opaque pointer for vdpa_config_ops.dma_map()
>   vdpa: factor out vhost_vdpa_pa_map() and vhost_vdpa_pa_unmap()
>   vdpa: Support transferring virtual addressing during DMA mapping
>   vduse: Implement an MMU-based IOMMU driver
>   vduse: Introduce VDUSE - vDPA Device in Userspace
>   Documentation: Add documentation for VDUSE
> 
>  Documentation/userspace-api/index.rst              |    1 +
>  Documentation/userspace-api/ioctl/ioctl-number.rst |    1 +
>  Documentation/userspace-api/vduse.rst              |  222 +++
>  drivers/iommu/iova.c                               |    2 +
>  drivers/vdpa/Kconfig                               |   10 +
>  drivers/vdpa/Makefile                              |    1 +
>  drivers/vdpa/ifcvf/ifcvf_main.c                    |    2 +-
>  drivers/vdpa/mlx5/net/mlx5_vnet.c                  |    2 +-
>  drivers/vdpa/vdpa.c                                |    9 +-
>  drivers/vdpa/vdpa_sim/vdpa_sim.c                   |    8 +-
>  drivers/vdpa/vdpa_user/Makefile                    |    5 +
>  drivers/vdpa/vdpa_user/iova_domain.c               |  545 ++++++++
>  drivers/vdpa/vdpa_user/iova_domain.h               |   73 +
>  drivers/vdpa/vdpa_user/vduse_dev.c                 | 1453 ++++++++++++++++++++
>  drivers/vdpa/virtio_pci/vp_vdpa.c                  |    2 +-
>  drivers/vhost/iotlb.c                              |   20 +-
>  drivers/vhost/vdpa.c                               |  148 +-
>  fs/eventfd.c                                       |    2 +-
>  fs/file.c                                          |    6 +
>  include/linux/eventfd.h                            |    5 +-
>  include/linux/file.h                               |    7 +-
>  include/linux/vdpa.h                               |   21 +-
>  include/linux/vhost_iotlb.h                        |    3 +
>  include/uapi/linux/vduse.h                         |  143 ++
>  24 files changed, 2641 insertions(+), 50 deletions(-)
>  create mode 100644 Documentation/userspace-api/vduse.rst
>  create mode 100644 drivers/vdpa/vdpa_user/Makefile
>  create mode 100644 drivers/vdpa/vdpa_user/iova_domain.c
>  create mode 100644 drivers/vdpa/vdpa_user/iova_domain.h
>  create mode 100644 drivers/vdpa/vdpa_user/vduse_dev.c
>  create mode 100644 include/uapi/linux/vduse.h
> 
> --
> 2.11.0

Hi, Yongji

Great work! Your method is really wise: it implements a software IOMMU
so that the data path gets processed efficiently by a userspace application.
Sorry, I've only just noticed your work and patches.


I was working on a similar thing, aiming to get the vhost-user-blk device
from the SPDK vhost-target exported as a local host kernel block device.
Its diagram is like this:


                                -----------------------------                
------------------------        |    -----------------      |    ---------------------------------------
|   <RunC Container>   |     <<<<<<<<| Shared-Memory |>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>        |
|       ---------      |     v  |    -----------------      |    |                            v        |
|       |dev/vdx|      |     v  |   <virtio-local-agent>    |    |      <Vhost-user Target>   v        |
------------+-----------     v  | ------------------------  |    |  --------------------------v------  |
            |                v  | |/dev/virtio-local-ctrl|  |    |  | unix socket |   |block driver |  |
            |                v  ------------+----------------    --------+--------------------v---------
            |                v              |                            |                    v
------------+----------------v--------------+----------------------------+--------------------v--------|
|    | block device |        v      |  Misc device |                     |                    v        |
|    -------+--------        v      --------+-------                     |                    v        |
|           |                v              |                            |                    v        |
| ----------+----------      v              |                            |                    v        |
| | virtio-blk driver |      v              |                            |                    v        |
| ----------+----------      v              |                            |                    v        |
|           | virtio bus     v              |                            |                    v        |
|   --------+---+-------     v              |                            |                    v        |
|               |            v              |                            |                    v        |
|               |            v              |                            |                    v        |
|     ----------+----------  v     ---------+-----------                 |                    v        |
|     | virtio-blk device |--<----| virtio-local driver |----------------<                    v        |
|     ----------+----------       ----------+-----------                                      v        |
|                                                                                    ---------+--------|
-------------------------------------------------------------------------------------| RNIC |--| PCIe |-
                                                                                     ----+---  | NVMe |
                                                                                         |     --------
                                                                                ---------+---------
                                                                                | Remote Storages |
                                                                                -------------------


I just drafted an initial proof-of-concept version. When I saw your RFC mail,
I thought the SPDK target might depend on your work, so I could
directly drop mine.
But after a glance at the RFC patches, it seems it is not so easy or
efficient for SPDK to leverage vduse.
(Please correct me if I have a wrong understanding of vduse. :) )

The large barrier is bounce-buffer mapping: SPDK requires hugepages
for NVMe over PCIe and RDMA, so taking some preallocated hugepages to
map as the bounce buffer is necessary. Otherwise it's hard to avoid an extra
memcpy from the bounce buffer to the hugepages.
If you can add an option to map hugepages as the bounce buffer,
then SPDK could also be a potential user of vduse.

It would be better if the SPDK vhost-target could leverage the datapath of
vduse directly and efficiently. Even though the control path is vdpa based,
we may work out a daemon as an agent to bridge the SPDK vhost-target with vduse.
Then users who have already deployed the SPDK vhost-target can smoothly run
such an agent daemon without code modification to the SPDK vhost-target itself.
(It is only nice-to-have for the SPDK vhost-target app, not mandatory for SPDK.) :)
At least, one small barrier remains that blocks a vhost-target from using the vduse
datapath efficiently:
- The current IO completion irq of vduse is ioctl based. If an option is added
to make it eventfd based, then the vhost-target can directly notify IO
completion via a negotiated eventfd.


Thanks
From Xiaodong
Yongji Xie June 29, 2021, 3:15 a.m. UTC | #6
On Mon, Jun 28, 2021 at 9:02 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> On Tue, Jun 15, 2021 at 10:13:21PM +0800, Xie Yongji wrote:
> > This series introduces a framework that makes it possible to implement
> > software-emulated vDPA devices in userspace. And to make it simple, the
> > emulated vDPA device's control path is handled in the kernel and only the
> > data path is implemented in the userspace.
>
> This looks interesting. Unfortunately I don't have enough time to do a
> full review, but I looked at the documentation and uapi header file to
> give feedback on the userspace ABI.
>

OK. Thanks for your comments. It's helpful!

Thanks,
Yongji
Jason Wang June 29, 2021, 4:10 a.m. UTC | #7
On 2021/6/28 1:54 PM, Liu, Xiaodong wrote:
>> Several issues:
>>
>> - VDUSE needs to limit the total size of the bounce buffers (64M if I was not
>> wrong). Does it work for SPDK?
> Yes, Jason. It is enough and works for SPDK.
> Since it's a kind of bounce buffer mainly for in-flight IO, a limited size like
> 64MB is enough.


Ok.


>
>> - VDUSE can use hugepages but I'm not sure we can mandate hugepages (or we
>> need to introduce new flags to support this)
> I share your worry; I'm afraid it is hard for a kernel module
> to directly preallocate hugepages internally.
> What I tried is this:
> 1. A simple agent daemon (representing one device) preallocates and maps
>      dozens of 2MB hugepages (like 64MB) for one device.
> 2. The daemon passes its mapping addr&len and the hugepage fd to the kernel
>      module through a newly created ioctl.
> 3. The kernel module remaps the hugepages inside the kernel.
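
A rough userspace-side sketch of steps 1 and 2 above follows, assuming a
hugetlbfs-capable kernel; the control-device ioctl, its number, and
struct agent_hugemem_reg are illustrative names invented for this sketch,
not an existing interface of the agent or of VDUSE.

    #define _GNU_SOURCE
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Illustrative interface between the agent daemon and the kernel module. */
    struct agent_hugemem_reg {
            uint64_t uaddr;         /* daemon's mapping address */
            uint64_t len;           /* mapping length, e.g. 64MB */
            int32_t  memfd;         /* hugepage-backed fd to share with the kernel */
            int32_t  pad;
    };
    #define AGENT_REG_HUGEMEM _IOW('L', 0x01, struct agent_hugemem_reg)

    static int share_hugepages_with_kernel(int ctrl_fd, size_t len)
    {
            /* Step 1: preallocate hugepage-backed memory (dozens of 2MB pages). */
            int memfd = memfd_create("bounce", MFD_HUGETLB);
            if (memfd < 0 || ftruncate(memfd, len) < 0)
                    return -1;
            void *uaddr = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, memfd, 0);
            if (uaddr == MAP_FAILED)
                    return -1;

            /* Step 2: pass the mapping addr&len and the hugepage fd to the kernel
             * module, which can then remap the same pages internally (step 3). */
            struct agent_hugemem_reg reg = {
                    .uaddr = (uintptr_t)uaddr,
                    .len   = len,
                    .memfd = memfd,
            };
            return ioctl(ctrl_fd, AGENT_REG_HUGEMEM, &reg);
    }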


Such a model should work, but the main "issue" is that it introduces
overhead in the case of vhost-vDPA.

Note that in the case of vhost-vDPA, we don't use a bounce buffer; the
userspace pages are shared directly.

And since DMA is not done per page, it prevents us from using tricks  
like vm_insert_page() in those cases.


> 4. The vhost-user target gets and maps the hugepage fd from the kernel module
>      in a vhost-user msg through a Unix Domain Socket cmsg.
> Then the kernel module and the target map the same hugepage-based
> bounce buffer for in-flight IO.
>
> If there is an option in VDUSE to map userspace preallocated memory, then
> VDUSE should be able to mandate it even if it is hugepage based.
>

As above, this requires some kind of re-design since VDUSE depends on  
the model of mmap(MAP_SHARED) instead of umem registering.

Thanks
Jason Wang June 29, 2021, 4:12 a.m. UTC | #8
On 2021/6/28 6:32 PM, Yongji Xie wrote:
>> The large barrier is bounce-buffer mapping: SPDK requires hugepages
>> for NVMe over PCIe and RDMA, so taking some preallocated hugepages to
>> map as the bounce buffer is necessary. Otherwise it's hard to avoid an extra
>> memcpy from the bounce buffer to the hugepages.
>> If you can add an option to map hugepages as the bounce buffer,
>> then SPDK could also be a potential user of vduse.
>>
> I think we can support registering user space memory for bounce-buffer
> use like XDP does. But this needs to pin the pages, so I didn't
> consider it in this initial version.
>

Note that userspace should be unaware of the existence of the bounce buffer.

So we need to think carefully about mmap() vs umem registering.

Thanks
Yongji Xie June 29, 2021, 6:40 a.m. UTC | #9
On Tue, Jun 29, 2021 at 12:13 PM Jason Wang <jasowang@redhat.com> wrote:
>
>
> On 2021/6/28 6:32 PM, Yongji Xie wrote:
> >> The large barrier is bounce-buffer mapping: SPDK requires hugepages
> >> for NVMe over PCIe and RDMA, so taking some preallocated hugepages to
> >> map as the bounce buffer is necessary. Otherwise it's hard to avoid an extra
> >> memcpy from the bounce buffer to the hugepages.
> >> If you can add an option to map hugepages as the bounce buffer,
> >> then SPDK could also be a potential user of vduse.
> >>
> > I think we can support registering user space memory for bounce-buffer
> > use like XDP does. But this needs to pin the pages, so I didn't
> > consider it in this initial version.
> >
>
> Note that userspace should be unaware of the existence of the bounce buffer.
>

If so, it might be hard to use umem, because we can't use umem for
coherent mappings, which need physically contiguous address space.

Thanks,
Yongji
Jason Wang June 29, 2021, 7:33 a.m. UTC | #10
On 2021/6/29 2:40 PM, Yongji Xie wrote:
> On Tue, Jun 29, 2021 at 12:13 PM Jason Wang <jasowang@redhat.com> wrote:
>>
>> On 2021/6/28 6:32 PM, Yongji Xie wrote:
>>>> The large barrier is bounce-buffer mapping: SPDK requires hugepages
>>>> for NVMe over PCIe and RDMA, so taking some preallocated hugepages to
>>>> map as the bounce buffer is necessary. Otherwise it's hard to avoid an extra
>>>> memcpy from the bounce buffer to the hugepages.
>>>> If you can add an option to map hugepages as the bounce buffer,
>>>> then SPDK could also be a potential user of vduse.
>>>>
>>> I think we can support registering user space memory for bounce-buffer
>>> use like XDP does. But this needs to pin the pages, so I didn't
>>> consider it in this initial version.
>>>
>> Note that userspace should be unaware of the existence of the bounce buffer.
>>
> If so, it might be hard to use umem, because we can't use umem for
> coherent mappings, which need physically contiguous address space.
>
> Thanks,
> Yongji


We probably can use umem for memory other than the virtqueue (still via 
mmap()).

Thanks
Liu Xiaodong June 29, 2021, 7:56 a.m. UTC | #11
>-----Original Message-----
>From: Jason Wang <jasowang@redhat.com>
>Sent: Tuesday, June 29, 2021 12:11 PM
>To: Liu, Xiaodong <xiaodong.liu@intel.com>; Xie Yongji
><xieyongji@bytedance.com>; mst@redhat.com; stefanha@redhat.com;
>sgarzare@redhat.com; parav@nvidia.com; hch@infradead.org;
>christian.brauner@canonical.com; rdunlap@infradead.org; willy@infradead.org;
>viro@zeniv.linux.org.uk; axboe@kernel.dk; bcrl@kvack.org; corbet@lwn.net;
>mika.penttila@nextfour.com; dan.carpenter@oracle.com; joro@8bytes.org;
>gregkh@linuxfoundation.org
>Cc: songmuchun@bytedance.com; virtualization@lists.linux-foundation.org;
>netdev@vger.kernel.org; kvm@vger.kernel.org; linux-fsdevel@vger.kernel.org;
>iommu@lists.linux-foundation.org; linux-kernel@vger.kernel.org
>Subject: Re: [PATCH v8 00/10] Introduce VDUSE - vDPA Device in Userspace
>
>
>On 2021/6/28 1:54 PM, Liu, Xiaodong wrote:
>>> Several issues:
>>>
>>> - VDUSE needs to limit the total size of the bounce buffers (64M if I was not
>>> wrong). Does it work for SPDK?
>> Yes, Jason. It is enough and works for SPDK.
>> Since it's a kind of bounce buffer mainly for in-flight IO, a limited size like
>> 64MB is enough.
>
>
>Ok.
>
>
>>
>>> - VDUSE can use hugepages but I'm not sure we can mandate hugepages (or we
>>> need to introduce new flags to support this)
>> I share your worry; I'm afraid it is hard for a kernel module
>> to directly preallocate hugepages internally.
>> What I tried is this:
>> 1. A simple agent daemon (representing one device) preallocates and maps
>>      dozens of 2MB hugepages (like 64MB) for one device.
>> 2. The daemon passes its mapping addr&len and the hugepage fd to the kernel
>>      module through a newly created ioctl.
>> 3. The kernel module remaps the hugepages inside the kernel.
>
>
>Such a model should work, but the main "issue" is that it introduces
>overhead in the case of vhost-vDPA.
>
>Note that in the case of vhost-vDPA, we don't use a bounce buffer; the
>userspace pages are shared directly.
>
>And since DMA is not done per page, it prevents us from using tricks
>like vm_insert_page() in those cases.
>

Yes, really, it's a problem to handle the vhost-vDPA case.
But there are already several solutions to get a VM served, like vhost-user and
vfio-user, so at least for SPDK, it won't serve VMs through VDUSE. If a user
still wants to do that, then the user should tolerate the introduced overhead.

In other words, a software backend like SPDK will appreciate the virtio
datapath of VDUSE for serving the local host instead of a VM. That's why I also drafted
a "virtio-local" to bridge a vhost-user target and the local host kernel virtio-blk.

>
>> 4. The vhost-user target gets and maps the hugepage fd from the kernel module
>>      in a vhost-user msg through a Unix Domain Socket cmsg.
>> Then the kernel module and the target map the same hugepage-based
>> bounce buffer for in-flight IO.
>>
>> If there is an option in VDUSE to map userspace preallocated memory, then
>> VDUSE should be able to mandate it even if it is hugepage based.
>>
>
>As above, this requires some kind of re-design since VDUSE depends on
>the model of mmap(MAP_SHARED) instead of umem registering.

Got it, Jason; this may be hard for the current version of VDUSE.
Maybe we can consider these options after VDUSE is merged.

If the VDUSE datapath could be directly leveraged by a vhost-user target,
its value would be propagated immediately.

>
>Thanks
Yongji Xie June 29, 2021, 8:14 a.m. UTC | #12
On Tue, Jun 29, 2021 at 3:56 PM Liu, Xiaodong <xiaodong.liu@intel.com> wrote:
>
>
>
> >-----Original Message-----
> >From: Jason Wang <jasowang@redhat.com>
> >Sent: Tuesday, June 29, 2021 12:11 PM
> >To: Liu, Xiaodong <xiaodong.liu@intel.com>; Xie Yongji
> ><xieyongji@bytedance.com>; mst@redhat.com; stefanha@redhat.com;
> >sgarzare@redhat.com; parav@nvidia.com; hch@infradead.org;
> >christian.brauner@canonical.com; rdunlap@infradead.org; willy@infradead.org;
> >viro@zeniv.linux.org.uk; axboe@kernel.dk; bcrl@kvack.org; corbet@lwn.net;
> >mika.penttila@nextfour.com; dan.carpenter@oracle.com; joro@8bytes.org;
> >gregkh@linuxfoundation.org
> >Cc: songmuchun@bytedance.com; virtualization@lists.linux-foundation.org;
> >netdev@vger.kernel.org; kvm@vger.kernel.org; linux-fsdevel@vger.kernel.org;
> >iommu@lists.linux-foundation.org; linux-kernel@vger.kernel.org
> >Subject: Re: [PATCH v8 00/10] Introduce VDUSE - vDPA Device in Userspace
> >
> >
> >On 2021/6/28 1:54 PM, Liu, Xiaodong wrote:
> >>> Several issues:
> >>>
> >>> - VDUSE needs to limit the total size of the bounce buffers (64M if I was not
> >>> wrong). Does it work for SPDK?
> >> Yes, Jason. It is enough and works for SPDK.
> >> Since it's a kind of bounce buffer mainly for in-flight IO, a limited size like
> >> 64MB is enough.
> >
> >
> >Ok.
> >
> >
> >>
> >>> - VDUSE can use hugepages but I'm not sure we can mandate hugepages (or we
> >>> need to introduce new flags to support this)
> >> I share your worry; I'm afraid it is hard for a kernel module
> >> to directly preallocate hugepages internally.
> >> What I tried is this:
> >> 1. A simple agent daemon (representing one device) preallocates and maps
> >>      dozens of 2MB hugepages (like 64MB) for one device.
> >> 2. The daemon passes its mapping addr&len and the hugepage fd to the kernel
> >>      module through a newly created ioctl.
> >> 3. The kernel module remaps the hugepages inside the kernel.
> >
> >
> >Such a model should work, but the main "issue" is that it introduces
> >overhead in the case of vhost-vDPA.
> >
> >Note that in the case of vhost-vDPA, we don't use a bounce buffer; the
> >userspace pages are shared directly.
> >
> >And since DMA is not done per page, it prevents us from using tricks
> >like vm_insert_page() in those cases.
> >
>
> Yes, really, it's a problem to handle the vhost-vDPA case.
> But there are already several solutions to get a VM served, like vhost-user and
> vfio-user, so at least for SPDK, it won't serve VMs through VDUSE. If a user
> still wants to do that, then the user should tolerate the introduced overhead.
>
> In other words, a software backend like SPDK will appreciate the virtio
> datapath of VDUSE for serving the local host instead of a VM. That's why I also drafted
> a "virtio-local" to bridge a vhost-user target and the local host kernel virtio-blk.
>
> >
> >> 4. The vhost-user target gets and maps the hugepage fd from the kernel module
> >>      in a vhost-user msg through a Unix Domain Socket cmsg.
> >> Then the kernel module and the target map the same hugepage-based
> >> bounce buffer for in-flight IO.
> >>
> >> If there is an option in VDUSE to map userspace preallocated memory, then
> >> VDUSE should be able to mandate it even if it is hugepage based.
> >>
> >
> >As above, this requires some kind of re-design since VDUSE depends on
> >the model of mmap(MAP_SHARED) instead of umem registering.
>
> Got it, Jason; this may be hard for the current version of VDUSE.
> Maybe we can consider these options after VDUSE is merged.
>
> If the VDUSE datapath could be directly leveraged by a vhost-user target,
> its value would be propagated immediately.
>

Agreed!

Thanks,
Yongji