[V3,0/7] mdev based hardware virtio offloading support

Message ID 20191011081557.28302-1-jasowang@redhat.com

Message

Jason Wang Oct. 11, 2019, 8:15 a.m. UTC
Hi all:

There is hardware that can do virtio datapath offloading while having
its own control path. This series tries to implement an mdev-based
unified API to support using the kernel virtio driver to drive those
devices. This is done by introducing a new mdev transport for virtio
(virtio_mdev) that registers itself as a new kind of mdev driver. It
then provides a unified way for the kernel virtio driver to talk with
the mdev device implementation.
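
To give a feel for the shape of the transport, the driver side looks
roughly like the sketch below. This is illustration only: the
virtio_mdev_device wrapper, the mdev_from_dev() helper and the config
ops are assumptions, and error/release handling is simplified; see
patch 6 for the real code.

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/mdev.h>
#include <linux/virtio.h>
#include <linux/virtio_config.h>

struct virtio_mdev_device {		/* illustrative wrapper, not the real struct */
	struct virtio_device vdev;
	struct mdev_device *mdev;
};

/* get/set features, status, config, find_vqs, ...: see patch 6 */
static const struct virtio_config_ops virtio_mdev_config_ops;

static int virtio_mdev_probe(struct device *dev)
{
	struct virtio_mdev_device *vm_dev;
	int err;

	vm_dev = kzalloc(sizeof(*vm_dev), GFP_KERNEL);
	if (!vm_dev)
		return -ENOMEM;

	vm_dev->mdev = mdev_from_dev(dev);	/* helper name assumed */
	vm_dev->vdev.dev.parent = dev;
	vm_dev->vdev.config = &virtio_mdev_config_ops;
	dev_set_drvdata(dev, vm_dev);

	/* hand the device to the virtio core; virtio-net etc. bind on top */
	err = register_virtio_device(&vm_dev->vdev);
	if (err)
		kfree(vm_dev);	/* real code needs a proper release callback */
	return err;
}

static void virtio_mdev_remove(struct device *dev)
{
	struct virtio_mdev_device *vm_dev = dev_get_drvdata(dev);

	unregister_virtio_device(&vm_dev->vdev);
}

static struct mdev_driver virtio_mdev_driver = {
	.name	= "virtio_mdev",
	.probe	= virtio_mdev_probe,
	.remove	= virtio_mdev_remove,
	/* plus, with this series, an id_table matching MDEV_ID_VIRTIO */
};

static int __init virtio_mdev_init(void)
{
	return mdev_register_driver(&virtio_mdev_driver, THIS_MODULE);
}
module_init(virtio_mdev_init);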

Though the series only contains kernel driver support, the goal is to
make the transport generic enough to support userspace drivers. This
means vhost-mdev[1] could be built on top as well by reusing the
transport.

A sample driver is also implemented which simulates a virtio-net
loopback ethernet device on top of vringh + workqueue. This could be
used as a reference implementation for real hardware drivers.
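
For reference, the core of the loopback idea looks roughly like this.
A sketch only: the vringh calls are existing kernel APIs, but struct
mvnet_info, the ring indexes and the fixed bounce buffer are made up
here and are not the actual mvnet code.

#include <linux/vringh.h>
#include <linux/workqueue.h>

struct mvnet_info {			/* illustrative, not the real struct */
	struct vringh vrh[2];		/* 0 = RX, 1 = TX from the driver's view */
	struct work_struct work;
	u8 buf[2048];			/* bounce buffer for one packet */
};

static void mvnet_loopback_work(struct work_struct *work)
{
	struct mvnet_info *info = container_of(work, struct mvnet_info, work);
	struct vringh *txvrh = &info->vrh[1], *rxvrh = &info->vrh[0];
	struct vringh_kiov riov, wiov;
	struct kvec rkvec[8], wkvec[8];
	u16 txhead, rxhead;
	ssize_t len;

	vringh_kiov_init(&riov, rkvec, ARRAY_SIZE(rkvec));
	vringh_kiov_init(&wiov, wkvec, ARRAY_SIZE(wkvec));

	for (;;) {
		/* readable buffer the virtio-net driver posted on its TX queue */
		if (vringh_getdesc_kern(txvrh, &riov, NULL, &txhead,
					GFP_ATOMIC) <= 0)
			break;

		len = vringh_iov_pull_kern(&riov, info->buf, sizeof(info->buf));
		if (len < 0)
			break;

		/* writable buffer the driver posted on its RX queue */
		if (vringh_getdesc_kern(rxvrh, NULL, &wiov, &rxhead,
					GFP_ATOMIC) <= 0) {
			/* no RX buffer available: drop the packet */
			vringh_complete_kern(txvrh, txhead, 0);
			continue;
		}

		vringh_iov_push_kern(&wiov, info->buf, len);

		vringh_complete_kern(txvrh, txhead, 0);
		vringh_complete_kern(rxvrh, rxhead, len);
		/* a real device would raise the used-ring interrupts here */
	}
}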

Considering the mdev framework only supports VFIO devices and drivers
right now, this series also extends it to support other types. This is
done by introducing a class id for the device and pairing it with the
id_table claimed by the driver. On top of that, this series also
decouples the device-specific parent ops from the common ones.
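
Roughly, the pairing works like the sketch below. mdev_set_class(),
struct mdev_class_id and MDEV_ID_VIRTIO follow the naming used in this
series' changelog, but the exact signatures and field names here are
illustrative, not the final code.

/* device side: the parent tags each instance with a class id at create time */
static int mvnet_create(struct kobject *kobj, struct mdev_device *mdev)
{
	mdev_set_class(mdev, MDEV_ID_VIRTIO);
	/* ... allocate per-device state, install the device ops, etc. ... */
	return 0;
}

/* driver side: an mdev driver declares the class ids it can drive */
static const struct mdev_class_id virtio_mdev_match[] = {
	{ MDEV_ID_VIRTIO },
	{ 0 },
};

/* conceptually, the mdev bus match()/uevent code then pairs the two */
static bool mdev_class_matches(u16 dev_class_id,
			       const struct mdev_class_id *ids)
{
	for (; ids && ids->id; ids++)
		if (ids->id == dev_class_id)
			return true;
	return false;
}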

A pktgen test was done with virtio-net + the mvnet loopback device.

Please review.

[1] https://lkml.org/lkml/2019/9/26/15

Changes from V2:

- fail when class_id is not specified
- drop the vringh patch
- match the doc to the code
- tweak the commit log
- move device_ops from parent to mdev device
- remove the unused MDEV_ID_VHOST

Changes from V1:

- move virtio_mdev.c to drivers/virtio
- store class_id in mdev_device instead of mdev_parent
- store device_ops in mdev_device instead of mdev_parent
- reorder the patch, vringh fix comes first
- really silence compile warnings
- really switch to use u16 for class_id
- uevent and modpost support for mdev class_id
- various tweaks per comments from Parav

Changes from RFC-V2:

- silence compile warnings on some specific configurations
- use u16 instead of u8 for class id
- reserve MDEV_ID_VHOST for future vhost-mdev work
- introduce a "virtio" type for mvnet and leave the "vhost" type for
  future work
- add entries in MAINTAINERS
- tweaks and typo fixes in the commit log

Changes from RFC-V1:

- rename device id to class id
- add docs for class id and device specific ops (device_ops)
- split device_ops into separate headers
- drop the mdev_set_dma_ops()
- use device_ops to implement the transport API, so it is no longer a
  part of the UAPI
- use GFP_ATOMIC in mvnet sample device and other tweaks
- set_vring_base/get_vring_base support for mvnet device

Jason Wang (7):
  mdev: class id support
  mdev: bus uevent support
  modpost: add support for mdev class id
  mdev: introduce device specific ops
  mdev: introduce virtio device and its device ops
  virtio: introduce a mdev based transport
  docs: sample driver to demonstrate how to implement virtio-mdev
    framework

 .../driver-api/vfio-mediated-device.rst       |  25 +-
 MAINTAINERS                                   |   2 +
 drivers/gpu/drm/i915/gvt/kvmgt.c              |  17 +-
 drivers/s390/cio/vfio_ccw_ops.c               |  17 +-
 drivers/s390/crypto/vfio_ap_ops.c             |  13 +-
 drivers/vfio/mdev/mdev_core.c                 |  18 +
 drivers/vfio/mdev/mdev_driver.c               |  22 +
 drivers/vfio/mdev/mdev_private.h              |   2 +
 drivers/vfio/mdev/vfio_mdev.c                 |  45 +-
 drivers/virtio/Kconfig                        |   7 +
 drivers/virtio/Makefile                       |   1 +
 drivers/virtio/virtio_mdev.c                  | 416 +++++++++++
 include/linux/mdev.h                          |  49 +-
 include/linux/mod_devicetable.h               |   8 +
 include/linux/vfio_mdev.h                     |  52 ++
 include/linux/virtio_mdev.h                   | 148 ++++
 samples/Kconfig                               |   7 +
 samples/vfio-mdev/Makefile                    |   1 +
 samples/vfio-mdev/mbochs.c                    |  19 +-
 samples/vfio-mdev/mdpy.c                      |  20 +-
 samples/vfio-mdev/mtty.c                      |  17 +-
 samples/vfio-mdev/mvnet.c                     | 691 ++++++++++++++++++
 scripts/mod/devicetable-offsets.c             |   3 +
 scripts/mod/file2alias.c                      |  10 +
 24 files changed, 1523 insertions(+), 87 deletions(-)
 create mode 100644 drivers/virtio/virtio_mdev.c
 create mode 100644 include/linux/vfio_mdev.h
 create mode 100644 include/linux/virtio_mdev.h
 create mode 100644 samples/vfio-mdev/mvnet.c

Comments

Stefan Hajnoczi Oct. 14, 2019, 5:49 p.m. UTC | #1
On Fri, Oct 11, 2019 at 04:15:50PM +0800, Jason Wang wrote:
> There is hardware that can do virtio datapath offloading while having
> its own control path. This series tries to implement an mdev-based
> unified API to support using the kernel virtio driver to drive those
> devices. This is done by introducing a new mdev transport for virtio
> (virtio_mdev) that registers itself as a new kind of mdev driver. It
> then provides a unified way for the kernel virtio driver to talk with
> the mdev device implementation.
> 
> Though the series only contains kernel driver support, the goal is to
> make the transport generic enough to support userspace drivers. This
> means vhost-mdev[1] could be built on top as well by reusing the
> transport.
> 
> A sample driver is also implemented which simulates a virtio-net
> loopback ethernet device on top of vringh + workqueue. This could be
> used as a reference implementation for real hardware drivers.
> 
> Considering the mdev framework only supports VFIO devices and drivers
> right now, this series also extends it to support other types. This is
> done by introducing a class id for the device and pairing it with the
> id_table claimed by the driver. On top of that, this series also
> decouples the device-specific parent ops from the common ones.

I was curious so I took a quick look and posted comments.

I guess this driver runs inside the guest since it registers virtio
devices?

If this is used with physical PCI devices that support datapath
offloading then how are physical devices presented to the guest without
SR-IOV?

Stefan
Jason Wang Oct. 15, 2019, 3:37 a.m. UTC | #2
On 2019/10/15 1:49 AM, Stefan Hajnoczi wrote:
> On Fri, Oct 11, 2019 at 04:15:50PM +0800, Jason Wang wrote:
>> There is hardware that can do virtio datapath offloading while having
>> its own control path. This series tries to implement an mdev-based
>> unified API to support using the kernel virtio driver to drive those
>> devices. This is done by introducing a new mdev transport for virtio
>> (virtio_mdev) that registers itself as a new kind of mdev driver. It
>> then provides a unified way for the kernel virtio driver to talk with
>> the mdev device implementation.
>>
>> Though the series only contains kernel driver support, the goal is to
>> make the transport generic enough to support userspace drivers. This
>> means vhost-mdev[1] could be built on top as well by reusing the
>> transport.
>>
>> A sample driver is also implemented which simulates a virtio-net
>> loopback ethernet device on top of vringh + workqueue. This could be
>> used as a reference implementation for real hardware drivers.
>>
>> Considering the mdev framework only supports VFIO devices and drivers
>> right now, this series also extends it to support other types. This is
>> done by introducing a class id for the device and pairing it with the
>> id_table claimed by the driver. On top of that, this series also
>> decouples the device-specific parent ops from the common ones.
> I was curious so I took a quick look and posted comments.
>
> I guess this driver runs inside the guest since it registers virtio
> devices?


It could run in either the guest or the host, but the main focus is to
run it in the host so that we can use virtio drivers in containers.


>
> If this is used with physical PCI devices that support datapath
> offloading then how are physical devices presented to the guest without
> SR-IOV?


We will do control path mediation through vhost-mdev[1] and
vhost-vfio[2]. Then we will present a fully virtio-compatible ethernet
device to the guest.

SR-IOV is not a must; any mdev device that implements the API defined in
patch 5 can be used by this framework.

Thanks

[1] https://lkml.org/lkml/2019/9/26/15

[2] https://patchwork.ozlabs.org/cover/984763/


>
> Stefan
Stefan Hajnoczi Oct. 15, 2019, 2:37 p.m. UTC | #3
On Tue, Oct 15, 2019 at 11:37:17AM +0800, Jason Wang wrote:
> 
> On 2019/10/15 1:49 AM, Stefan Hajnoczi wrote:
> > On Fri, Oct 11, 2019 at 04:15:50PM +0800, Jason Wang wrote:
> > > There is hardware that can do virtio datapath offloading while having
> > > its own control path. This series tries to implement an mdev-based
> > > unified API to support using the kernel virtio driver to drive those
> > > devices. This is done by introducing a new mdev transport for virtio
> > > (virtio_mdev) that registers itself as a new kind of mdev driver. It
> > > then provides a unified way for the kernel virtio driver to talk with
> > > the mdev device implementation.
> > > 
> > > Though the series only contains kernel driver support, the goal is to
> > > make the transport generic enough to support userspace drivers. This
> > > means vhost-mdev[1] could be built on top as well by reusing the
> > > transport.
> > > 
> > > A sample driver is also implemented which simulates a virtio-net
> > > loopback ethernet device on top of vringh + workqueue. This could be
> > > used as a reference implementation for real hardware drivers.
> > > 
> > > Considering the mdev framework only supports VFIO devices and drivers
> > > right now, this series also extends it to support other types. This is
> > > done by introducing a class id for the device and pairing it with the
> > > id_table claimed by the driver. On top of that, this series also
> > > decouples the device-specific parent ops from the common ones.
> > I was curious so I took a quick look and posted comments.
> > 
> > I guess this driver runs inside the guest since it registers virtio
> > devices?
> 
> 
> It could run in either the guest or the host, but the main focus is to run
> it in the host so that we can use virtio drivers in containers.
> 
> 
> > 
> > If this is used with physical PCI devices that support datapath
> > offloading then how are physical devices presented to the guest without
> > SR-IOV?
> 
> 
> We will do control path mediation through vhost-mdev[1] and vhost-vfio[2].
> Then we will present a fully virtio-compatible ethernet device to the guest.
> 
> SR-IOV is not a must; any mdev device that implements the API defined in
> patch 5 can be used by this framework.

What I'm trying to understand is: if you want to present a virtio-pci
device to the guest (e.g. using vhost-mdev or vhost-vfio), then how is
that related to this patch series?

Does this mean this patch series is useful mostly for presenting virtio
devices to containers or the host?

Stefan
Jason Wang Oct. 17, 2019, 1:42 a.m. UTC | #4
On 2019/10/15 10:37 PM, Stefan Hajnoczi wrote:
> On Tue, Oct 15, 2019 at 11:37:17AM +0800, Jason Wang wrote:
>> On 2019/10/15 1:49 AM, Stefan Hajnoczi wrote:
>>> On Fri, Oct 11, 2019 at 04:15:50PM +0800, Jason Wang wrote:
>>>> There is hardware that can do virtio datapath offloading while having
>>>> its own control path. This series tries to implement an mdev-based
>>>> unified API to support using the kernel virtio driver to drive those
>>>> devices. This is done by introducing a new mdev transport for virtio
>>>> (virtio_mdev) that registers itself as a new kind of mdev driver. It
>>>> then provides a unified way for the kernel virtio driver to talk with
>>>> the mdev device implementation.
>>>>
>>>> Though the series only contains kernel driver support, the goal is to
>>>> make the transport generic enough to support userspace drivers. This
>>>> means vhost-mdev[1] could be built on top as well by reusing the
>>>> transport.
>>>>
>>>> A sample driver is also implemented which simulates a virtio-net
>>>> loopback ethernet device on top of vringh + workqueue. This could be
>>>> used as a reference implementation for real hardware drivers.
>>>>
>>>> Considering the mdev framework only supports VFIO devices and drivers
>>>> right now, this series also extends it to support other types. This is
>>>> done by introducing a class id for the device and pairing it with the
>>>> id_table claimed by the driver. On top of that, this series also
>>>> decouples the device-specific parent ops from the common ones.
>>> I was curious so I took a quick look and posted comments.
>>>
>>> I guess this driver runs inside the guest since it registers virtio
>>> devices?
>>
>> It could run in either the guest or the host, but the main focus is to run
>> it in the host so that we can use virtio drivers in containers.
>>
>>
>>> If this is used with physical PCI devices that support datapath
>>> offloading then how are physical devices presented to the guest without
>>> SR-IOV?
>>
>> We will do control path mediation through vhost-mdev[1] and vhost-vfio[2].
>> Then we will present a fully virtio-compatible ethernet device to the guest.
>>
>> SR-IOV is not a must; any mdev device that implements the API defined in
>> patch 5 can be used by this framework.
> What I'm trying to understand is: if you want to present a virtio-pci
> device to the guest (e.g. using vhost-mdev or vhost-vfio), then how is
> that related to this patch series?


This series introduces some infrastructure that would be used by vhost-mdev:

1) allow new types of mdev devices/drivers other than vfio (through
class_id and device ops)

2) a set of virtio specific callbacks, defined in patch 5, that will be
used by both vhost-mdev and virtio-mdev

Then vhost-mdev can be implemented on top: a new mdev class id that
reuses the callbacks defined in 2). This way the parent can provide a
single set of callbacks (device ops) for both the kernel virtio driver
(through virtio-mdev) and a userspace virtio driver (through vhost-mdev).
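
Very roughly, the shape of those callbacks is something like the sketch
below. This is abridged and approximate; the callback names here may
not match patch 5 exactly.

struct virtio_mdev_device_ops {
	/* vring setup, programmed by whichever bus driver bound to the mdev */
	int	(*set_vq_address)(struct mdev_device *mdev, u16 idx,
				  u64 desc_addr, u64 avail_addr, u64 used_addr);
	void	(*set_vq_num)(struct mdev_device *mdev, u16 idx, u32 num);
	void	(*set_vq_ready)(struct mdev_device *mdev, u16 idx, bool ready);
	void	(*kick_vq)(struct mdev_device *mdev, u16 idx);

	/* feature negotiation and device status */
	u64	(*get_features)(struct mdev_device *mdev);
	int	(*set_features)(struct mdev_device *mdev, u64 features);
	u8	(*get_status)(struct mdev_device *mdev);
	void	(*set_status)(struct mdev_device *mdev, u8 status);

	/* config space access */
	void	(*get_config)(struct mdev_device *mdev, unsigned int offset,
			      void *buf, unsigned int len);
	void	(*set_config)(struct mdev_device *mdev, unsigned int offset,
			      const void *buf, unsigned int len);
};

virtio-mdev maps its virtio_config_ops onto these calls, and vhost-mdev
would map the vhost requests onto the same set, so the parent only needs
to implement them once.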


>
> Does this mean this patch series is useful mostly for presenting virtio
> devices to containers or the host?


Patch 6 is mainly for the bare metal or container use case, though it
could be used in a guest as well. Patch 7 is a sample virtio mdev device
implementation. Patches 1-5 are the infrastructure for implementing
types other than vfio; the first user is virtio-mdev, then Tiwei's
vhost-mdev and Parav's mlx5 mdev.

Thanks


>
> Stefan
Stefan Hajnoczi Oct. 17, 2019, 9:43 a.m. UTC | #5
On Thu, Oct 17, 2019 at 09:42:53AM +0800, Jason Wang wrote:
> 
> On 2019/10/15 10:37 PM, Stefan Hajnoczi wrote:
> > On Tue, Oct 15, 2019 at 11:37:17AM +0800, Jason Wang wrote:
> > > On 2019/10/15 1:49 AM, Stefan Hajnoczi wrote:
> > > > On Fri, Oct 11, 2019 at 04:15:50PM +0800, Jason Wang wrote:
> > > > > There is hardware that can do virtio datapath offloading while having
> > > > > its own control path. This series tries to implement an mdev-based
> > > > > unified API to support using the kernel virtio driver to drive those
> > > > > devices. This is done by introducing a new mdev transport for virtio
> > > > > (virtio_mdev) that registers itself as a new kind of mdev driver. It
> > > > > then provides a unified way for the kernel virtio driver to talk with
> > > > > the mdev device implementation.
> > > > > 
> > > > > Though the series only contains kernel driver support, the goal is to
> > > > > make the transport generic enough to support userspace drivers. This
> > > > > means vhost-mdev[1] could be built on top as well by reusing the
> > > > > transport.
> > > > > 
> > > > > A sample driver is also implemented which simulates a virtio-net
> > > > > loopback ethernet device on top of vringh + workqueue. This could be
> > > > > used as a reference implementation for real hardware drivers.
> > > > > 
> > > > > Considering the mdev framework only supports VFIO devices and drivers
> > > > > right now, this series also extends it to support other types. This is
> > > > > done by introducing a class id for the device and pairing it with the
> > > > > id_table claimed by the driver. On top of that, this series also
> > > > > decouples the device-specific parent ops from the common ones.
> > > > I was curious so I took a quick look and posted comments.
> > > > 
> > > > I guess this driver runs inside the guest since it registers virtio
> > > > devices?
> > > 
> > > It could run in either the guest or the host, but the main focus is to run
> > > it in the host so that we can use virtio drivers in containers.
> > > 
> > > 
> > > > If this is used with physical PCI devices that support datapath
> > > > offloading then how are physical devices presented to the guest without
> > > > SR-IOV?
> > > 
> > > We will do control path mediation through vhost-mdev[1] and vhost-vfio[2].
> > > Then we will present a fully virtio-compatible ethernet device to the guest.
> > >
> > > SR-IOV is not a must; any mdev device that implements the API defined in
> > > patch 5 can be used by this framework.
> > What I'm trying to understand is: if you want to present a virtio-pci
> > device to the guest (e.g. using vhost-mdev or vhost-vfio), then how is
> > that related to this patch series?
> 
> 
> This series introduces some infrastructure that would be used by vhost-mdev:
> 
> 1) allow new types of mdev devices/drivers other than vfio (through class_id
> and device ops)
> 
> 2) a set of virtio specific callbacks, defined in patch 5, that will be used
> by both vhost-mdev and virtio-mdev
> 
> Then vhost-mdev can be implemented on top: a new mdev class id that reuses
> the callbacks defined in 2). This way the parent can provide a single set of
> callbacks (device ops) for both the kernel virtio driver (through
> virtio-mdev) and a userspace virtio driver (through vhost-mdev).

Okay, thanks for explaining!

Stefan