Message ID | 1512444796-30615-3-git-send-email-wei.w.wang@intel.com (mailing list archive)
---|---
State | New, archived
On Tue, Dec 05, 2017 at 11:33:11AM +0800, Wei Wang wrote:
> Add the vhost-pci-net device emulation. The device uses bar 2 to expose
> the remote VM's memory to the guest. The first 4KB of the the bar area
> stores the metadata which describes the remote memory and vring info.

This device looks like the beginning of a new "vhost-pci" virtio device
type. There are layering violations:

1. This has nothing to do with virtio-net or networking, it's purely
vhost-pci. Why is it called vhost-pci-net instead of vhost-pci?

2. VirtIODevice does not know about PCI. It should work over virtio-ccw
or virtio-mmio. This patch talks about BARs inside a VirtIODevice so
there is a problem here.

I'm concerned that there is no clear architecture and elements of the
virtio architecture are being mixed up with no justification.

Can you explain what you're trying to do?

Please post a specification for the vhost-pci device so the operation of
the device can be discussed and is clear to reviewers.
On Tue, Dec 05, 2017 at 02:59:50PM +0000, Stefan Hajnoczi wrote:
> On Tue, Dec 05, 2017 at 11:33:11AM +0800, Wei Wang wrote:
> > Add the vhost-pci-net device emulation. The device uses bar 2 to expose
> > the remote VM's memory to the guest. The first 4KB of the the bar area
> > stores the metadata which describes the remote memory and vring info.
>
> This device looks like the beginning of a new "vhost-pci" virtio device
> type. There are layering violations:
>
> 1. This has nothing to do with virtio-net or networking, it's purely
> vhost-pci. Why is it called vhost-pci-net instead of vhost-pci?
>
> 2. VirtIODevice does not know about PCI. It should work over virtio-ccw
> or virtio-mmio. This patch talks about BARs inside a VirtIODevice so
> there is a problem here.
>
> I'm concerned that there is no clear architecture and elements of the
> virtio architecture are being mixed up with no justification.
>
> Can you explain what you're trying to do?

A specification was posted here:
https://lists.oasis-open.org/archives/virtio-comment/201606/msg00009.html

I gather there have been some changes since.

> Please post a specification for the vhost-pci device so the operation of
> the device can be discussed and is clear to reviewers.

I'm not sure a full respin of the spec is strictly required at this
point. A list of differences with the last spec posted would be
appreciated.
On Tue, Dec 05, 2017 at 02:59:50PM +0000, Stefan Hajnoczi wrote:
> On Tue, Dec 05, 2017 at 11:33:11AM +0800, Wei Wang wrote:
> > Add the vhost-pci-net device emulation. The device uses bar 2 to expose
> > the remote VM's memory to the guest. The first 4KB of the the bar area
> > stores the metadata which describes the remote memory and vring info.
>
> This device looks like the beginning of a new "vhost-pci" virtio device
> type. There are layering violations:
>
> 1. This has nothing to do with virtio-net or networking, it's purely
> vhost-pci. Why is it called vhost-pci-net instead of vhost-pci?
>
> 2. VirtIODevice does not know about PCI. It should work over virtio-ccw
> or virtio-mmio. This patch talks about BARs inside a VirtIODevice so
> there is a problem here.

I think the point is how memory is exposed to another guest. This
device exposes it as a pci bar. I don't think e.g. ccw can do this,
it's all hypercall-based.

> I'm concerned that there is no clear architecture and elements of the
> virtio architecture are being mixed up with no justification.
>
> Can you explain what you're trying to do?
>
> Please post a specification for the vhost-pci device so the operation of
> the device can be discussed and is clear to reviewers.
On Tue, Dec 05, 2017 at 05:55:45PM +0200, Michael S. Tsirkin wrote:
> On Tue, Dec 05, 2017 at 02:59:50PM +0000, Stefan Hajnoczi wrote:
> > On Tue, Dec 05, 2017 at 11:33:11AM +0800, Wei Wang wrote:
> > > Add the vhost-pci-net device emulation. The device uses bar 2 to expose
> > > the remote VM's memory to the guest. The first 4KB of the the bar area
> > > stores the metadata which describes the remote memory and vring info.
> >
> > This device looks like the beginning of a new "vhost-pci" virtio device
> > type. There are layering violations:
> >
> > 1. This has nothing to do with virtio-net or networking, it's purely
> > vhost-pci. Why is it called vhost-pci-net instead of vhost-pci?
> >
> > 2. VirtIODevice does not know about PCI. It should work over virtio-ccw
> > or virtio-mmio. This patch talks about BARs inside a VirtIODevice so
> > there is a problem here.
>
> I think the point is how memory is exposed to another guest. This
> device exposes it as a pci bar. I don't think e.g. ccw can do this,
> it's all hypercall-based.

Yes, that's why the BAR issue needs to be discussed.

In terms of the patches, the clean way to do it is for the
vhost-pci device to have a memory region that is not called "BAR". The
virtio-pci transport can expose it as a BAR but the device doesn't need
to know about it. Other transports that support memory mapping could
then work with this device too.

The VIRTIO specification needs to capture this transport requirement
somehow too so it's clear that the vhost device can only run over
transports that support memory mapping.

That said, it's not clear to me why the vhost-pci device is a VIRTIO
device. It doesn't use virtqueues or the configuration space. It only
uses the vhost-user chardev and the mapped memory. Isn't it better to
make it a PCI device?

Stefan
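A minimal sketch of the layering Stefan describes above: the device model
owns a plain memory region and only the virtio-pci transport names it a
BAR. VhostPCIDev, VHOST_PCI, REMOTE_MEM_BAR_SIZE and both function names
are invented for illustration; memory_region_init, virtio_bus_get_device
and pci_register_bar are existing QEMU APIs.

    /* Device level, transport-agnostic: the device only owns a memory
     * region that will map the remote VM's memory; "BAR" never appears
     * at this layer. */
    static void vhost_pci_device_realize(DeviceState *dev, Error **errp)
    {
        VhostPCIDev *vp = VHOST_PCI(dev);

        memory_region_init(&vp->remote_mem, OBJECT(dev), "remote-mem",
                           REMOTE_MEM_BAR_SIZE);
    }

    /* Transport level (virtio-pci proxy): only this layer decides that
     * the region is exposed to the guest as pci bar 2. A transport that
     * cannot map memory simply has no equivalent of this hook. */
    static void vhost_pci_pci_realize(VirtIOPCIProxy *proxy, Error **errp)
    {
        VhostPCIDev *vp = VHOST_PCI(virtio_bus_get_device(&proxy->bus));

        pci_register_bar(&proxy->pci_dev, 2,
                         PCI_BASE_ADDRESS_SPACE_MEMORY |
                         PCI_BASE_ADDRESS_MEM_PREFETCH |
                         PCI_BASE_ADDRESS_MEM_TYPE_64,
                         &vp->remote_mem);
    }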
On Tue, Dec 05, 2017 at 04:41:54PM +0000, Stefan Hajnoczi wrote:
> On Tue, Dec 05, 2017 at 05:55:45PM +0200, Michael S. Tsirkin wrote:
> > On Tue, Dec 05, 2017 at 02:59:50PM +0000, Stefan Hajnoczi wrote:
> > > On Tue, Dec 05, 2017 at 11:33:11AM +0800, Wei Wang wrote:
> > > > Add the vhost-pci-net device emulation. The device uses bar 2 to expose
> > > > the remote VM's memory to the guest. The first 4KB of the the bar area
> > > > stores the metadata which describes the remote memory and vring info.
> > >
> > > This device looks like the beginning of a new "vhost-pci" virtio device
> > > type. There are layering violations:
> > >
> > > 1. This has nothing to do with virtio-net or networking, it's purely
> > > vhost-pci. Why is it called vhost-pci-net instead of vhost-pci?
> > >
> > > 2. VirtIODevice does not know about PCI. It should work over virtio-ccw
> > > or virtio-mmio. This patch talks about BARs inside a VirtIODevice so
> > > there is a problem here.
> >
> > I think the point is how memory is exposed to another guest. This
> > device exposes it as a pci bar. I don't think e.g. ccw can do this,
> > it's all hypercall-based.
>
> Yes, that's why the BAR issue needs to be discussed.
>
> In terms of the patches, the clean way to do it is for the
> vhost-pci device to have a memory region that is not called "BAR". The
> virtio-pci transport can expose it as a BAR but the device doesn't need
> to know about it. Other transports that support memory mapping could
> then work with this device too.

True, though mmio is pretty much a legacy transport at this point,
at least from the qemu perspective, as arm devs don't seem to be
working on virtio 1.0 support in qemu. So I am not sure how much of
a priority transport isolation should be.

> The VIRTIO specification needs to capture this transport requirement
> somehow too so it's clear that the vhost device can only run over
> transports that support memory mapping.
>
> That said, it's not clear to me why the vhost-pci device is a VIRTIO
> device. It doesn't use virtqueues or the configuration space. It only
> uses the vhost-user chardev and the mapped memory. Isn't it better to
> make it a PCI device?
>
> Stefan

Seems similar enough to me, except the roles of device and driver are
reversed here.
On Tue, 5 Dec 2017 18:53:29 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Tue, Dec 05, 2017 at 04:41:54PM +0000, Stefan Hajnoczi wrote:
> > On Tue, Dec 05, 2017 at 05:55:45PM +0200, Michael S. Tsirkin wrote:
> > > On Tue, Dec 05, 2017 at 02:59:50PM +0000, Stefan Hajnoczi wrote:
> > > > On Tue, Dec 05, 2017 at 11:33:11AM +0800, Wei Wang wrote:
> > > > > Add the vhost-pci-net device emulation. The device uses bar 2 to expose
> > > > > the remote VM's memory to the guest. The first 4KB of the the bar area
> > > > > stores the metadata which describes the remote memory and vring info.
> > > >
> > > > This device looks like the beginning of a new "vhost-pci" virtio device
> > > > type. There are layering violations:
> > > >
> > > > 1. This has nothing to do with virtio-net or networking, it's purely
> > > > vhost-pci. Why is it called vhost-pci-net instead of vhost-pci?
> > > >
> > > > 2. VirtIODevice does not know about PCI. It should work over virtio-ccw
> > > > or virtio-mmio. This patch talks about BARs inside a VirtIODevice so
> > > > there is a problem here.
> > >
> > > I think the point is how memory is exposed to another guest. This
> > > device exposes it as a pci bar. I don't think e.g. ccw can do this,
> > > it's all hypercall-based.
> >
> > Yes, that's why the BAR issue needs to be discussed.
> >
> > In terms of the patches, the clean way to do it is for the
> > vhost-pci device to have a memory region that is not called "BAR". The
> > virtio-pci transport can expose it as a BAR but the device doesn't need
> > to know about it. Other transports that support memory mapping could
> > then work with this device too.
>
> True, though mmio is pretty much a legacy transport at this point
> at least from qemu perspective as arm devs don't seem to be working
> on virtio 1.0 support in qemu. So I am not sure how much
> of a priority should transport isolation be.

I currently don't see an easy way to make this work via ccw, FWIW. We
would need a dedicated mechanism for it, and I'm not sure what the gain
would be.

> > The VIRTIO specification needs to capture this transport requirement
> > somehow too so it's clear that the vhost device can only run over
> > transports that support memory mapping.
> >
> > That said, it's not clear to me why the vhost-pci device is a VIRTIO
> > device. It doesn't use virtqueues or the configuration space. It only
> > uses the vhost-user chardev and the mapped memory. Isn't it better to
> > make it a PCI device?
> >
> > Stefan
>
> Seems similar enough to me, except The roles of device and driver are
> reversed here.

But will anything other than pci ever make use of this?
On Tue, Dec 05, 2017 at 06:00:10PM +0100, Cornelia Huck wrote:
> On Tue, 5 Dec 2017 18:53:29 +0200
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
>
> > On Tue, Dec 05, 2017 at 04:41:54PM +0000, Stefan Hajnoczi wrote:
> > > On Tue, Dec 05, 2017 at 05:55:45PM +0200, Michael S. Tsirkin wrote:
> > > > On Tue, Dec 05, 2017 at 02:59:50PM +0000, Stefan Hajnoczi wrote:
> > > > > On Tue, Dec 05, 2017 at 11:33:11AM +0800, Wei Wang wrote:
> > > > > > Add the vhost-pci-net device emulation. The device uses bar 2 to expose
> > > > > > the remote VM's memory to the guest. The first 4KB of the the bar area
> > > > > > stores the metadata which describes the remote memory and vring info.
> > > > >
> > > > > This device looks like the beginning of a new "vhost-pci" virtio device
> > > > > type. There are layering violations:
> > > > >
> > > > > 1. This has nothing to do with virtio-net or networking, it's purely
> > > > > vhost-pci. Why is it called vhost-pci-net instead of vhost-pci?
> > > > >
> > > > > 2. VirtIODevice does not know about PCI. It should work over virtio-ccw
> > > > > or virtio-mmio. This patch talks about BARs inside a VirtIODevice so
> > > > > there is a problem here.
> > > >
> > > > I think the point is how memory is exposed to another guest. This
> > > > device exposes it as a pci bar. I don't think e.g. ccw can do this,
> > > > it's all hypercall-based.
> > >
> > > Yes, that's why the BAR issue needs to be discussed.
> > >
> > > In terms of the patches, the clean way to do it is for the
> > > vhost-pci device to have a memory region that is not called "BAR". The
> > > virtio-pci transport can expose it as a BAR but the device doesn't need
> > > to know about it. Other transports that support memory mapping could
> > > then work with this device too.
> >
> > True, though mmio is pretty much a legacy transport at this point
> > at least from qemu perspective as arm devs don't seem to be working
> > on virtio 1.0 support in qemu. So I am not sure how much
> > of a priority should transport isolation be.
>
> I currently don't see an easy way to make this work via ccw, FWIW. We
> would need a dedicated mechanism for it, and I'm not sure what the gain
> would be.
>
> > > The VIRTIO specification needs to capture this transport requirement
> > > somehow too so it's clear that the vhost device can only run over
> > > transports that support memory mapping.
> > >
> > > That said, it's not clear to me why the vhost-pci device is a VIRTIO
> > > device. It doesn't use virtqueues or the configuration space. It only
> > > uses the vhost-user chardev and the mapped memory. Isn't it better to
> > > make it a PCI device?
> > >
> > > Stefan
> >
> > Seems similar enough to me, except The roles of device and driver are
> > reversed here.
>
> But will anything other than pci ever make use of this?

That's just it, I am not entirely sure. So IMHO it's fine to make it a
pci specific thing for now. virtio started like that too.
On 12/05/2017 10:59 PM, Stefan Hajnoczi wrote:
> On Tue, Dec 05, 2017 at 11:33:11AM +0800, Wei Wang wrote:
>> Add the vhost-pci-net device emulation. The device uses bar 2 to expose
>> the remote VM's memory to the guest. The first 4KB of the the bar area
>> stores the metadata which describes the remote memory and vring info.
> This device looks like the beginning of a new "vhost-pci" virtio device
> type. There are layering violations:
>
> 1. This has nothing to do with virtio-net or networking, it's purely
> vhost-pci. Why is it called vhost-pci-net instead of vhost-pci?

Here are a few things that are specific to vhost-pci-net here:

1) The device category here is set to NETWORK.
2) vhost-pci-net related features (e.g. a future MQ feature) will be
added to the property here.

Right now, we only have vhost-pci-net. How about we all focus on
vhost-pci-net for now and ignore vhost-pci? When other types of devices
are added in the future, we can abstract out a common vhost-pci layer.

Best,
Wei
On Wed, Dec 06, 2017 at 06:17:27PM +0800, Wei Wang wrote:
> On 12/05/2017 10:59 PM, Stefan Hajnoczi wrote:
> > On Tue, Dec 05, 2017 at 11:33:11AM +0800, Wei Wang wrote:
> > > Add the vhost-pci-net device emulation. The device uses bar 2 to expose
> > > the remote VM's memory to the guest. The first 4KB of the the bar area
> > > stores the metadata which describes the remote memory and vring info.
> > This device looks like the beginning of a new "vhost-pci" virtio device
> > type. There are layering violations:
> >
> > 1. This has nothing to do with virtio-net or networking, it's purely
> > vhost-pci. Why is it called vhost-pci-net instead of vhost-pci?
>
> Here are a few things that are specific to vhost-pci-net here:
>
> 1) The device category here is set to NETWORK.
> 2) vhost-pci-net related features (e.g. future MQ feature) will be added to
> the property here.
>
> Right now, we only have vhost-pci-net. How about all focusing on the
> vhost-pci-net, and ignore vhost-pci for now? When future other types of
> devices are addded, we can abstract out a common vhost-pci layer?

That won't work well for a new device. It would be fine if this code
was internal to QEMU, but it's a guest interface and changing it is
relatively costly.

Once this code ships the hardware interface is fixed and drivers depend
on it. Changing the hardware interface requires driver upgrades inside
the guest.

All information needed to design vhost-pci for multiple device types is
already available. The VIRTIO specification defines the device model
and vhost-user supports at least virtio-net, virtio-scsi, and
virtio-blk (in development).

I have CCed Felipe (vhost-user-scsi) and Changpeng (vhost-user-blk) to
see if they are interested in vhost-pci.

Stefan
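A rough QOM sketch of the common-layer factoring discussed above, under
the assumption that shared vhost-user state (chardev, metadata region,
remote-memory mapping) moves into an abstract parent type. Every name
here except TYPE_VIRTIO_DEVICE and TYPE_VHOST_PCI_NET is hypothetical.

    /* Shared layer: vhost-user plumbing common to all device types. */
    static const TypeInfo vhost_pci_info = {
        .name          = "vhost-pci-device-base",
        .parent        = TYPE_VIRTIO_DEVICE,
        .instance_size = sizeof(VhostPCIDev),
        .abstract      = true,
        .class_init    = vhost_pci_class_init,
    };

    /* Per-device-type layer: only the NETWORK category and net feature
     * bits (mrg_rxbuf, a future MQ feature) remain here; scsi and blk
     * variants would be parallel subtypes of the same parent. */
    static const TypeInfo vhost_pci_net_info = {
        .name       = TYPE_VHOST_PCI_NET,
        .parent     = "vhost-pci-device-base",
        .class_init = vpnet_class_init,
    };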
diff --git a/hw/net/Makefile.objs b/hw/net/Makefile.objs
index 4171af0..8b392fb 100644
--- a/hw/net/Makefile.objs
+++ b/hw/net/Makefile.objs
@@ -36,7 +36,7 @@ obj-$(CONFIG_MILKYMIST) += milkymist-minimac2.o
 obj-$(CONFIG_PSERIES) += spapr_llan.o
 obj-$(CONFIG_XILINX_ETHLITE) += xilinx_ethlite.o
 
-obj-$(CONFIG_VIRTIO) += virtio-net.o
+obj-$(CONFIG_VIRTIO) += virtio-net.o vhost_pci_net.o
 obj-y += vhost_net.o
 
 obj-$(CONFIG_ETSEC) += fsl_etsec/etsec.o fsl_etsec/registers.o \
diff --git a/hw/net/vhost_pci_net.c b/hw/net/vhost_pci_net.c
new file mode 100644
index 0000000..db8f954
--- /dev/null
+++ b/hw/net/vhost_pci_net.c
@@ -0,0 +1,106 @@
+/*
+ * vhost-pci-net support
+ *
+ * Copyright Intel, Inc. 2017
+ *
+ * Authors:
+ *  Wei Wang <wei.w.wang@intel.com>
+ *  Zhiyong Yang <zhiyong.yang@intel.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ *
+ * Contributions after 2012-01-13 are licensed under the terms of the
+ * GNU GPL, version 2 or (at your option) any later version.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/iov.h"
+#include "qemu/error-report.h"
+#include "hw/pci/pci.h"
+#include "hw/virtio/virtio-access.h"
+#include "hw/virtio/vhost-pci-net.h"
+#include "hw/virtio/virtio-net.h"
+
+static uint64_t vpnet_get_features(VirtIODevice *vdev, uint64_t features,
+                                   Error **errp)
+{
+    VhostPCINet *vpnet = VHOST_PCI_NET(vdev);
+    features |= vpnet->host_features;
+
+    return features;
+}
+
+static void vpnet_get_config(VirtIODevice *vdev, uint8_t *config)
+{
+    VhostPCINet *vpnet = VHOST_PCI_NET(vdev);
+    struct vpnet_config netcfg;
+
+    virtio_stw_p(vdev, &netcfg.status, vpnet->status);
+    memcpy(config, &netcfg, vpnet->config_size);
+}
+
+static void vpnet_device_realize(DeviceState *dev, Error **errp)
+{
+    VirtIODevice *vdev = VIRTIO_DEVICE(dev);
+    VhostPCINet *vpnet = VHOST_PCI_NET(vdev);
+
+    virtio_init(vdev, "vhost-pci-net", VIRTIO_ID_VHOST_PCI_NET,
+                vpnet->config_size);
+
+    memory_region_init_ram(&vpnet->metadata_region, NULL,
+                           "Metadata", METADATA_SIZE, NULL);
+    memory_region_add_subregion(&vpnet->bar_region, 0,
+                                &vpnet->metadata_region);
+    vpnet->metadata = memory_region_get_ram_ptr(&vpnet->metadata_region);
+    memset(vpnet->metadata, 0, METADATA_SIZE);
+}
+
+static void vpnet_device_unrealize(DeviceState *dev, Error **errp)
+{
+    VirtIODevice *vdev = VIRTIO_DEVICE(dev);
+
+    virtio_cleanup(vdev);
+}
+
+static Property vpnet_properties[] = {
+    DEFINE_PROP_BIT("mrg_rxbuf", VhostPCINet, host_features,
+                    VIRTIO_NET_F_MRG_RXBUF, true),
+    DEFINE_PROP_CHR("chardev", VhostPCINet, chr_be),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
+static void vpnet_instance_init(Object *obj)
+{
+    VhostPCINet *vpnet = VHOST_PCI_NET(obj);
+
+    vpnet->config_size = sizeof(struct vpnet_config);
+}
+
+static void vpnet_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+    VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
+
+    dc->props = vpnet_properties;
+    set_bit(DEVICE_CATEGORY_NETWORK, dc->categories);
+    vdc->realize = vpnet_device_realize;
+    vdc->unrealize = vpnet_device_unrealize;
+    vdc->get_config = vpnet_get_config;
+    vdc->get_features = vpnet_get_features;
+}
+
+static const TypeInfo vpnet_info = {
+    .name = TYPE_VHOST_PCI_NET,
+    .parent = TYPE_VIRTIO_DEVICE,
+    .instance_size = sizeof(VhostPCINet),
+    .instance_init = vpnet_instance_init,
+    .class_init = vpnet_class_init,
+};
+
+static void virtio_register_types(void)
+{
+    type_register_static(&vpnet_info);
+}
+
+type_init(virtio_register_types)
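For reference, the device defined above would be wired to a vhost-user
slave socket roughly like this. This is a hypothetical invocation: only
the "vhost-pci-net-device" type name and its chardev property come from
the patch, the socket path is made up, and in practice a virtio-pci
proxy type from elsewhere in the series would sit in front of the
VirtIODevice.

    qemu-system-x86_64 ... \
        -chardev socket,id=slave0,path=/tmp/vhost-pci.sock,server \
        -device vhost-pci-net-device,chardev=slave0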
diff --git a/include/hw/virtio/vhost-pci-net.h b/include/hw/virtio/vhost-pci-net.h
new file mode 100644
index 0000000..0c6886d
--- /dev/null
+++ b/include/hw/virtio/vhost-pci-net.h
@@ -0,0 +1,37 @@
+/*
+ * Virtio Network Device
+ *
+ * Copyright Intel, Corp. 2017
+ *
+ * Authors:
+ *  Wei Wang <wei.w.wang@intel.com>
+ *  Zhiyong Yang <zhiyong.yang@intel.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef _QEMU_VHOST_PCI_NET_H
+#define _QEMU_VHOST_PCI_NET_H
+
+#include "standard-headers/linux/vhost_pci_net.h"
+#include "hw/virtio/virtio.h"
+#include "chardev/char-fe.h"
+
+#define TYPE_VHOST_PCI_NET "vhost-pci-net-device"
+#define VHOST_PCI_NET(obj) \
+        OBJECT_CHECK(VhostPCINet, (obj), TYPE_VHOST_PCI_NET)
+
+typedef struct VhostPCINet {
+    VirtIODevice parent_obj;
+    MemoryRegion bar_region;
+    MemoryRegion metadata_region;
+    struct vpnet_metadata *metadata;
+    uint32_t host_features;
+    size_t config_size;
+    uint16_t status;
+    CharBackend chr_be;
+} VhostPCINet;
+
+#endif
diff --git a/include/standard-headers/linux/vhost_pci_net.h b/include/standard-headers/linux/vhost_pci_net.h
new file mode 100644
index 0000000..cfb2413
--- /dev/null
+++ b/include/standard-headers/linux/vhost_pci_net.h
@@ -0,0 +1,60 @@
+#ifndef _LINUX_VHOST_PCI_NET_H
+#define _LINUX_VHOST_PCI_NET_H
+
+/* This header is BSD licensed so anyone can use the definitions to implement
+ * compatible drivers/servers.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. Neither the name of Intel nor the names of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL Intel OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+#include "standard-headers/linux/virtio_ids.h"
+
+#define METADATA_SIZE 4096
+#define MAX_REMOTE_REGION 8
+
+struct vpnet_config {
+    uint16_t status;
+};
+
+struct vpnet_remote_mem {
+    uint64_t gpa;
+    uint64_t size;
+};
+
+struct vpnet_remote_vq {
+    uint16_t last_avail_idx;
+    int32_t vring_enabled;
+    uint32_t vring_num;
+    uint64_t desc_gpa;
+    uint64_t avail_gpa;
+    uint64_t used_gpa;
+};
+
+struct vpnet_metadata {
+    uint32_t nregions;
+    uint32_t nvqs;
+    struct vpnet_remote_mem mem[MAX_REMOTE_REGION];
+    struct vpnet_remote_vq vq[0];
+};
+
+#endif
diff --git a/include/standard-headers/linux/virtio_ids.h b/include/standard-headers/linux/virtio_ids.h
index 6d5c3b2..333bbd1 100644
--- a/include/standard-headers/linux/virtio_ids.h
+++ b/include/standard-headers/linux/virtio_ids.h
@@ -43,5 +43,6 @@
 #define VIRTIO_ID_INPUT    18 /* virtio input */
 #define VIRTIO_ID_VSOCK    19 /* virtio vsock transport */
 #define VIRTIO_ID_CRYPTO   20 /* virtio crypto */
+#define VIRTIO_ID_VHOST_PCI_NET 21 /* vhost-pci-net */
 
 #endif /* _LINUX_VIRTIO_IDS_H */
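To make the metadata layout above concrete, here is a hypothetical
guest-side dump of the first 4KB of bar 2. The structs are exactly those
from vhost_pci_net.h; the function name and printing are illustrative
only.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include "standard-headers/linux/vhost_pci_net.h"

    /* md points at the start of the metadata page (METADATA_SIZE bytes). */
    static void vpnet_dump_metadata(const struct vpnet_metadata *md)
    {
        uint32_t i;

        /* nregions descriptors of remote VM memory, gpa/size pairs */
        for (i = 0; i < md->nregions && i < MAX_REMOTE_REGION; i++) {
            printf("remote mem %u: gpa 0x%" PRIx64 " size 0x%" PRIx64 "\n",
                   i, md->mem[i].gpa, md->mem[i].size);
        }
        /* vq[] is a flexible array member that follows mem[] directly */
        for (i = 0; i < md->nvqs; i++) {
            printf("vq %u: num %u desc 0x%" PRIx64 " avail 0x%" PRIx64
                   " used 0x%" PRIx64 " enabled %" PRId32 "\n",
                   i, md->vq[i].vring_num, md->vq[i].desc_gpa,
                   md->vq[i].avail_gpa, md->vq[i].used_gpa,
                   md->vq[i].vring_enabled);
        }
    }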
Add the vhost-pci-net device emulation. The device uses bar 2 to expose
the remote VM's memory to the guest. The first 4KB of the bar area
stores the metadata which describes the remote memory and vring info.

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 hw/net/Makefile.objs                           |   2 +-
 hw/net/vhost_pci_net.c                         | 106 +++++++++++++++++++++++++
 include/hw/virtio/vhost-pci-net.h              |  37 +++++++++
 include/standard-headers/linux/vhost_pci_net.h |  60 ++++++++++++++
 include/standard-headers/linux/virtio_ids.h    |   1 +
 5 files changed, 205 insertions(+), 1 deletion(-)
 create mode 100644 hw/net/vhost_pci_net.c
 create mode 100644 include/hw/virtio/vhost-pci-net.h
 create mode 100644 include/standard-headers/linux/vhost_pci_net.h