
[v3,0/5] vhost-user: Add SHMEM_MAP/UNMAP requests

Message ID 20240912145335.129447-1-aesteve@redhat.com (mailing list archive)


Albert Esteve Sept. 12, 2024, 2:53 p.m. UTC
Hi all,

v2->v3:
- Add tracking for mapped memory in VIRTIO
  Shared Memory Regions, so that boundaries
  can be verified when a new mmap
  request is received
- Use address_space_read/write() for
  MEM_READ/_WRITE handling methods.
- Improve/fix support for flexible
  array members for MEM_READ/_WRITE requests.
- Split documentation into a separate patch.
- Various small fixes from previous review.

The use case for this patch is, e.g., to support
vhost-user-gpu RESOURCE_BLOB operations,
or DAX Window requests for virtio-fs. In
general, any operation where a backend
needs to request the frontend to mmap an
fd into a VIRTIO Shared Memory Region,
so that the guest can then access it.

After receiving the SHMEM_MAP/UNMAP request,
the frontend will perform the mmap with the
instructed parameters (i.e., shmid, shm_offset,
fd_offset, fd, length).

As there are already a couple of devices
that could benefit from such a feature,
and more could require it in the future,
the goal is to make the implementation
generic.

To that end, the VIRTIO Shared Memory
Region list is declared in the `VirtIODevice`
struct.
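Conceptually, the per-device list could look like the sketch below; the type and field names here are illustrative only, not the actual members added to `VirtIODevice`:

```c
/* Illustrative sketch of a per-device VIRTIO Shared Memory Region
 * list; names are hypothetical, not the fields this series adds. */
#include <stdint.h>

typedef struct {
    uint8_t  shmid;  /* region id, as enumerated by the device type */
    uint64_t size;   /* region size in bytes */
} VirtSharedMemRegion;

typedef struct {
    /* ... existing VirtIODevice fields ... */
    VirtSharedMemRegion *shmem_list;
    unsigned n_shmem_regions;
} VirtIODeviceSketch;
```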

This patch also includes a
SHMEM_CONFIG frontend request that is
specifically meant to allow the generic
vhost-user-device frontend to
query VIRTIO Shared Memory settings from the
backend (as this device is generic and agnostic
of the actual backend configuration).

Finally, MEM_READ/WRITE backend requests are
added to deal with a potential issue when a
backend shares a descriptor that references
a mapping owned by another backend. The first
backend will not be able to see these
mappings, so these requests act as a fallback
for when vhost-user memory translation fails.
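Since the changelog mentions flexible array members for these requests, a plausible payload layout looks like the sketch below. The field names are an assumption for illustration, not the wire format the series defines:

```c
/* Hypothetical MEM_READ/MEM_WRITE payload carrying a flexible array
 * member; field names are illustrative, not the series' wire format. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    uint64_t guest_address;  /* address the backend failed to translate */
    uint32_t size;           /* number of bytes to read or write */
    uint8_t  data[];         /* flexible array member: payload bytes */
} MemRWMsg;

/* The allocation must account for the trailing data[] bytes; getting
 * this wrong is exactly the kind of bug tests would catch. */
static MemRWMsg *mem_rw_msg_new(uint64_t gpa, const uint8_t *buf,
                                uint32_t len)
{
    MemRWMsg *msg = malloc(sizeof(*msg) + len);
    if (!msg) {
        return NULL;
    }
    msg->guest_address = gpa;
    msg->size = len;
    if (buf) {
        memcpy(msg->data, buf, len);
    }
    return msg;
}
```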

Albert Esteve (5):
  vhost-user: Add VIRTIO Shared Memory map request
  virtio: Track shared memory mappings
  vhost_user: Add frontend command for shmem config
  vhost-user-dev: Add cache BAR
  vhost_user: Add MEM_READ/WRITE backend requests

 hw/virtio/vhost-user-base.c               |  37 ++-
 hw/virtio/vhost-user-device-pci.c         |  39 +++-
 hw/virtio/vhost-user.c                    | 273 ++++++++++++++++++++--
 hw/virtio/virtio.c                        |  59 +++++
 include/hw/virtio/vhost-backend.h         |   6 +
 include/hw/virtio/vhost-user.h            |   1 +
 include/hw/virtio/virtio.h                |  26 +++
 subprojects/libvhost-user/libvhost-user.c | 144 ++++++++++++
 subprojects/libvhost-user/libvhost-user.h |  90 +++++++
 9 files changed, 648 insertions(+), 27 deletions(-)

Comments

Albert Esteve Sept. 12, 2024, 2:59 p.m. UTC | #1
Link to the documentation:
https://lore.kernel.org/all/20240912144432.126717-1-aesteve@redhat.com/T/#t

On Thu, Sep 12, 2024 at 4:53 PM Albert Esteve <aesteve@redhat.com> wrote:

> [...]
Stefan Hajnoczi Sept. 16, 2024, 5:57 p.m. UTC | #2
This patch series could use tests. The first two patches seem broken and
testing would have revealed that the memory allocation and pointers are
not quite right.

One testing approach is to write a test device using libvhost-user that
exposes VIRTIO Shared Memory Regions, launch QEMU in qtest mode with
--device vhost-user-device, and then use the qtest API to enumerate and
access the VIRTIO Shared Memory Regions. Unfortunately this involves
writing quite a bit of test code. I can explain it in more detail if you
want.

Does anyone have other ideas for testing?

Stefan
Albert Esteve Sept. 17, 2024, 7:05 a.m. UTC | #3
On Mon, Sep 16, 2024 at 7:57 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:

> This patch series could use tests. The first two patches seem broken and
> testing would have revealed that the memory allocation and pointers are
> not quite right.
>

My bad. I did test the previous version of the patch with a device that
I've been working on that utilizes the map/unmap messages, but I skipped
it for this one. I will test any coming versions.


>
> One testing approach is to write a test device using libvhost-user that
> exposes VIRTIO Shared Memory Regions, launch QEMU in qtest mode with
> --device vhost-user-device, and then use the qtest API to enumerate and
> access the VIRTIO Shared Memory Regions. Unfortunately this involves
> writing quite a bit of test code. I can explain it in more detail if you
> want.
>

If we want tests covering the feature within QEMU, I can try
to do this. I'm also more comfortable if there are tests in place.
As I mentioned, before this patch I was verifying with an
external device myself.


>
> Does anyone have other ideas for testing?
>
> Stefan
>
Stefan Hajnoczi Sept. 17, 2024, 7:43 a.m. UTC | #4
On Tue, Sep 17, 2024 at 09:05:34AM +0200, Albert Esteve wrote:
> On Mon, Sep 16, 2024 at 7:57 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
> > [...]
> 
> My bad. I did test the previous version of the patch with a device that
> I've been working on that utilizes the map/unmap messages, but I skipped
> it for this one. I will test any coming versions.
> 
> > [...]
> 
> If we want tests covering the feature within QEMU, I can try
> to do this. I'm also more comfortable if there are tests in place.
> As I mentioned, before this patch I was verifying with an
> external device myself.

Good, that way the code will continue to be protected by tests after it
is merged.

QEMU's qtest framework (tests/qtest/) launches QEMU in the special qtest
mode where the guest does not execute CPU instructions. The test case
can send commands like reading and writing guest RAM and hardware
registers so it can remote-control QEMU as if it were a running guest.
https://www.qemu.org/docs/master/devel/testing/qtest.html

qtest is low-level but there are VIRTIO qtest APIs that offer something
similar to a VIRTIO driver API. You could extend that API to support
VIRTIO Shared Memory Regions over virtio-pci
(tests/qtest/libqos/virtio-pci.c).

A vhost-user device is also required. You could implement a dummy device
with libvhost-user that has a few VIRTIO Shared Memory Regions and
nothing else (no virtqueues, etc). The dummy device would create a
shared memory file descriptor and send the SHMEM_MAP message.

Then a qtest test case can be written that launches the dummy vhost-user
device and QEMU with --device vhost-user-device. Using the qtest VIRTIO
API you can initialize the device, enumerate VIRTIO Shared Memory
Regions (using the new qtest API you added), and test that
loading/storing to the VIRTIO Shared Memory Region works.

It would also be possible to test more advanced cases like 256 VIRTIO
Shared Memory Regions, skipping regions with 0 size, MAP/UNMAP sequences
including rejecting partial UNMAP and overlapping MAP, etc.

Stefan