
[RFC,0/2] Handling aliased guest memory maps in vhost-vDPA SVQs

Message ID 20240821125548.749143-1-jonah.palmer@oracle.com

Jonah Palmer Aug. 21, 2024, 12:55 p.m. UTC
Guest memory regions may overlap in HVA space when QEMU maps IOVA to HVA
translations in the IOVA->HVA tree. That is, distinct guest memory
regions, with distinct GPA ranges, can be backed by HVA ranges that
alias one another. An HVA alone therefore no longer uniquely identifies
a mapping, so a lookup by HVA can reference the wrong one.

For example, consider the following mapped guest memory regions:

              HVA                            GPA                         IOVA
-------------------------------  --------------------------- ----------------------------
[0x7f7903e00000, 0x7f7983e00000) [0x0, 0x80000000)           [0x1000, 0x80000000)
[0x7f7983e00000, 0x7f9903e00000) [0x100000000, 0x2080000000) [0x80001000, 0x2000001000)
[0x7f7903ea0000, 0x7f7903ec0000) [0xfeda0000, 0xfedc0000)    [0x2000001000, 0x2000021000)

The last HVA range [0x7f7903ea0000, 0x7f7903ec0000) is contained within
the first HVA range [0x7f7903e00000, 0x7f7983e00000). Despite this, the
GPA ranges for the first and third mappings don't overlap, so the guest
sees them as different physical memory regions.

Now say we're given the HVA 0x7f7903eb0000 when we go to unmap the
mapping associated with this address. This HVA fits in both the first
and third mapped HVA ranges.

When we search the IOVA->HVA tree, the search stops at the first mapping
whose HVA range contains the given HVA. Since IOVATrees are GTrees,
which are balanced binary trees, the search stops at the first mapping,
whose HVA range is [0x7f7903e00000, 0x7f7983e00000).

However, the correct mapping to remove in this case is the third mapping
because the HVA to GPA translation would result in a GPA of 0xfedb0000,
which only fits in the third mapping's GPA range.
--------

To avoid this issue, we can create a GPA->IOVA tree for guest memory
mappings and use the GPA to find the correct IOVA translation: GPA
ranges won't overlap, so they always translate to the correct IOVA.

To accommodate this solution, we decouple the IOVA allocator so that all
allocated IOVA ranges are stored in an IOVA-only tree (iova_map), and
split the current IOVA->HVA tree into a GPA->IOVA tree (guest memory)
and an IOVA->SVQ HVA tree (host-only memory). In other words, allocated
IOVA ranges live in the IOVA-only tree, guest memory mappings in the
GPA->IOVA tree, and host-only memory mappings in the IOVA->SVQ HVA tree.

--------
This series takes a different approach from [1] and is based on [2],
where this issue was originally discovered.

RFC v1:
-------
 * Alternative approach to [1].
 * First attempt to address this issue found in [2].

[1] https://lore.kernel.org/qemu-devel/20240410100345.389462-1-eperezma@redhat.com
[2] https://lore.kernel.org/qemu-devel/20240201180924.487579-1-eperezma@redhat.com

Jonah Palmer (2):
  vhost-vdpa: Decouple the IOVA allocator
  vhost-vdpa: Implement GPA->IOVA & IOVA->SVQ HVA trees

 hw/virtio/vhost-iova-tree.c        | 91 ++++++++++++++++++++++++++++--
 hw/virtio/vhost-iova-tree.h        |  6 +-
 hw/virtio/vhost-shadow-virtqueue.c | 48 +++++++++++++---
 hw/virtio/vhost-vdpa.c             | 43 +++++++++-----
 include/qemu/iova-tree.h           | 22 ++++++++
 net/vhost-vdpa.c                   | 13 ++++-
 util/iova-tree.c                   | 46 +++++++++++++++
 7 files changed, 240 insertions(+), 29 deletions(-)