[RFC,V1,00/12] IOREQ feature (+ virtio-mmio) on Arm

Message ID 1596478888-23030-1-git-send-email-olekstysh@gmail.com

Message

Oleksandr Tyshchenko Aug. 3, 2020, 6:21 p.m. UTC
From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Hello all.

The purpose of this patch series is to add IOREQ/DM support to Xen on Arm.
You can find the initial discussion at [1]. Xen on Arm requires a mechanism
to forward guest MMIO accesses to a device model in order to implement a
virtio-mmio backend or even a mediator outside of the hypervisor. As Xen on
x86 already contains the required support, this patch series tries to make it
common and introduces the Arm-specific bits plus some new functionality. The
series is based on Julien's PoC "xen/arm: Add support for Guest IO forwarding
to a device emulator".
Besides splitting the existing IOREQ/DM support and introducing the Arm side,
the series also includes the virtio-mmio related toolstack changes so that
reviewers can see what the whole picture could look like.
For a non-RFC, the IOREQ/DM and virtio-mmio support will be sent separately.

According to the initial discussion, there are a few open questions/concerns
regarding security and performance of the VirtIO solution:
1. virtio-mmio vs virtio-pci, SPI vs MSI: different use-cases require
   different transports.
2. The virtio backend is able to access all guest memory, so some kind of
   protection is needed: 'virtio-iommu in Xen' vs 'pre-shared memory &
   memcpys in the guest'.
3. The interface between the toolstack and the 'out-of-QEMU' virtio backend;
   avoid using Xenstore in the virtio backend if possible.
4. A lot of foreign mappings could lead to memory exhaustion; Julien has
   some ideas regarding that.

All of these look valid and worth considering, but the first thing we need
on Arm is a mechanism to forward guest IO to a device emulator, so let's
focus on that first.

***

The patch series [2] was rebased on the Xen v4.14 release and tested on a
Renesas Salvator-X board with an H3 ES3.0 SoC (Arm64), with a virtio-mmio
disk backend (we will share it later) running in a driver domain and an
unmodified Linux guest running on the existing virtio-blk driver (frontend).
No issues were observed; guest domain 'reboot/destroy' use-cases work
properly. On x86 the series was only build-tested.

Please note that the build-test passed for the following modes:
1. x86: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y (default)
2. x86: #CONFIG_HVM is not set / #CONFIG_IOREQ_SERVER is not set
3. Arm64: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y (default)
4. Arm64: CONFIG_HVM=y / #CONFIG_IOREQ_SERVER is not set
5. Arm32: CONFIG_HVM=y / #CONFIG_IOREQ_SERVER is not set

The build-test didn't pass for Arm32 with 'CONFIG_IOREQ_SERVER=y' due to the
lack of cmpxchg_64 support on Arm32 (see the cmpxchg usage in
hvm_send_buffered_ioreq()).

***

Any feedback/help would be highly appreciated.

[1] https://lists.xenproject.org/archives/html/xen-devel/2020-07/msg00825.html
[2] https://github.com/otyshchenko1/xen/commits/ioreq_4.14_ml1

Oleksandr Tyshchenko (12):
  hvm/ioreq: Make x86's IOREQ feature common
  hvm/dm: Make x86's DM feature common
  xen/mm: Make x86's XENMEM_resource_ioreq_server handling common
  xen/arm: Introduce arch specific bits for IOREQ/DM features
  hvm/dm: Introduce xendevicemodel_set_irq_level DM op
  libxl: Introduce basic virtio-mmio support on Arm
  A collection of tweaks to be able to run emulator in driver domain
  xen/arm: Invalidate qemu mapcache on XENMEM_decrease_reservation
  libxl: Handle virtio-mmio irq in more correct way
  libxl: Add support for virtio-disk configuration
  libxl: Insert "dma-coherent" property into virtio-mmio device node
  libxl: Fix duplicate memory node in DT

 tools/libs/devicemodel/core.c                   |   18 +
 tools/libs/devicemodel/include/xendevicemodel.h |    4 +
 tools/libs/devicemodel/libxendevicemodel.map    |    1 +
 tools/libxc/xc_dom_arm.c                        |   25 +-
 tools/libxl/Makefile                            |    4 +-
 tools/libxl/libxl_arm.c                         |   98 +-
 tools/libxl/libxl_create.c                      |    1 +
 tools/libxl/libxl_internal.h                    |    1 +
 tools/libxl/libxl_types.idl                     |   16 +
 tools/libxl/libxl_types_internal.idl            |    1 +
 tools/libxl/libxl_virtio_disk.c                 |  109 ++
 tools/xl/Makefile                               |    2 +-
 tools/xl/xl.h                                   |    3 +
 tools/xl/xl_cmdtable.c                          |   15 +
 tools/xl/xl_parse.c                             |  116 ++
 tools/xl/xl_virtio_disk.c                       |   46 +
 xen/arch/arm/Kconfig                            |    1 +
 xen/arch/arm/Makefile                           |    2 +
 xen/arch/arm/dm.c                               |   54 +
 xen/arch/arm/domain.c                           |    9 +
 xen/arch/arm/hvm.c                              |   46 +-
 xen/arch/arm/io.c                               |   67 +-
 xen/arch/arm/ioreq.c                            |  100 ++
 xen/arch/arm/traps.c                            |   23 +
 xen/arch/x86/Kconfig                            |    1 +
 xen/arch/x86/hvm/dm.c                           |  289 +----
 xen/arch/x86/hvm/emulate.c                      |    2 +-
 xen/arch/x86/hvm/hvm.c                          |    2 +-
 xen/arch/x86/hvm/io.c                           |    2 +-
 xen/arch/x86/hvm/ioreq.c                        | 1431 +----------------------
 xen/arch/x86/hvm/stdvga.c                       |    2 +-
 xen/arch/x86/hvm/vmx/realmode.c                 |    1 +
 xen/arch/x86/hvm/vmx/vvmx.c                     |    2 +-
 xen/arch/x86/mm.c                               |   45 -
 xen/arch/x86/mm/shadow/common.c                 |    2 +-
 xen/common/Kconfig                              |    3 +
 xen/common/Makefile                             |    1 +
 xen/common/domain.c                             |   15 +
 xen/common/domctl.c                             |    8 +-
 xen/common/event_channel.c                      |   14 +-
 xen/common/hvm/Makefile                         |    2 +
 xen/common/hvm/dm.c                             |  288 +++++
 xen/common/hvm/ioreq.c                          | 1430 ++++++++++++++++++++++
 xen/common/memory.c                             |   54 +-
 xen/include/asm-arm/domain.h                    |   82 ++
 xen/include/asm-arm/hvm/ioreq.h                 |  105 ++
 xen/include/asm-arm/mm.h                        |    8 -
 xen/include/asm-arm/mmio.h                      |    1 +
 xen/include/asm-arm/p2m.h                       |    7 +-
 xen/include/asm-x86/hvm/ioreq.h                 |   45 +-
 xen/include/asm-x86/hvm/vcpu.h                  |    7 -
 xen/include/asm-x86/mm.h                        |    4 -
 xen/include/public/hvm/dm_op.h                  |   15 +
 xen/include/xen/hvm/ioreq.h                     |   89 ++
 xen/include/xen/hypercall.h                     |   12 +
 xen/include/xsm/dummy.h                         |   20 +-
 xen/include/xsm/xsm.h                           |    6 +-
 xen/xsm/dummy.c                                 |    2 +-
 xen/xsm/flask/hooks.c                           |    5 +-
 59 files changed, 2958 insertions(+), 1806 deletions(-)
 create mode 100644 tools/libxl/libxl_virtio_disk.c
 create mode 100644 tools/xl/xl_virtio_disk.c
 create mode 100644 xen/arch/arm/dm.c
 create mode 100644 xen/arch/arm/ioreq.c
 create mode 100644 xen/common/hvm/Makefile
 create mode 100644 xen/common/hvm/dm.c
 create mode 100644 xen/common/hvm/ioreq.c
 create mode 100644 xen/include/asm-arm/hvm/ioreq.h
 create mode 100644 xen/include/xen/hvm/ioreq.h

Comments

Julien Grall Aug. 15, 2020, 5:24 p.m. UTC | #1
Hi Oleksandr,

On 03/08/2020 19:21, Oleksandr Tyshchenko wrote:
> [...]
> 
> Build-test didn't pass for Arm32 mode with 'CONFIG_IOREQ_SERVER=y' due to
> the lack of cmpxchg_64 support on Arm32. See cmpxchg usage in
> hvm_send_buffered_ioreq().

I have sent a patch to implement cmpxchg64() and guest_cmpxchg64() (see 
[1]).

Cheers,

[1] 
https://lore.kernel.org/xen-devel/20200815172143.1327-1-julien@xen.org/T/#u
Oleksandr Tyshchenko Aug. 16, 2020, 7:34 p.m. UTC | #2
On 15.08.20 20:24, Julien Grall wrote:
> Hi Oleksandr,

Hi Julien.


>
> On 03/08/2020 19:21, Oleksandr Tyshchenko wrote:
>> [...]
>>
>> Build-test didn't pass for Arm32 mode with 'CONFIG_IOREQ_SERVER=y' due to
>> the lack of cmpxchg_64 support on Arm32. See cmpxchg usage in
>> hvm_send_buffered_ioreq().
>
> I have sent a patch to implement cmpxchg64() and guest_cmpxchg64() 
> (see [1]).
>
> Cheers,
>
> [1] 
> https://lore.kernel.org/xen-devel/20200815172143.1327-1-julien@xen.org/T/#u

Thank you! I have already build-tested it; no issues. I will update the
corresponding patch to select IOREQ_SERVER for "config ARM" instead of
"config ARM64".