Message ID | 1610488352-18494-1-git-send-email-olekstysh@gmail.com (mailing list archive)
Headers    | show
Series     | IOREQ feature (+ virtio-mmio) on Arm | expand
Hi Oleksandr,

I have tested this series with the latest master and staging branches.
The virtio function works well for Arm, as it did with v3.

For the latest staging branch, it needs a tiny rebase for:
  0011 xen/mm: Make x86's XENMEM_resource_ioreq_server handling common
As the staging branch changes rapidly, I did the rebase manually and ran
the test. It should not affect the review.

Tested-by: Wei Chen <Wei.Chen@arm.com>

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of
> Oleksandr Tyshchenko
> Sent: 13 January 2021 5:52
> To: xen-devel@lists.xenproject.org
> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Paul Durrant
> <paul@xen.org>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>;
> Wei Liu <wl@xen.org>; Julien Grall <Julien.Grall@arm.com>; George Dunlap
> <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>; Julien Grall
> <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>; Jun Nakajima
> <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>; Tim Deegan
> <tim@xen.org>; Daniel De Graaf <dgdegra@tycho.nsa.gov>; Volodymyr
> Babchuk <Volodymyr_Babchuk@epam.com>; Anthony PERARD
> <anthony.perard@citrix.com>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Wei Chen <Wei.Chen@arm.com>; Kaly Xin <Kaly.Xin@arm.com>; Artem
> Mygaiev <joculator@gmail.com>; Alex Bennée <alex.bennee@linaro.org>
> Subject: [PATCH V4 00/24] IOREQ feature (+ virtio-mmio) on Arm
>
> [...]
On 14.01.21 05:55, Wei Chen wrote:
> Hi Oleksandr,

Hi Wei.

> I have tested this series with latest master and staging branches.
> The virtio function works well for Arm as v3.

Thank you! This is good news.

> For latest staging branch, it needs a tiny rebase for:
> 0011 xen/mm: Make x86's XENMEM_resource_ioreq_server handling common
> As staging branch changes rapidly, I had done it manually and done the test.
> It should not affect review.

Yes, very rapidly I would say) I will need to rebase due to the recent
"xen/memory: Introduce CONFIG_ARCH_ACQUIRE_RESOURCE" patch.

> Tested-by: Wei Chen <Wei.Chen@arm.com>

Thanks.

>> -----Original Message-----
>> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of
>> Oleksandr Tyshchenko
>> Sent: 13 January 2021 5:52
>> Subject: [PATCH V4 00/24] IOREQ feature (+ virtio-mmio) on Arm
>>
>> [...]
Hi, thanks for giving this update.

Since you were the only person who took the time to send such an update,
I feel I can spend some time on trying to help with any obstacles you may
face. Hence this enquiry:

Oleksandr writes ("Re: [ANNOUNCE] Xen 4.15 release schedule and feature tracking"):
> I work on virtio-mmio on Arm which involves x86's IOREQ/DM features.
> Currently I am working on making the said features common, implementing
> missing bits, code cleanup and hardening, etc.
> I don't think the virtio-mmio is a 4.15 material, but it would be great
> to have at least "common" IOREQ/DM in 4.15.
..
>> P: your current estimate of the probability it making 4.15, as a %age
> Difficult to say, it depends ...
> RFC was posted Aug 3, 2020. The last posted version is V3. Currently I
> am in the middle of preparing V4, still need to find a common ground for
> a few bits.

So, I'm replying to V4 here. Did you resolve your issues?

What are the major outstanding risks to this series, and do you need any
help from the Xen Project (e.g. from me as Release Manager)?

NB I have not been following this series in detail - I'm just looking at
your mail and your 00/ posting and so on. So if there is some blocker or
risk, I am probably unaware of it.

I notice that there's one libxl RFC patch in there. Since that's in my
bailiwick I will try to review it soon.

Regards,
Ian.
On 14.01.21 18:11, Ian Jackson wrote:

Hi Ian,

> Hi, thanks for giving this update.
>
> Since you were the only person who took the time to send such an
> update I feel I can spend some time on trying to help with any
> obstacles you may face.

Thank you.

> Hence this enquiry:
> [...]
> So, I'm replying to V4 here. Did you resolve your issues ?

I think, yes. I hope I addressed all review comments/requests for V3.

> What are the major outstanding risks to this series and do you need
> any help from the Xen Project (eg from me as Release Manager) ?

I would like to get the review for V4 in time for me to be able to make
the required changes. The last 2 patches could be skipped for now; I
don't expect VirtIO to be in 4.15 (I keep these patches for visibility
reasons and test purposes):
- libxl: Introduce basic virtio-mmio support on Arm
- [RFC] libxl: Add support for virtio-disk configuration

But the first 22 patches (IOREQ/DM) I would like to see in 4.15. Also,
what worries me the most is that quite a big series hasn't been fully
tested on x86 (only build-tested).

> NB I have not been following this series in detail - I'm just looking
> at your mail and your 00/ posting and so on. So if there is some
> blocker or risk I am probably unaware of it.
>
> I notice that there's one libxl RFC patch in there. Since that's in
> my bailiwick I will try to review it soon.

Thank you. Please note, this RFC patch is not a target for 4.15, no rush)
For the next version I will drop these VirtIO related patches.
From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Hello all.

The purpose of this patch series is to add IOREQ/DM support to Xen on Arm.
You can find an initial discussion at [1] and the RFC-V3 series at [2]-[5].

Xen on Arm requires some implementation to forward guest MMIO accesses to
a device model in order to implement a virtio-mmio backend or even a
mediator outside of the hypervisor. As Xen on x86 already contains the
required support, this series tries to make it common and introduces the
Arm specific bits plus some new functionality. The patch series is based
on Julien's PoC "xen/arm: Add support for Guest IO forwarding to a device
emulator". Besides splitting the existing IOREQ/DM support and introducing
the Arm side, the series also includes virtio-mmio related changes (the
last 2 patches, for the toolstack) so that reviewers can see how the whole
picture could look and give it a try.

According to the initial/subsequent discussions, there are a few open
questions/concerns regarding security and performance in the VirtIO
solution:
1. virtio-mmio vs virtio-pci, SPI vs MSI, or even a composition of
   virtio-mmio + MSI; different use-cases require different transports...
2. The virtio backend is able to access all guest memory, so some kind of
   protection is needed: 'virtio-iommu in Xen' vs 'pre-shared-memory &
   memcpys in guest', etc. (For the first two, Alex has provided valuable
   input at [6].)
3. The interface between the toolstack and the 'out-of-qemu' virtio
   backend; avoid using Xenstore in the virtio backend if possible. Also,
   there is a desire to make the VirtIO backend hypervisor-agnostic.
4. A lot of 'foreign mapping' could lead to memory exhaustion on the host
   side, as we are stealing pages from host memory in order to map guest
   pages. Julien has some ideas regarding that.
5. Julien also has some ideas on how to optimize the IOREQ code:
   5.1 vcpu_ioreq_handle_completion (former handle_hvm_io_completion),
       which is called in a hotpath on Arm (every time we re-enter the
       guest): Ideally, vcpu_ioreq_handle_completion should be a NOP (at
       most a few instructions) if there is nothing to do (if we don't
       have I/O forwarded to an IOREQ server). Maybe we want to introduce
       a per-vCPU flag indicating whether an I/O has been forwarded to an
       IOREQ server. This would allow us to bypass most of the function
       if there is nothing to do.
   5.2 The current way to handle MMIO is the following:
       - Pause the vCPU
       - Forward the access to the backend domain
       - Schedule the backend domain
       - Wait for the access to be handled
       - Unpause the vCPU
       This sequence is going to be fairly expensive on Xen. It might be
       possible to optimize the ACK and avoid waiting for the backend to
       handle the access.

All of these look valid and worth considering, but the first thing we need
on Arm is a mechanism to forward guest IO to a device emulator, so let's
focus on that in the first place.

***

There are a lot of changes since the RFC series: almost all TODOs were
resolved on Arm, the Arm code was improved and hardened, the common
IOREQ/DM code became really arch-agnostic (without HVM-isms), the "legacy"
mechanism of mapping magic pages for the IOREQ servers was left x86
specific, etc. Also, the patch that makes the DM code public was reworked
to have the top level dm-op handling arch-specific and call into
ioreq_server_dm_op() for otherwise unhandled ops.

But one TODO still remains, which is "PIO handling" on Arm. The "PIO
handling" TODO is expected to be left unaddressed for the current series.
It is not a big issue for now while Xen doesn't have support for vPCI on
Arm. On Arm64, PIOs are only used for the PCI IO BAR, and we would
probably want to expose them to the emulator as PIO accesses to make a DM
completely arch-agnostic. So "PIO handling" should be implemented when we
add support for vPCI.
I left the interface untouched in the following patch
"xen/dm: Introduce xendevicemodel_set_irq_level DM op"
since there is still an open discussion about what interface to use/what
information to pass to the hypervisor.

There are patches on review that this series depends on:
https://patchwork.kernel.org/patch/11816689
https://patchwork.kernel.org/patch/11803383

Please note that the IOREQ feature is disabled by default on Arm within
the current series.

***

The patch series [7] was rebased on a recent "staging" branch
(7ba2ab4 x86/p2m: Fix paging_gva_to_gfn() for nested virt) and tested on a
Renesas Salvator-X board + H3 ES3.0 SoC (Arm64) with the virtio-mmio disk
backend [8] running in a driver domain and an unmodified Linux guest
running on the existing virtio-blk driver (frontend). No issues were
observed. Guest domain 'reboot/destroy' use-cases work properly. The
patch series was only build-tested on x86.

Please note, the build test passed for the following modes:
1. x86:   CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y (default)
2. x86:   #CONFIG_HVM is not set / #CONFIG_IOREQ_SERVER is not set
3. Arm64: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y
4. Arm64: CONFIG_HVM=y / #CONFIG_IOREQ_SERVER is not set (default)
5. Arm32: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y
6. Arm32: CONFIG_HVM=y / #CONFIG_IOREQ_SERVER is not set (default)

***

Any feedback/help would be highly appreciated.
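For orientation, a guest configuration for a setup like the one tested
above (a virtio-mmio disk served by a backend in a driver domain) might
look roughly like this. This is a hypothetical fragment: the actual key
names and value syntax come from the "[RFC] libxl: Add support for
virtio-disk configuration" patch and may differ.

```
# Hypothetical xl guest config fragment (illustrative only; the
# virtio_disk key name and its syntax are assumptions, not the
# final interface from the RFC patch)
name   = "guest"
kernel = "/path/to/Image"
memory = 512
vcpus  = 2

# virtio-mmio disk, backend running in driver domain "DomD"
virtio_disk = [ 'backend=DomD, disks=ro:/dev/sda1' ]
```

The frontend side needs no changes: the guest uses the stock Linux
virtio-blk driver over the virtio-mmio transport.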
[1] https://lists.xenproject.org/archives/html/xen-devel/2020-07/msg00825.html
[2] https://lists.xenproject.org/archives/html/xen-devel/2020-08/msg00071.html
[3] https://lists.xenproject.org/archives/html/xen-devel/2020-09/msg00732.html
[4] https://lists.xenproject.org/archives/html/xen-devel/2020-10/msg01077.html
[5] https://lists.xenproject.org/archives/html/xen-devel/2020-11/msg02188.html
[6] https://lists.xenproject.org/archives/html/xen-devel/2020-11/msg02212.html
[7] https://github.com/otyshchenko1/xen/commits/ioreq_4.14_ml5
[8] https://github.com/xen-troops/virtio-disk/commits/ioreq_ml1

Julien Grall (5):
  xen/ioreq: Make x86's IOREQ related dm-op handling common
  xen/mm: Make x86's XENMEM_resource_ioreq_server handling common
  arm/ioreq: Introduce arch specific bits for IOREQ/DM features
  xen/dm: Introduce xendevicemodel_set_irq_level DM op
  libxl: Introduce basic virtio-mmio support on Arm

Oleksandr Tyshchenko (19):
  x86/ioreq: Prepare IOREQ feature for making it common
  x86/ioreq: Add IOREQ_STATUS_* #define-s and update code for moving
  x86/ioreq: Provide out-of-line wrapper for the handle_mmio()
  xen/ioreq: Make x86's IOREQ feature common
  xen/ioreq: Make x86's hvm_ioreq_needs_completion() common
  xen/ioreq: Make x86's hvm_mmio_first(last)_byte() common
  xen/ioreq: Make x86's hvm_ioreq_(page/vcpu/server) structs common
  xen/ioreq: Move x86's ioreq_server to struct domain
  xen/ioreq: Move x86's io_completion/io_req fields to struct vcpu
  xen/ioreq: Remove "hvm" prefixes from involved function names
  xen/ioreq: Use guest_cmpxchg64() instead of cmpxchg()
  xen/arm: Stick around in leave_hypervisor_to_guest until I/O has
    completed
  xen/mm: Handle properly reference in set_foreign_p2m_entry() on Arm
  xen/ioreq: Introduce domain_has_ioreq_server()
  xen/arm: io: Abstract sign-extension
  xen/arm: io: Harden sign extension check
  xen/ioreq: Make x86's send_invalidate_req() common
  xen/arm: Add mapcache invalidation handling
  [RFC] libxl: Add support for virtio-disk configuration
 MAINTAINERS                                  |    8 +-
 tools/include/xendevicemodel.h               |    4 +
 tools/libs/devicemodel/core.c                |   18 +
 tools/libs/devicemodel/libxendevicemodel.map |    1 +
 tools/libs/light/Makefile                    |    1 +
 tools/libs/light/libxl_arm.c                 |   94 +-
 tools/libs/light/libxl_create.c              |    1 +
 tools/libs/light/libxl_internal.h            |    1 +
 tools/libs/light/libxl_types.idl             |   16 +
 tools/libs/light/libxl_types_internal.idl    |    1 +
 tools/libs/light/libxl_virtio_disk.c         |  109 ++
 tools/xl/Makefile                            |    2 +-
 tools/xl/xl.h                                |    3 +
 tools/xl/xl_cmdtable.c                       |   15 +
 tools/xl/xl_parse.c                          |  116 +++
 tools/xl/xl_virtio_disk.c                    |   46 +
 xen/arch/arm/Makefile                        |    2 +
 xen/arch/arm/dm.c                            |  174 ++++
 xen/arch/arm/domain.c                        |    9 +
 xen/arch/arm/io.c                            |   30 +-
 xen/arch/arm/ioreq.c                         |  198 ++++
 xen/arch/arm/p2m.c                           |   51 +-
 xen/arch/arm/traps.c                         |   72 +-
 xen/arch/x86/Kconfig                         |    1 +
 xen/arch/x86/hvm/dm.c                        |  107 +-
 xen/arch/x86/hvm/emulate.c                   |  220 ++--
 xen/arch/x86/hvm/hvm.c                       |   14 +-
 xen/arch/x86/hvm/hypercall.c                 |    9 +-
 xen/arch/x86/hvm/intercept.c                 |    5 +-
 xen/arch/x86/hvm/io.c                        |   52 +-
 xen/arch/x86/hvm/ioreq.c                     | 1375 ++-----------------------
 xen/arch/x86/hvm/stdvga.c                    |   12 +-
 xen/arch/x86/hvm/svm/nestedsvm.c             |    2 +-
 xen/arch/x86/hvm/vmx/realmode.c              |    8 +-
 xen/arch/x86/hvm/vmx/vvmx.c                  |    5 +-
 xen/arch/x86/mm.c                            |   46 +-
 xen/arch/x86/mm/p2m.c                        |   17 +-
 xen/arch/x86/mm/shadow/common.c              |    2 +-
 xen/common/Kconfig                           |    3 +
 xen/common/Makefile                          |    1 +
 xen/common/ioreq.c                           | 1426 ++++++++++++++++++++++++++
 xen/common/memory.c                          |   72 +-
 xen/include/asm-arm/domain.h                 |    3 +
 xen/include/asm-arm/hvm/ioreq.h              |   72 ++
 xen/include/asm-arm/mm.h                     |    8 -
 xen/include/asm-arm/mmio.h                   |    1 +
 xen/include/asm-arm/p2m.h                    |   19 +-
 xen/include/asm-arm/traps.h                  |   25 +
 xen/include/asm-x86/hvm/domain.h             |   43 -
 xen/include/asm-x86/hvm/emulate.h            |    2 +-
 xen/include/asm-x86/hvm/io.h                 |   17 -
 xen/include/asm-x86/hvm/ioreq.h              |   39 +-
 xen/include/asm-x86/hvm/vcpu.h               |   18 -
 xen/include/asm-x86/mm.h                     |    4 -
 xen/include/asm-x86/p2m.h                    |   27 +-
 xen/include/public/arch-arm.h                |    5 +
 xen/include/public/hvm/dm_op.h               |   16 +
 xen/include/xen/dm.h                         |   39 +
 xen/include/xen/ioreq.h                      |  140 +++
 xen/include/xen/p2m-common.h                 |    4 +
 xen/include/xen/sched.h                      |   34 +
 xen/include/xsm/dummy.h                      |    4 +-
 xen/include/xsm/xsm.h                        |    6 +-
 xen/xsm/dummy.c                              |    2 +-
 xen/xsm/flask/hooks.c                        |    5 +-
 65 files changed, 3073 insertions(+), 1809 deletions(-)
 create mode 100644 tools/libs/light/libxl_virtio_disk.c
 create mode 100644 tools/xl/xl_virtio_disk.c
 create mode 100644 xen/arch/arm/dm.c
 create mode 100644 xen/arch/arm/ioreq.c
 create mode 100644 xen/common/ioreq.c
 create mode 100644 xen/include/asm-arm/hvm/ioreq.h
 create mode 100644 xen/include/xen/dm.h
 create mode 100644 xen/include/xen/ioreq.h