[v16,QEMU,00/16] Add migration support for VFIO devices

Message ID 1585084154-29461-1-git-send-email-kwankhede@nvidia.com (mailing list archive)

Message

Kirti Wankhede March 24, 2020, 9:08 p.m. UTC
Hi,

This Patch set adds migration support for VFIO devices in QEMU.

This patch set includes the following patches:
Patch 1:
- Define the kernel ABI (KABI) for VFIO device migration: the device state
  interface and the newly added ioctl definitions to get the dirty pages
  bitmap. This is a placeholder patch (a rough sketch of the kind of header
  layout it describes follows below).
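
For reference, a minimal sketch of the kind of migration-region header this
placeholder patch describes. Field names and bit values here are illustrative,
based on my reading of the proposed kernel interface; the authoritative
definitions are in linux-headers/linux/vfio.h as updated by patch 1.

    /* Illustrative sketch only -- not the literal contents of patch 1. */
    struct vfio_device_migration_info {
            __u32 device_state;         /* VFIO device state bits */
    #define VFIO_DEVICE_STATE_STOP      (0)
    #define VFIO_DEVICE_STATE_RUNNING   (1 << 0)
    #define VFIO_DEVICE_STATE_SAVING    (1 << 1)
    #define VFIO_DEVICE_STATE_RESUMING  (1 << 2)
            __u32 reserved;
            __u64 pending_bytes;        /* device data left to read */
            __u64 data_offset;          /* offset of the data section in the region */
            __u64 data_size;            /* amount of data currently available */
    };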

Patch 2-4:
- A few code refactors.
- Added save and restore functions for the PCI configuration space.

Patch 5-10:
- Generic migration functionality for VFIO devices.
  * This patch set adds functionality only for PCI devices, but it can be
    extended to other VFIO devices.
  * Added all the basic functions required for the pre-copy, stop-and-copy and
    resume phases of migration.
  * Added a state change notifier; from that notifier function, the VFIO
    device's state change is conveyed to the VFIO device driver.
  * During the save setup phase and the resume/load setup phase, the migration
    region is queried and is used to read/write VFIO device data.
  * .save_live_pending and .save_live_iterate are implemented to use QEMU's
    iteration functionality during the pre-copy phase.
  * In .save_live_complete_precopy, i.e. in the stop-and-copy phase, data is
    read from the VFIO device driver iteratively until the pending bytes
    reported by the driver reach zero (see the sketch after this list).
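
A rough sketch of the stop-and-copy drain described in the last item, assuming
the migration region header sketched under Patch 1. vfio_save_buffer() is a
hypothetical helper standing in for the series' code that reads pending_bytes,
then reads data_size bytes starting at data_offset from the migration region
and puts them on the migration stream:

    #include "qemu/osdep.h"
    #include "hw/vfio/vfio-common.h"

    /* Sketch: drain device data in the stop-and-copy phase. Returns 0 on
     * success or a negative errno from the hypothetical vfio_save_buffer().
     */
    static int vfio_stop_and_copy_sketch(QEMUFile *f, VFIODevice *vbasedev)
    {
        uint64_t pending_bytes;
        int ret;

        do {
            /* Put one chunk of device state onto the migration stream */
            ret = vfio_save_buffer(f, vbasedev, &pending_bytes);
            if (ret < 0) {
                return ret;
            }
        } while (pending_bytes > 0); /* until the driver reports no pending data */

        return 0;
    }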

Patch 11-12:
- Add a helper function, for migration with vIOMMU enabled, to get the address
  limit the IOMMU supports (a rough sketch of this helper follows below).
- Set the DIRTY_MEMORY_MIGRATION flag in the dirty log mask for migration with
  vIOMMU enabled.
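
For illustration only, the general shape such a helper could take. The function
name matches the one visible in the Patchew build log further below
(memory_region_iommu_get_address_limit), but the signature, the new
IOMMUMemoryRegionClass callback and the fallback behaviour are guesses for
illustration, not the patch's actual definition:

    #include "qemu/osdep.h"
    #include "exec/memory.h"

    /* Sketch: ask the vIOMMU implementation (e.g. intel-iommu) for the upper
     * bound of its IOVA space; fall back to a caller-supplied default. The
     * get_address_limit class hook is assumed to be what patch 11 adds.
     */
    hwaddr memory_region_iommu_get_address_limit(IOMMUMemoryRegion *iommu_mr,
                                                 hwaddr default_limit)
    {
        IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_GET_CLASS(iommu_mr);

        if (imrc->get_address_limit) {
            return imrc->get_address_limit(iommu_mr);
        }
        return default_limit;
    }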

Patch 13-14:
- Add functions to start and stop dirty pages tracking.
- Add vfio_listener_log_sync to mark dirty pages. The dirty pages bitmap is
  queried per container. All pages pinned by the vendor driver through the
  vfio_pin_pages external API have to be marked as dirty during migration
  (a rough sketch of this per-container query follows below).
  When there are CPU writes, CPU dirty page tracking can identify dirtied
  pages, but any page pinned by the vendor driver can also be written by the
  device. As of now there is no device with hardware support for dirty page
  tracking, so all pages pinned by the vendor driver are considered dirty.
  In QEMU, marking pages dirty is done only when the device is in the
  stop-and-copy phase: if pages were marked dirty during the pre-copy phase and
  their content transferred from source to destination, there would be no way
  to know which pages were dirtied again after that copy until the device
  stops. To avoid repeated copies of the same content, pinned pages are marked
  dirty only during the stop-and-copy phase.
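
A rough sketch of the per-container query mentioned above, with the ioctl and
structure names taken from the kernel side of this series as I understand it;
error handling and the vIOMMU case are omitted, and the exact helper names in
the QEMU patches may differ:

    #include "qemu/osdep.h"
    #include <sys/ioctl.h>
    #include <linux/vfio.h>
    #include "exec/ram_addr.h"
    #include "hw/vfio/vfio-common.h"

    /* Sketch: fetch the dirty bitmap for one IOVA range from the container
     * and feed it into QEMU's dirty memory tracking.
     */
    static void vfio_get_dirty_bitmap_sketch(VFIOContainer *container,
                                             hwaddr iova, hwaddr size,
                                             ram_addr_t ram_addr)
    {
        struct vfio_iommu_type1_dirty_bitmap *dbitmap;
        struct vfio_iommu_type1_dirty_bitmap_get *range;
        uint64_t pages = size / qemu_real_host_page_size;

        dbitmap = g_malloc0(sizeof(*dbitmap) + sizeof(*range));
        dbitmap->argsz = sizeof(*dbitmap) + sizeof(*range);
        dbitmap->flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
        range = (struct vfio_iommu_type1_dirty_bitmap_get *)&dbitmap->data;
        range->iova = iova;
        range->size = size;
        range->bitmap.pgsize = qemu_real_host_page_size;
        range->bitmap.size = DIV_ROUND_UP(pages, 64) * sizeof(uint64_t);
        range->bitmap.data = g_malloc0(range->bitmap.size);

        if (ioctl(container->fd, VFIO_IOMMU_DIRTY_PAGES, dbitmap) == 0) {
            /* Mark the corresponding guest physical pages dirty */
            cpu_physical_memory_set_dirty_lebitmap(
                (unsigned long *)range->bitmap.data, ram_addr, pages);
        }

        g_free(range->bitmap.data);
        g_free(dbitmap);
    }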

Patch 15:
- With vIOMMU, an IO virtual address range can get unmapped while in the
  pre-copy phase of migration. In that case, the unmap ioctl should return the
  pages that were pinned in that range, and QEMU should report the
  corresponding guest physical pages as dirty (a sketch of the extended unmap
  interface follows below).
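
For illustration, a sketch of the extended unmap interface as proposed on the
kernel side of the series, as I understand it; the authoritative layout is in
the kernel patches, not here. When the new flag is set, the unmap ioctl also
fills in a bitmap of pages that were pinned (and therefore dirtied) within the
unmapped range:

    /* Sketch of the proposed extension -- not the literal kernel header. */
    struct vfio_bitmap {
            __u64 pgsize;   /* page size the bitmap is expressed in */
            __u64 size;     /* size of the bitmap buffer, in bytes  */
            __u64 *data;    /* userspace buffer, one bit per page   */
    };

    struct vfio_iommu_type1_dma_unmap {
            __u32 argsz;
            __u32 flags;
    #define VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (1 << 0)
            __u64 iova;     /* IO virtual address of the range to unmap */
            __u64 size;     /* length of the range, in bytes            */
            __u8  data[];   /* holds a struct vfio_bitmap when the flag is set */
    };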

Patch 16:
- Make the VFIO PCI device migration capable. If the migration region is not
  provided by the driver, migration is blocked.

Yet TODO:
Since there is no device with hardware support for system memory dirty bitmap
tracking, there is currently no API from the vendor driver to the VFIO IOMMU
module to report dirty pages. In the future, when such hardware support is
implemented, a kernel API will be required so that the vendor driver can
report dirty pages to the VFIO module during the migration phases.

Below is the flow of state changes during live migration, where the states in
brackets represent the VM state, migration state and VFIO device state as:
    (VM state, MIGRATION_STATUS, VFIO_DEVICE_STATE)

Live migration save path:
        QEMU normal running state
        (RUNNING, _NONE, _RUNNING)
                        |
    migrate_init spawns migration_thread.
    (RUNNING, _SETUP, _RUNNING|_SAVING)
    Migration thread then calls each device's .save_setup()
                        |
    (RUNNING, _ACTIVE, _RUNNING|_SAVING)
    If the device is active, get the pending bytes via .save_live_pending().
    If pending bytes >= threshold_size, call .save_live_iterate();
    VFIO device data for the pre-copy phase is copied.
    Iterate until the pending bytes converge and fall below the threshold.
                        |
    On migration completion, the vCPUs stop and .save_live_complete_precopy is
    called for each active device. The VFIO device is then transitioned to the
    _SAVING state.
    (FINISH_MIGRATE, _DEVICE, _SAVING)
    For the VFIO device, iterate in .save_live_complete_precopy until the
    pending data is 0.
    (FINISH_MIGRATE, _DEVICE, _STOPPED)
                        |
    (FINISH_MIGRATE, _COMPLETED, _STOPPED)
    The migration thread schedules the cleanup bottom half and exits.

Live migration resume path:
    Incoming migration calls .load_setup for each device
    (RESTORE_VM, _ACTIVE, _STOPPED)
                        |
    For each device, .load_state is called for that device section data
                        |
    At the end, .load_cleanup is called for each device and the vCPUs are started.
                        |
        (RUNNING, _NONE, _RUNNING)

Note that:
- Postcopy migration is not supported.
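
For orientation, the callbacks referenced in the two flows above hang off
QEMU's SaveVMHandlers. Below is a minimal sketch of how a VFIO device could
wire them up; the handler field names and register_savevm_live() are QEMU's
existing interfaces, while the vfio_* functions are stand-ins for the handlers
added in patches 8-10 of this series:

    #include "qemu/osdep.h"
    #include "migration/register.h"
    #include "migration/vmstate.h"      /* VMSTATE_INSTANCE_ID_ANY */
    #include "hw/vfio/vfio-common.h"

    /* Stand-in prototypes for the handlers added by patches 8-10 */
    int vfio_save_setup(QEMUFile *f, void *opaque);
    void vfio_save_cleanup(void *opaque);
    void vfio_save_pending(QEMUFile *f, void *opaque, uint64_t threshold_size,
                           uint64_t *res_precopy_only, uint64_t *res_compatible,
                           uint64_t *res_postcopy_only);
    int vfio_save_iterate(QEMUFile *f, void *opaque);
    int vfio_save_complete_precopy(QEMUFile *f, void *opaque);
    int vfio_load_setup(QEMUFile *f, void *opaque);
    int vfio_load_state(QEMUFile *f, void *opaque, int version_id);
    int vfio_load_cleanup(void *opaque);

    static SaveVMHandlers savevm_vfio_handlers = {
        .save_setup = vfio_save_setup,            /* map/query migration region  */
        .save_cleanup = vfio_save_cleanup,
        .save_live_pending = vfio_save_pending,   /* report remaining device data */
        .save_live_iterate = vfio_save_iterate,   /* pre-copy iterations          */
        .save_live_complete_precopy = vfio_save_complete_precopy, /* stop-and-copy */
        .load_setup = vfio_load_setup,
        .load_state = vfio_load_state,
        .load_cleanup = vfio_load_cleanup,
    };

    /* Registration when a VFIO device with a migration region is realized */
    static void vfio_register_migration_sketch(VFIODevice *vbasedev)
    {
        register_savevm_live("vfio", VMSTATE_INSTANCE_ID_ANY, 1,
                             &savevm_vfio_handlers, vbasedev);
    }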

v9 -> v16
- KABI almost finalised in the kernel patches.
- Added support for migration with vIOMMU enabled.

v8 -> v9:
- Split the patch set into two sets, kernel and QEMU.
- The dirty pages bitmap is queried from the IOMMU container rather than per
  device from the vendor driver. Added 2 ioctls to achieve this.

v7 -> v8:
- Updated comments for KABI
- Added BAR address validation check during PCI device's config space load as
  suggested by Dr. David Alan Gilbert.
- Changed vfio_migration_set_state() to set or clear device state flags.
- Some nit fixes.

v6 -> v7:
- Fix build failures.

v5 -> v6:
- Fix build failure.

v4 -> v5:
- Added a descriptive comment about the sequence of access to the members of
  struct vfio_device_migration_info, based on Alex's suggestion.
- Updated the get dirty pages sequence.
- As per Cornelia Huck's suggestion, added callbacks to VFIODeviceOps for
  get_object, save_config and load_config.
- Fixed multiple nitpicks.
- Tested live migration with multiple VFIO devices assigned to a VM.

v3 -> v4:
- Added one more bit so that the _RESUMING flag is set explicitly.
- The data_offset field is read-only for the user space application.
- data_size is read on every iteration before reading data from the migration
  region; this removes the assumption that data extends to the end of the
  migration region.
- If the vendor driver supports mappable sparse regions, map those regions
  during the setup state of save/load, and similarly unmap them in the cleanup
  routines.
- Handled a race condition that caused data corruption in the migration region
  during device state save by adding a mutex and serializing the save_buffer
  and get_dirty_pages routines.
- Skip calling the get_dirty_pages routine for the device's mapped MMIO
  regions.
- Added trace events.
- Split into multiple functional patches.

v2 -> v3:
- Removed the enum of VFIO device states; defined the VFIO device state with
  2 bits.
- Re-structured vfio_device_migration_info to keep it minimal, and defined the
  actions on read and write access to its members.

v1 -> v2:
- Defined the MIGRATION region type and sub-type, which should be used with
  the region type capability.
- Re-structured vfio_device_migration_info. This structure will be placed at
  offset 0 of the migration region.
- Replaced ioctl with read/write for the trapped part of the migration region.
- Added support for both types of access, trapped and mmapped, for the data
  section of the region.
- Moved PCI device functions to the pci file.
- Added iteration to get the dirty page bitmap until the bitmap for all
  requested pages is copied.

Thanks,
Kirti



Kirti Wankhede (16):
  vfio: KABI for migration interface - Kernel header placeholder
  vfio: Add function to unmap VFIO region
  vfio: Add vfio_get_object callback to VFIODeviceOps
  vfio: Add save and load functions for VFIO PCI devices
  vfio: Add migration region initialization and finalize function
  vfio: Add VM state change handler to know state of VM
  vfio: Add migration state change notifier
  vfio: Register SaveVMHandlers for VFIO device
  vfio: Add save state functions to SaveVMHandlers
  vfio: Add load state functions to SaveVMHandlers
  iommu: add callback to get address limit IOMMU supports
  memory: Set DIRTY_MEMORY_MIGRATION when IOMMU is enabled
  vfio: Add function to start and stop dirty pages tracking
  vfio: Add vfio_listener_log_sync to mark dirty pages
  vfio: Add ioctl to get dirty pages bitmap during dma unmap.
  vfio: Make vfio-pci device migration capable

 hw/i386/intel_iommu.c         |   9 +
 hw/vfio/Makefile.objs         |   2 +-
 hw/vfio/common.c              | 303 +++++++++++++++-
 hw/vfio/migration.c           | 788 ++++++++++++++++++++++++++++++++++++++++++
 hw/vfio/pci.c                 | 203 +++++++++--
 hw/vfio/pci.h                 |   1 -
 hw/vfio/trace-events          |  19 +
 include/exec/memory.h         |  19 +
 include/hw/vfio/vfio-common.h |  20 ++
 linux-headers/linux/vfio.h    | 297 +++++++++++++++-
 memory.c                      |  13 +-
 11 files changed, 1639 insertions(+), 35 deletions(-)
 create mode 100644 hw/vfio/migration.c

Comments

no-reply@patchew.org March 24, 2020, 11:36 p.m. UTC | #1
Patchew URL: https://patchew.org/QEMU/1585084154-29461-1-git-send-email-kwankhede@nvidia.com/



Hi,

This series failed the docker-quick@centos7 build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
make docker-image-centos7 V=1 NETWORK=1
time make docker-test-quick@centos7 SHOW_ENV=1 J=14 NETWORK=1
=== TEST SCRIPT END ===

  CC      x86_64-softmmu/hw/vfio/pci-quirks.o
  CC      aarch64-softmmu/hw/intc/exynos4210_combiner.o
/tmp/qemu-test/src/hw/vfio/common.c: In function 'vfio_listerner_log_sync':
/tmp/qemu-test/src/hw/vfio/common.c:945:66: error: 'giommu' may be used uninitialized in this function [-Werror=maybe-uninitialized]
                             memory_region_iommu_get_address_limit(giommu->iommu,
                                                                  ^
/tmp/qemu-test/src/hw/vfio/common.c:923:21: note: 'giommu' was declared here
     VFIOGuestIOMMU *giommu;
                     ^
cc1: all warnings being treated as errors
make[1]: *** [hw/vfio/common.o] Error 1
make[1]: *** Waiting for unfinished jobs....
  CC      aarch64-softmmu/hw/intc/omap_intc.o
  CC      aarch64-softmmu/hw/intc/bcm2835_ic.o
---
  CC      aarch64-softmmu/hw/vfio/amd-xgbe.o
  CC      aarch64-softmmu/hw/virtio/virtio.o
  CC      aarch64-softmmu/hw/virtio/vhost.o
make: *** [x86_64-softmmu/all] Error 2
make: *** Waiting for unfinished jobs....
  CC      aarch64-softmmu/hw/virtio/vhost-backend.o
  CC      aarch64-softmmu/hw/virtio/vhost-user.o
---
  CC      aarch64-softmmu/hw/virtio/virtio-iommu.o
  CC      aarch64-softmmu/hw/virtio/vhost-vsock.o
/tmp/qemu-test/src/hw/vfio/common.c: In function 'vfio_listerner_log_sync':
/tmp/qemu-test/src/hw/vfio/common.c:945:66: error: 'giommu' may be used uninitialized in this function [-Werror=maybe-uninitialized]
                             memory_region_iommu_get_address_limit(giommu->iommu,
                                                                  ^
/tmp/qemu-test/src/hw/vfio/common.c:923:21: note: 'giommu' was declared here
     VFIOGuestIOMMU *giommu;
                     ^
cc1: all warnings being treated as errors
make[1]: *** [hw/vfio/common.o] Error 1
make[1]: *** Waiting for unfinished jobs....
make: *** [aarch64-softmmu/all] Error 2
Traceback (most recent call last):
  File "./tests/docker/docker.py", line 664, in <module>
    sys.exit(main())
---
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', '--label', 'com.qemu.instance.uuid=c9b01bcc7fc04e2d8f5e74bf460f0d7a', '-u', '1001', '--security-opt', 'seccomp=unconfined', '--rm', '-e', 'TARGET_LIST=', '-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=14', '-e', 'DEBUG=', '-e', 'SHOW_ENV=1', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', '/home/patchew/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', '/var/tmp/patchew-tester-tmp-lne31pn7/src/docker-src.2020-03-24-19.33.46.14149:/var/tmp/qemu:z,ro', 'qemu:centos7', '/var/tmp/qemu/run', 'test-quick']' returned non-zero exit status 2.
filter=--filter=label=com.qemu.instance.uuid=c9b01bcc7fc04e2d8f5e74bf460f0d7a
make[1]: *** [docker-run] Error 1
make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-lne31pn7/src'
make: *** [docker-run-test-quick@centos7] Error 2

real    3m5.634s
user    0m8.335s


The full log is available at
http://patchew.org/logs/1585084154-29461-1-git-send-email-kwankhede@nvidia.com/testing.docker-quick@centos7/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com
Alex Williamson March 31, 2020, 6:34 p.m. UTC | #2
On Wed, 25 Mar 2020 02:38:58 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> Hi,
> 
> This Patch set adds migration support for VFIO devices in QEMU.

Hi Kirti,

Do you have any migration data you can share to show that this solution
is viable and useful?  I was chatting with Dave Gilbert and there still
seems to be a concern that we actually have a real-world practical
solution.  We know this is inefficient with QEMU today, vendor pinned
memory will get copied multiple times if we're lucky.  If we're not
lucky we may be copying all of guest RAM repeatedly.  There are known
inefficiencies with vIOMMU, etc.  QEMU could learn new heuristics to
account for some of this and we could potentially report different
bitmaps in different phases through vfio, but let's make sure that
there are useful cases enabled by this first implementation.

With a reasonably sized VM, running a reasonable graphics demo or
workload, can we achieve reasonably live migration?  What kind of
downtime do we achieve and what is the working set size of the pinned
memory?  Intel folks, if you've been able to port to this or similar
code base, please report your results as well, open source consumers
are arguably even more important.  Thanks,

Alex
Yan Zhao April 1, 2020, 6:41 a.m. UTC | #3
On Wed, Apr 01, 2020 at 02:34:24AM +0800, Alex Williamson wrote:
> On Wed, 25 Mar 2020 02:38:58 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
> 
> > Hi,
> > 
> > This Patch set adds migration support for VFIO devices in QEMU.
> 
> Hi Kirti,
> 
> Do you have any migration data you can share to show that this solution
> is viable and useful?  I was chatting with Dave Gilbert and there still
> seems to be a concern that we actually have a real-world practical
> solution.  We know this is inefficient with QEMU today, vendor pinned
> memory will get copied multiple times if we're lucky.  If we're not
> lucky we may be copying all of guest RAM repeatedly.  There are known
> inefficiencies with vIOMMU, etc.  QEMU could learn new heuristics to
> account for some of this and we could potentially report different
> bitmaps in different phases through vfio, but let's make sure that
> there are useful cases enabled by this first implementation.
> 
> With a reasonably sized VM, running a reasonable graphics demo or
> workload, can we achieve reasonably live migration?  What kind of
> downtime do we achieve and what is the working set size of the pinned
> memory?  Intel folks, if you've been able to port to this or similar
> code base, please report your results as well, open source consumers
> are arguably even more important.  Thanks,
> 
Hi Alex,
we're in the process of porting to this code, and now it's able to
migrate successfully without dirty pages.

When there are dirty pages, we met several issues.
One of them is reported here
(https://lists.gnu.org/archive/html/qemu-devel/2020-04/msg00004.html).
Dirty pages for some regions are not collected correctly,
especially for the memory range from 3G to 4G.

Even without this bug, QEMU still gets stuck in the middle before
reaching the stop-and-copy phase and cannot be killed by the admin.
We are still debugging this problem.

Thanks
Yan
Alex Williamson April 1, 2020, 6:34 p.m. UTC | #4
On Wed, 1 Apr 2020 02:41:54 -0400
Yan Zhao <yan.y.zhao@intel.com> wrote:

> On Wed, Apr 01, 2020 at 02:34:24AM +0800, Alex Williamson wrote:
> > On Wed, 25 Mar 2020 02:38:58 +0530
> > Kirti Wankhede <kwankhede@nvidia.com> wrote:
> >   
> > > Hi,
> > > 
> > > This Patch set adds migration support for VFIO devices in QEMU.  
> > 
> > Hi Kirti,
> > 
> > Do you have any migration data you can share to show that this solution
> > is viable and useful?  I was chatting with Dave Gilbert and there still
> > seems to be a concern that we actually have a real-world practical
> > solution.  We know this is inefficient with QEMU today, vendor pinned
> > memory will get copied multiple times if we're lucky.  If we're not
> > lucky we may be copying all of guest RAM repeatedly.  There are known
> > inefficiencies with vIOMMU, etc.  QEMU could learn new heuristics to
> > account for some of this and we could potentially report different
> > bitmaps in different phases through vfio, but let's make sure that
> > there are useful cases enabled by this first implementation.
> > 
> > With a reasonably sized VM, running a reasonable graphics demo or
> > workload, can we achieve reasonably live migration?  What kind of
> > downtime do we achieve and what is the working set size of the pinned
> > memory?  Intel folks, if you've been able to port to this or similar
> > code base, please report your results as well, open source consumers
> > are arguably even more important.  Thanks,
> >   
> hi Alex
> we're in the process of porting to this code, and now it's able to
> migrate successfully without dirty pages.
> 
> when there're dirty pages, we met several issues.
> one of them is reported here
> (https://lists.gnu.org/archive/html/qemu-devel/2020-04/msg00004.html).
> dirty pages for some regions are not able to be collected correctly,
> especially for memory range from 3G to 4G.
> 
> even without this bug, qemu still got stuck in middle before
> reaching stop-and-copy phase and cannot be killed by admin.
> still in debugging of this problem.

Thanks, Yan.  So it seems we have various bugs, known limitations, and
we haven't actually proven that this implementation provides a useful
feature, at least for the open source consumer.  This doesn't give me
much confidence to consider the kernel portion ready for v5.7 given how
late we are already :-\  Thanks,

Alex