Message ID | 1585084154-29461-1-git-send-email-kwankhede@nvidia.com (mailing list archive) |
---|---|
Series | Add migration support for VFIO devices |
Patchew URL: https://patchew.org/QEMU/1585084154-29461-1-git-send-email-kwankhede@nvidia.com/

Hi,

This series failed the docker-quick@centos7 build test. Please find the
testing commands and their output below. If you have Docker installed, you
can probably reproduce it locally.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
make docker-image-centos7 V=1 NETWORK=1
time make docker-test-quick@centos7 SHOW_ENV=1 J=14 NETWORK=1
=== TEST SCRIPT END ===

  CC      x86_64-softmmu/hw/vfio/pci-quirks.o
  CC      aarch64-softmmu/hw/intc/exynos4210_combiner.o
/tmp/qemu-test/src/hw/vfio/common.c: In function 'vfio_listerner_log_sync':
/tmp/qemu-test/src/hw/vfio/common.c:945:66: error: 'giommu' may be used uninitialized in this function [-Werror=maybe-uninitialized]
                     memory_region_iommu_get_address_limit(giommu->iommu,
                                                                  ^
/tmp/qemu-test/src/hw/vfio/common.c:923:21: note: 'giommu' was declared here
     VFIOGuestIOMMU *giommu;
                     ^
cc1: all warnings being treated as errors
make[1]: *** [hw/vfio/common.o] Error 1
make[1]: *** Waiting for unfinished jobs....
  CC      aarch64-softmmu/hw/intc/omap_intc.o
  CC      aarch64-softmmu/hw/intc/bcm2835_ic.o
---
  CC      aarch64-softmmu/hw/vfio/amd-xgbe.o
  CC      aarch64-softmmu/hw/virtio/virtio.o
  CC      aarch64-softmmu/hw/virtio/vhost.o
make: *** [x86_64-softmmu/all] Error 2
make: *** Waiting for unfinished jobs....
  CC      aarch64-softmmu/hw/virtio/vhost-backend.o
  CC      aarch64-softmmu/hw/virtio/vhost-user.o
---
  CC      aarch64-softmmu/hw/virtio/virtio-iommu.o
  CC      aarch64-softmmu/hw/virtio/vhost-vsock.o
/tmp/qemu-test/src/hw/vfio/common.c: In function 'vfio_listerner_log_sync':
/tmp/qemu-test/src/hw/vfio/common.c:945:66: error: 'giommu' may be used uninitialized in this function [-Werror=maybe-uninitialized]
                     memory_region_iommu_get_address_limit(giommu->iommu,
                                                                  ^
/tmp/qemu-test/src/hw/vfio/common.c:923:21: note: 'giommu' was declared here
     VFIOGuestIOMMU *giommu;
                     ^
cc1: all warnings being treated as errors
make[1]: *** [hw/vfio/common.o] Error 1
make[1]: *** Waiting for unfinished jobs....
make: *** [aarch64-softmmu/all] Error 2
Traceback (most recent call last):
  File "./tests/docker/docker.py", line 664, in <module>
    sys.exit(main())
---
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', '--label', 'com.qemu.instance.uuid=c9b01bcc7fc04e2d8f5e74bf460f0d7a', '-u', '1001', '--security-opt', 'seccomp=unconfined', '--rm', '-e', 'TARGET_LIST=', '-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=14', '-e', 'DEBUG=', '-e', 'SHOW_ENV=1', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', '/home/patchew/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', '/var/tmp/patchew-tester-tmp-lne31pn7/src/docker-src.2020-03-24-19.33.46.14149:/var/tmp/qemu:z,ro', 'qemu:centos7', '/var/tmp/qemu/run', 'test-quick']' returned non-zero exit status 2.
filter=--filter=label=com.qemu.instance.uuid=c9b01bcc7fc04e2d8f5e74bf460f0d7a
make[1]: *** [docker-run] Error 1
make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-lne31pn7/src'
make: *** [docker-run-test-quick@centos7] Error 2

real    3m5.634s
user    0m8.335s

The full log is available at
http://patchew.org/logs/1585084154-29461-1-git-send-email-kwankhede@nvidia.com/testing.docker-quick@centos7/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com
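The 'giommu' failure above is GCC's -Wmaybe-uninitialized (promoted to an error by -Werror) firing on a pointer that is only assigned inside a lookup loop and then dereferenced unconditionally. Below is a minimal, self-contained sketch of that pattern and the conventional fix, namely initializing the pointer and guarding the dereference. The names are borrowed from the error message only; this is not the actual hw/vfio/common.c code from the series.

```c
/*
 * Hypothetical illustration of the -Werror=maybe-uninitialized failure
 * reported above: a pointer assigned only inside a search loop and then
 * dereferenced.  Initializing it and checking for NULL keeps the build
 * clean.  Not the actual hw/vfio/common.c code from the series.
 */
#include <stddef.h>
#include <stdio.h>

struct guest_iommu {
    int id;
    unsigned long long limit;
};

static unsigned long long address_limit(struct guest_iommu *iommus,
                                        size_t count, int wanted_id)
{
    struct guest_iommu *giommu = NULL;      /* defined on every path */

    for (size_t i = 0; i < count; i++) {
        if (iommus[i].id == wanted_id) {
            giommu = &iommus[i];            /* found a match */
            break;
        }
    }

    /*
     * Guard the dereference; without the NULL initialization and this
     * check, GCC warns that 'giommu' may be used uninitialized.
     */
    return giommu ? giommu->limit : 0;
}

int main(void)
{
    struct guest_iommu iommus[] = { { 1, 1ULL << 39 }, { 2, 1ULL << 48 } };

    printf("limit: 0x%llx\n", address_limit(iommus, 2, 2));
    return 0;
}
```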
On Wed, 25 Mar 2020 02:38:58 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> Hi,
>
> This Patch set adds migration support for VFIO devices in QEMU.

Hi Kirti,

Do you have any migration data you can share to show that this solution
is viable and useful? I was chatting with Dave Gilbert and there still
seems to be a concern that we actually have a real-world, practical
solution. We know this is inefficient with QEMU today: vendor pinned
memory will get copied multiple times if we're lucky; if we're not
lucky we may be copying all of guest RAM repeatedly. There are known
inefficiencies with vIOMMU, etc. QEMU could learn new heuristics to
account for some of this, and we could potentially report different
bitmaps in different phases through vfio, but let's make sure that
there are useful cases enabled by this first implementation.

With a reasonably sized VM, running a reasonable graphics demo or
workload, can we achieve a reasonably live migration? What kind of
downtime do we achieve, and what is the working set size of the pinned
memory? Intel folks, if you've been able to port to this or a similar
code base, please report your results as well; open source consumers
are arguably even more important. Thanks,

Alex
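For anyone gathering the numbers asked about above, QEMU already reports downtime and dirty-page statistics through QMP's query-migrate command. A rough sketch of pulling them from a running guest follows; the socket path is an assumption and must match the -qmp option the VM was started with.

```bash
#!/bin/bash
# Rough sketch: query migration statistics over QMP during or after a
# migration.  /tmp/qmp.sock is an assumed path; start QEMU with e.g.
#   -qmp unix:/tmp/qmp.sock,server,nowait
# The reply to query-migrate includes "total-time", "setup-time",
# "downtime" (ms, once the migration has completed) and a "ram" section
# with counters such as "dirty-pages-rate".
socat - UNIX-CONNECT:/tmp/qmp.sock <<'EOF'
{ "execute": "qmp_capabilities" }
{ "execute": "query-migrate" }
EOF
```

The acceptable downtime budget itself can be tuned with the downtime-limit migration parameter before comparing results across runs.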
On Wed, Apr 01, 2020 at 02:34:24AM +0800, Alex Williamson wrote:
> On Wed, 25 Mar 2020 02:38:58 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
>
> > Hi,
> >
> > This Patch set adds migration support for VFIO devices in QEMU.
>
> Hi Kirti,
>
> Do you have any migration data you can share to show that this solution
> is viable and useful? I was chatting with Dave Gilbert and there still
> seems to be a concern that we actually have a real-world, practical
> solution. We know this is inefficient with QEMU today: vendor pinned
> memory will get copied multiple times if we're lucky; if we're not
> lucky we may be copying all of guest RAM repeatedly. There are known
> inefficiencies with vIOMMU, etc. QEMU could learn new heuristics to
> account for some of this, and we could potentially report different
> bitmaps in different phases through vfio, but let's make sure that
> there are useful cases enabled by this first implementation.
>
> With a reasonably sized VM, running a reasonable graphics demo or
> workload, can we achieve a reasonably live migration? What kind of
> downtime do we achieve, and what is the working set size of the pinned
> memory? Intel folks, if you've been able to port to this or a similar
> code base, please report your results as well; open source consumers
> are arguably even more important. Thanks,
>
Hi Alex,

We're in the process of porting to this code, and it can now migrate
successfully when there are no dirty pages.

With dirty pages we have hit several issues. One of them is reported here
(https://lists.gnu.org/archive/html/qemu-devel/2020-04/msg00004.html):
dirty pages for some regions are not collected correctly, especially for
the memory range from 3G to 4G.

Even without this bug, QEMU still gets stuck partway through, before
reaching the stop-and-copy phase, and cannot be killed by the admin.
We are still debugging this problem.

Thanks
Yan
On Wed, 1 Apr 2020 02:41:54 -0400
Yan Zhao <yan.y.zhao@intel.com> wrote:

> On Wed, Apr 01, 2020 at 02:34:24AM +0800, Alex Williamson wrote:
> > On Wed, 25 Mar 2020 02:38:58 +0530
> > Kirti Wankhede <kwankhede@nvidia.com> wrote:
> >
> > > Hi,
> > >
> > > This Patch set adds migration support for VFIO devices in QEMU.
> >
> > Hi Kirti,
> >
> > Do you have any migration data you can share to show that this solution
> > is viable and useful? I was chatting with Dave Gilbert and there still
> > seems to be a concern that we actually have a real-world, practical
> > solution. We know this is inefficient with QEMU today: vendor pinned
> > memory will get copied multiple times if we're lucky; if we're not
> > lucky we may be copying all of guest RAM repeatedly. There are known
> > inefficiencies with vIOMMU, etc. QEMU could learn new heuristics to
> > account for some of this, and we could potentially report different
> > bitmaps in different phases through vfio, but let's make sure that
> > there are useful cases enabled by this first implementation.
> >
> > With a reasonably sized VM, running a reasonable graphics demo or
> > workload, can we achieve a reasonably live migration? What kind of
> > downtime do we achieve, and what is the working set size of the pinned
> > memory? Intel folks, if you've been able to port to this or a similar
> > code base, please report your results as well; open source consumers
> > are arguably even more important. Thanks,
> >
> Hi Alex,
>
> We're in the process of porting to this code, and it can now migrate
> successfully when there are no dirty pages.
>
> With dirty pages we have hit several issues. One of them is reported here
> (https://lists.gnu.org/archive/html/qemu-devel/2020-04/msg00004.html):
> dirty pages for some regions are not collected correctly, especially for
> the memory range from 3G to 4G.
>
> Even without this bug, QEMU still gets stuck partway through, before
> reaching the stop-and-copy phase, and cannot be killed by the admin.
> We are still debugging this problem.

Thanks, Yan.

So it seems we have various bugs, known limitations, and we haven't
actually proven that this implementation provides a useful feature, at
least for the open source consumer. This doesn't give me much confidence
to consider the kernel portion ready for v5.7, given how late we are
already :-\  Thanks,

Alex