Message ID | 1600817059-26721-9-git-send-email-kwankhede@nvidia.com (mailing list archive)
---|---
State | New, archived
Series | Add migration support for VFIO devices
I met a problem when trying this patch. It mostly happens if a device doesn't report any pending bytes in the iteration stage, which means the device has no iteration phase. QEMU on the destination machine then complains about being out of memory. After some investigation, it seems the vendor-specific bit stream is not complete, so QEMU on the destination wrongly takes a signature as the size of the section and fails to allocate the memory. Not sure if others have met the same problem.

I solved this problem with the following fix; the QEMU version I am using is v5.0.0.0.

commit 13a80adc2cdddd48d76acf6a5dd715bcbf42b577
Author: Zhi Wang <zhi.wang.linux@gmail.com>
Date:   Tue Sep 15 15:58:45 2020 +0300

    fix

    Signed-off-by: Zhi Wang <zhi.wang.linux@gmail.com>

diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index 09eec9c..e741319 100644
--- a/hw/vfio/migration.c
+++ b/hw/vfio/migration.c
@@ -453,10 +458,12 @@ static int vfio_save_iterate(QEMUFile *f, void *opaque)
             return ret;
         }

-        if (migration->pending_bytes == 0) {
-            /* indicates data finished, goto complete phase */
-            return 1;
-        }
+        if (migration->pending_bytes == 0) {
+            /* indicates data finished, goto complete phase */
+            qemu_put_be64(f, 0);
+            qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
+            return 1;
+        }
     }

     data_size = vfio_save_buffer(f, vbasedev);

-----Original Message-----
From: Kirti Wankhede <kwankhede@nvidia.com>
Sent: Wednesday, September 23, 2020 2:24 AM
To: alex.williamson@redhat.com; cjia@nvidia.com
Cc: Tian, Kevin <kevin.tian@intel.com>; Yang, Ziye <ziye.yang@intel.com>; Liu, Changpeng <changpeng.liu@intel.com>; Liu, Yi L <yi.l.liu@intel.com>; mlevitsk@redhat.com; eskultet@redhat.com; cohuck@redhat.com; dgilbert@redhat.com; jonathan.davies@nutanix.com; eauger@redhat.com; aik@ozlabs.ru; pasic@linux.ibm.com; felipe@nutanix.com; Zhengxiao.zx@Alibaba-inc.com; shuangtai.tst@alibaba-inc.com; Ken.Xue@amd.com; Wang, Zhi A <zhi.a.wang@intel.com>; Zhao, Yan Y <yan.y.zhao@intel.com>; pbonzini@redhat.com; quintela@redhat.com; eblake@redhat.com; armbru@redhat.com; peterx@redhat.com; qemu-devel@nongnu.org; Kirti Wankhede <kwankhede@nvidia.com>
Subject: [PATCH v26 08/17] vfio: Add save state functions to SaveVMHandlers

Added .save_live_pending, .save_live_iterate and .save_live_complete_precopy
functions. These functions handle the pre-copy and stop-and-copy phases.

In _SAVING|_RUNNING device state or pre-copy phase:
- read pending_bytes. If pending_bytes > 0, go through the steps below.
- read data_offset - indicates to the kernel driver to write data to the
  staging buffer.
- read data_size - amount of data in bytes written by the vendor driver in
  the migration region.
- read data_size bytes of data from data_offset in the migration region.
- Write the data packet to the file stream as below:
  {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data,
   VFIO_MIG_FLAG_END_OF_STATE}

In _SAVING device state or stop-and-copy phase:
a. read the config space of the device and save it to the migration file
   stream. This doesn't need to be from the vendor driver. Any other special
   config state from the driver can be saved as data in a following iteration.
b. read pending_bytes. If pending_bytes > 0, go through the steps below.
c. read data_offset - indicates to the kernel driver to write data to the
   staging buffer.
d. read data_size - amount of data in bytes written by the vendor driver in
   the migration region.
e. read data_size bytes of data from data_offset in the migration region.
f. Write the data packet as below:
   {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data}
g.
iterate through steps b to f while (pending_bytes > 0) h. Write {VFIO_MIG_FLAG_END_OF_STATE} When data region is mapped, its user's responsibility to read data from data_offset of data_size before moving to next steps. Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com> Reviewed-by: Neo Jia <cjia@nvidia.com> --- hw/vfio/migration.c | 273 ++++++++++++++++++++++++++++++++++++++++++ hw/vfio/trace-events | 6 + include/hw/vfio/vfio-common.h | 1 + 3 files changed, 280 insertions(+) diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c index 8e8adaa25779..4611bb972228 100644 --- a/hw/vfio/migration.c +++ b/hw/vfio/migration.c @@ -180,6 +180,154 @@ static int vfio_migration_set_state(VFIODevice *vbasedev, uint32_t mask, return 0; } +static void *get_data_section_size(VFIORegion *region, uint64_t data_offset, + uint64_t data_size, uint64_t *size) +{ + void *ptr = NULL; + uint64_t limit = 0; + int i; + + if (!region->mmaps) { + if (size) { + *size = data_size; + } + return ptr; + } + + for (i = 0; i < region->nr_mmaps; i++) { + VFIOMmap *map = region->mmaps + i; + + if ((data_offset >= map->offset) && + (data_offset < map->offset + map->size)) { + + /* check if data_offset is within sparse mmap areas */ + ptr = map->mmap + data_offset - map->offset; + if (size) { + *size = MIN(data_size, map->offset + map->size - data_offset); + } + break; + } else if ((data_offset < map->offset) && + (!limit || limit > map->offset)) { + /* + * data_offset is not within sparse mmap areas, find size of + * non-mapped area. Check through all list since region->mmaps list + * is not sorted. + */ + limit = map->offset; + } + } + + if (!ptr && size) { + *size = limit ? limit - data_offset : data_size; + } + return ptr; +} + +static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev, uint64_t +*size) { + VFIOMigration *migration = vbasedev->migration; + VFIORegion *region = &migration->region; + uint64_t data_offset = 0, data_size = 0, sz; + int ret; + + ret = vfio_mig_read(vbasedev, &data_offset, sizeof(data_offset), + region->fd_offset + offsetof(struct vfio_device_migration_info, + data_offset)); + if (ret < 0) { + return ret; + } + + ret = vfio_mig_read(vbasedev, &data_size, sizeof(data_size), + region->fd_offset + offsetof(struct vfio_device_migration_info, + data_size)); + if (ret < 0) { + return ret; + } + + trace_vfio_save_buffer(vbasedev->name, data_offset, data_size, + migration->pending_bytes); + + qemu_put_be64(f, data_size); + sz = data_size; + + while (sz) { + void *buf = NULL; + uint64_t sec_size; + bool buf_allocated = false; + + buf = get_data_section_size(region, data_offset, sz, + &sec_size); + + if (!buf) { + buf = g_try_malloc(sec_size); + if (!buf) { + error_report("%s: Error allocating buffer ", __func__); + return -ENOMEM; + } + buf_allocated = true; + + ret = vfio_mig_read(vbasedev, buf, sec_size, + region->fd_offset + data_offset); + if (ret < 0) { + g_free(buf); + return ret; + } + } + + qemu_put_buffer(f, buf, sec_size); + + if (buf_allocated) { + g_free(buf); + } + sz -= sec_size; + data_offset += sec_size; + } + + ret = qemu_file_get_error(f); + + if (!ret && size) { + *size = data_size; + } + + return ret; +} + +static int vfio_update_pending(VFIODevice *vbasedev) { + VFIOMigration *migration = vbasedev->migration; + VFIORegion *region = &migration->region; + uint64_t pending_bytes = 0; + int ret; + + ret = vfio_mig_read(vbasedev, &pending_bytes, sizeof(pending_bytes), + region->fd_offset + offsetof(struct vfio_device_migration_info, + pending_bytes)); + if (ret < 0) { + 
migration->pending_bytes = 0; + return ret; + } + + migration->pending_bytes = pending_bytes; + trace_vfio_update_pending(vbasedev->name, pending_bytes); + return 0; +} + +static int vfio_save_device_config_state(QEMUFile *f, void *opaque) { + VFIODevice *vbasedev = opaque; + + qemu_put_be64(f, VFIO_MIG_FLAG_DEV_CONFIG_STATE); + + if (vbasedev->ops && vbasedev->ops->vfio_save_config) { + vbasedev->ops->vfio_save_config(vbasedev, f); + } + + qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); + + trace_vfio_save_device_config_state(vbasedev->name); + + return qemu_file_get_error(f); +} + /* ---------------------------------------------------------------------- */ static int vfio_save_setup(QEMUFile *f, void *opaque) @@ -232,9 +380,134 @@ static void vfio_save_cleanup(void *opaque) trace_vfio_save_cleanup(vbasedev->name); } +static void vfio_save_pending(QEMUFile *f, void *opaque, + uint64_t threshold_size, + uint64_t *res_precopy_only, + uint64_t *res_compatible, + uint64_t *res_postcopy_only) { + VFIODevice *vbasedev = opaque; + VFIOMigration *migration = vbasedev->migration; + int ret; + + ret = vfio_update_pending(vbasedev); + if (ret) { + return; + } + + *res_precopy_only += migration->pending_bytes; + + trace_vfio_save_pending(vbasedev->name, *res_precopy_only, + *res_postcopy_only, *res_compatible); } + +static int vfio_save_iterate(QEMUFile *f, void *opaque) { + VFIODevice *vbasedev = opaque; + VFIOMigration *migration = vbasedev->migration; + uint64_t data_size; + int ret; + + qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE); + + if (migration->pending_bytes == 0) { + ret = vfio_update_pending(vbasedev); + if (ret) { + return ret; + } + + if (migration->pending_bytes == 0) { + /* indicates data finished, goto complete phase */ + return 1; + } + } + + ret = vfio_save_buffer(f, vbasedev, &data_size); + + if (ret) { + error_report("%s: vfio_save_buffer failed %s", vbasedev->name, + strerror(errno)); + return ret; + } + + qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); + + ret = qemu_file_get_error(f); + if (ret) { + return ret; + } + + trace_vfio_save_iterate(vbasedev->name, data_size); + + return 0; +} + +static int vfio_save_complete_precopy(QEMUFile *f, void *opaque) { + VFIODevice *vbasedev = opaque; + VFIOMigration *migration = vbasedev->migration; + uint64_t data_size; + int ret; + + ret = vfio_migration_set_state(vbasedev, ~VFIO_DEVICE_STATE_RUNNING, + VFIO_DEVICE_STATE_SAVING); + if (ret) { + error_report("%s: Failed to set state STOP and SAVING", + vbasedev->name); + return ret; + } + + ret = vfio_save_device_config_state(f, opaque); + if (ret) { + return ret; + } + + ret = vfio_update_pending(vbasedev); + if (ret) { + return ret; + } + + while (migration->pending_bytes > 0) { + qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE); + ret = vfio_save_buffer(f, vbasedev, &data_size); + if (ret < 0) { + error_report("%s: Failed to save buffer", vbasedev->name); + return ret; + } + + if (data_size == 0) { + break; + } + + ret = vfio_update_pending(vbasedev); + if (ret) { + return ret; + } + } + + qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); + + ret = qemu_file_get_error(f); + if (ret) { + return ret; + } + + ret = vfio_migration_set_state(vbasedev, ~VFIO_DEVICE_STATE_SAVING, 0); + if (ret) { + error_report("%s: Failed to set state STOPPED", vbasedev->name); + return ret; + } + + trace_vfio_save_complete_precopy(vbasedev->name); + return ret; +} + static SaveVMHandlers savevm_vfio_handlers = { .save_setup = vfio_save_setup, .save_cleanup = vfio_save_cleanup, + .save_live_pending = 
vfio_save_pending, + .save_live_iterate = vfio_save_iterate, + .save_live_complete_precopy = vfio_save_complete_precopy, }; /* ---------------------------------------------------------------------- */ diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events index 982d8dccb219..118b5547c921 100644 --- a/hw/vfio/trace-events +++ b/hw/vfio/trace-events @@ -154,3 +154,9 @@ vfio_vmstate_change(const char *name, int running, const char *reason, uint32_t vfio_migration_state_notifier(const char *name, const char *state) " (%s) state %s" vfio_save_setup(const char *name) " (%s)" vfio_save_cleanup(const char *name) " (%s)" +vfio_save_buffer(const char *name, uint64_t data_offset, uint64_t +data_size, uint64_t pending) " (%s) Offset 0x%"PRIx64" size 0x%"PRIx64" +pending 0x%"PRIx64 vfio_update_pending(const char *name, uint64_t pending) " (%s) pending 0x%"PRIx64 vfio_save_device_config_state(const char *name) " (%s)" +vfio_save_pending(const char *name, uint64_t precopy, uint64_t +postcopy, uint64_t compatible) " (%s) precopy 0x%"PRIx64" postcopy 0x%"PRIx64" compatible 0x%"PRIx64 vfio_save_iterate(const char *name, int data_size) " (%s) data_size %d" +vfio_save_complete_precopy(const char *name) " (%s)" diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h index 49c7c7a0e29a..471e444a364c 100644 --- a/include/hw/vfio/vfio-common.h +++ b/include/hw/vfio/vfio-common.h @@ -60,6 +60,7 @@ typedef struct VFIORegion { typedef struct VFIOMigration { VFIORegion region; + uint64_t pending_bytes; } VFIOMigration; typedef struct VFIOAddressSpace { -- 2.7.0
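The packet layout described in the commit message is also what makes the failure reported by Zhi Wang at the top of this thread possible: vfio_save_iterate() writes VFIO_MIG_FLAG_DEV_DATA_STATE before checking pending_bytes, so an early return 1 leaves a bare flag in the stream with no size and no end marker, and the destination then interprets the next section's signature as data_size. For reference, below is a minimal sketch of how a destination-side reader has to consume one pre-copy packet. It uses existing QEMUFile accessors (qemu_get_be64(), qemu_get_buffer()), but the helper name and the chunked copy are illustrative only; the real load path is added by a later patch in this series and is not shown here.

/*
 * Sketch only: consume one pre-copy device-data packet of the form
 * {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, data, VFIO_MIG_FLAG_END_OF_STATE}.
 * write_device_data() is a hypothetical stand-in for pushing the payload
 * into the device's migration region on the destination.
 */
static int sketch_load_device_data_packet(QEMUFile *f, VFIODevice *vbasedev)
{
    uint64_t flag, data_size;
    uint8_t buf[4096];

    flag = qemu_get_be64(f);
    if (flag != VFIO_MIG_FLAG_DEV_DATA_STATE) {
        return -EINVAL;                      /* unexpected section contents */
    }

    data_size = qemu_get_be64(f);            /* size written by vfio_save_buffer() */
    while (data_size) {
        uint64_t chunk = MIN(data_size, sizeof(buf));

        qemu_get_buffer(f, buf, chunk);      /* payload bytes */
        /* write_device_data(vbasedev, buf, chunk);   hypothetical */
        data_size -= chunk;
    }

    if (qemu_get_be64(f) != VFIO_MIG_FLAG_END_OF_STATE) {
        return -EINVAL;                      /* stream out of sync */
    }

    return qemu_file_get_error(f);
}

With that framing in mind, Zhi's fix (emitting a zero size plus VFIO_MIG_FLAG_END_OF_STATE on the early-exit path) keeps such a reader in sync even when a device has nothing to send during the iterative phase.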
On Wed, 23 Sep 2020 04:54:10 +0530 Kirti Wankhede <kwankhede@nvidia.com> wrote: > Added .save_live_pending, .save_live_iterate and .save_live_complete_precopy > functions. These functions handles pre-copy and stop-and-copy phase. > > In _SAVING|_RUNNING device state or pre-copy phase: > - read pending_bytes. If pending_bytes > 0, go through below steps. > - read data_offset - indicates kernel driver to write data to staging > buffer. > - read data_size - amount of data in bytes written by vendor driver in > migration region. > - read data_size bytes of data from data_offset in the migration region. > - Write data packet to file stream as below: > {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data, > VFIO_MIG_FLAG_END_OF_STATE } > > In _SAVING device state or stop-and-copy phase > a. read config space of device and save to migration file stream. This > doesn't need to be from vendor driver. Any other special config state > from driver can be saved as data in following iteration. > b. read pending_bytes. If pending_bytes > 0, go through below steps. > c. read data_offset - indicates kernel driver to write data to staging > buffer. > d. read data_size - amount of data in bytes written by vendor driver in > migration region. > e. read data_size bytes of data from data_offset in the migration region. > f. Write data packet as below: > {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data} > g. iterate through steps b to f while (pending_bytes > 0) > h. Write {VFIO_MIG_FLAG_END_OF_STATE} > > When data region is mapped, its user's responsibility to read data from > data_offset of data_size before moving to next steps. > > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com> > Reviewed-by: Neo Jia <cjia@nvidia.com> > --- > hw/vfio/migration.c | 273 ++++++++++++++++++++++++++++++++++++++++++ > hw/vfio/trace-events | 6 + > include/hw/vfio/vfio-common.h | 1 + > 3 files changed, 280 insertions(+) > > diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c > index 8e8adaa25779..4611bb972228 100644 > --- a/hw/vfio/migration.c > +++ b/hw/vfio/migration.c > @@ -180,6 +180,154 @@ static int vfio_migration_set_state(VFIODevice *vbasedev, uint32_t mask, > return 0; > } > > +static void *get_data_section_size(VFIORegion *region, uint64_t data_offset, > + uint64_t data_size, uint64_t *size) > +{ > + void *ptr = NULL; > + uint64_t limit = 0; > + int i; > + > + if (!region->mmaps) { > + if (size) { > + *size = data_size; > + } > + return ptr; > + } > + > + for (i = 0; i < region->nr_mmaps; i++) { > + VFIOMmap *map = region->mmaps + i; > + > + if ((data_offset >= map->offset) && > + (data_offset < map->offset + map->size)) { > + > + /* check if data_offset is within sparse mmap areas */ > + ptr = map->mmap + data_offset - map->offset; > + if (size) { > + *size = MIN(data_size, map->offset + map->size - data_offset); > + } > + break; > + } else if ((data_offset < map->offset) && > + (!limit || limit > map->offset)) { > + /* > + * data_offset is not within sparse mmap areas, find size of > + * non-mapped area. Check through all list since region->mmaps list > + * is not sorted. > + */ > + limit = map->offset; > + } > + } > + > + if (!ptr && size) { > + *size = limit ? limit - data_offset : data_size; 'limit - data_offset' doesn't take data_size into account, this should be MIN(data_size, limit - data_offset). 
> + } > + return ptr; > +} > + > +static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev, uint64_t *size) > +{ > + VFIOMigration *migration = vbasedev->migration; > + VFIORegion *region = &migration->region; > + uint64_t data_offset = 0, data_size = 0, sz; > + int ret; > + > + ret = vfio_mig_read(vbasedev, &data_offset, sizeof(data_offset), > + region->fd_offset + offsetof(struct vfio_device_migration_info, > + data_offset)); > + if (ret < 0) { > + return ret; > + } > + > + ret = vfio_mig_read(vbasedev, &data_size, sizeof(data_size), > + region->fd_offset + offsetof(struct vfio_device_migration_info, > + data_size)); > + if (ret < 0) { > + return ret; > + } > + > + trace_vfio_save_buffer(vbasedev->name, data_offset, data_size, > + migration->pending_bytes); > + > + qemu_put_be64(f, data_size); > + sz = data_size; > + > + while (sz) { > + void *buf = NULL; Unnecessary initialization. > + uint64_t sec_size; > + bool buf_allocated = false; > + > + buf = get_data_section_size(region, data_offset, sz, &sec_size); > + > + if (!buf) { > + buf = g_try_malloc(sec_size); > + if (!buf) { > + error_report("%s: Error allocating buffer ", __func__); > + return -ENOMEM; > + } > + buf_allocated = true; > + > + ret = vfio_mig_read(vbasedev, buf, sec_size, > + region->fd_offset + data_offset); > + if (ret < 0) { > + g_free(buf); > + return ret; > + } > + } > + > + qemu_put_buffer(f, buf, sec_size); > + > + if (buf_allocated) { > + g_free(buf); > + } > + sz -= sec_size; > + data_offset += sec_size; > + } > + > + ret = qemu_file_get_error(f); > + > + if (!ret && size) { > + *size = data_size; > + } > + > + return ret; > +} > + > +static int vfio_update_pending(VFIODevice *vbasedev) > +{ > + VFIOMigration *migration = vbasedev->migration; > + VFIORegion *region = &migration->region; > + uint64_t pending_bytes = 0; > + int ret; > + > + ret = vfio_mig_read(vbasedev, &pending_bytes, sizeof(pending_bytes), > + region->fd_offset + offsetof(struct vfio_device_migration_info, > + pending_bytes)); > + if (ret < 0) { > + migration->pending_bytes = 0; > + return ret; > + } > + > + migration->pending_bytes = pending_bytes; > + trace_vfio_update_pending(vbasedev->name, pending_bytes); > + return 0; > +} > + > +static int vfio_save_device_config_state(QEMUFile *f, void *opaque) > +{ > + VFIODevice *vbasedev = opaque; > + > + qemu_put_be64(f, VFIO_MIG_FLAG_DEV_CONFIG_STATE); > + > + if (vbasedev->ops && vbasedev->ops->vfio_save_config) { > + vbasedev->ops->vfio_save_config(vbasedev, f); > + } > + > + qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); > + > + trace_vfio_save_device_config_state(vbasedev->name); > + > + return qemu_file_get_error(f); > +} > + > /* ---------------------------------------------------------------------- */ > > static int vfio_save_setup(QEMUFile *f, void *opaque) > @@ -232,9 +380,134 @@ static void vfio_save_cleanup(void *opaque) > trace_vfio_save_cleanup(vbasedev->name); > } > > +static void vfio_save_pending(QEMUFile *f, void *opaque, > + uint64_t threshold_size, > + uint64_t *res_precopy_only, > + uint64_t *res_compatible, > + uint64_t *res_postcopy_only) > +{ > + VFIODevice *vbasedev = opaque; > + VFIOMigration *migration = vbasedev->migration; > + int ret; > + > + ret = vfio_update_pending(vbasedev); > + if (ret) { > + return; > + } > + > + *res_precopy_only += migration->pending_bytes; > + > + trace_vfio_save_pending(vbasedev->name, *res_precopy_only, > + *res_postcopy_only, *res_compatible); > +} > + > +static int vfio_save_iterate(QEMUFile *f, void *opaque) > +{ > + VFIODevice 
*vbasedev = opaque; > + VFIOMigration *migration = vbasedev->migration; > + uint64_t data_size; > + int ret; > + > + qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE); > + > + if (migration->pending_bytes == 0) { > + ret = vfio_update_pending(vbasedev); > + if (ret) { > + return ret; > + } > + > + if (migration->pending_bytes == 0) { > + /* indicates data finished, goto complete phase */ > + return 1; > + } > + } > + > + ret = vfio_save_buffer(f, vbasedev, &data_size); > + > + if (ret) { > + error_report("%s: vfio_save_buffer failed %s", vbasedev->name, > + strerror(errno)); > + return ret; > + } > + > + qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); > + > + ret = qemu_file_get_error(f); > + if (ret) { > + return ret; > + } > + > + trace_vfio_save_iterate(vbasedev->name, data_size); > + > + return 0; > +} > + > +static int vfio_save_complete_precopy(QEMUFile *f, void *opaque) > +{ > + VFIODevice *vbasedev = opaque; > + VFIOMigration *migration = vbasedev->migration; > + uint64_t data_size; > + int ret; > + > + ret = vfio_migration_set_state(vbasedev, ~VFIO_DEVICE_STATE_RUNNING, > + VFIO_DEVICE_STATE_SAVING); > + if (ret) { > + error_report("%s: Failed to set state STOP and SAVING", > + vbasedev->name); > + return ret; > + } This also assumes success implies desired state. > + > + ret = vfio_save_device_config_state(f, opaque); > + if (ret) { > + return ret; > + } > + > + ret = vfio_update_pending(vbasedev); > + if (ret) { > + return ret; > + } > + > + while (migration->pending_bytes > 0) { > + qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE); > + ret = vfio_save_buffer(f, vbasedev, &data_size); > + if (ret < 0) { > + error_report("%s: Failed to save buffer", vbasedev->name); > + return ret; > + } > + > + if (data_size == 0) { > + break; > + } > + > + ret = vfio_update_pending(vbasedev); > + if (ret) { > + return ret; > + } > + } > + > + qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); > + > + ret = qemu_file_get_error(f); > + if (ret) { > + return ret; > + } > + > + ret = vfio_migration_set_state(vbasedev, ~VFIO_DEVICE_STATE_SAVING, 0); > + if (ret) { > + error_report("%s: Failed to set state STOPPED", vbasedev->name); > + return ret; > + } And another. 
Thanks, Alex > + > + trace_vfio_save_complete_precopy(vbasedev->name); > + return ret; > +} > + > static SaveVMHandlers savevm_vfio_handlers = { > .save_setup = vfio_save_setup, > .save_cleanup = vfio_save_cleanup, > + .save_live_pending = vfio_save_pending, > + .save_live_iterate = vfio_save_iterate, > + .save_live_complete_precopy = vfio_save_complete_precopy, > }; > > /* ---------------------------------------------------------------------- */ > diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events > index 982d8dccb219..118b5547c921 100644 > --- a/hw/vfio/trace-events > +++ b/hw/vfio/trace-events > @@ -154,3 +154,9 @@ vfio_vmstate_change(const char *name, int running, const char *reason, uint32_t > vfio_migration_state_notifier(const char *name, const char *state) " (%s) state %s" > vfio_save_setup(const char *name) " (%s)" > vfio_save_cleanup(const char *name) " (%s)" > +vfio_save_buffer(const char *name, uint64_t data_offset, uint64_t data_size, uint64_t pending) " (%s) Offset 0x%"PRIx64" size 0x%"PRIx64" pending 0x%"PRIx64 > +vfio_update_pending(const char *name, uint64_t pending) " (%s) pending 0x%"PRIx64 > +vfio_save_device_config_state(const char *name) " (%s)" > +vfio_save_pending(const char *name, uint64_t precopy, uint64_t postcopy, uint64_t compatible) " (%s) precopy 0x%"PRIx64" postcopy 0x%"PRIx64" compatible 0x%"PRIx64 > +vfio_save_iterate(const char *name, int data_size) " (%s) data_size %d" > +vfio_save_complete_precopy(const char *name) " (%s)" > diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h > index 49c7c7a0e29a..471e444a364c 100644 > --- a/include/hw/vfio/vfio-common.h > +++ b/include/hw/vfio/vfio-common.h > @@ -60,6 +60,7 @@ typedef struct VFIORegion { > > typedef struct VFIOMigration { > VFIORegion region; > + uint64_t pending_bytes; > } VFIOMigration; > > typedef struct VFIOAddressSpace {
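Alex's first comment maps to a one-line change: the tail of get_data_section_size() would presumably become something like the following sketch of the suggested fix (not the committed code).

    if (!ptr && size) {
        /* bound the unmapped chunk by both the remaining data and the next mmap'd area */
        *size = limit ? MIN(data_size, limit - data_offset) : data_size;
    }
    return ptr;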
On 9/26/2020 2:32 AM, Alex Williamson wrote: > On Wed, 23 Sep 2020 04:54:10 +0530 > Kirti Wankhede <kwankhede@nvidia.com> wrote: > >> Added .save_live_pending, .save_live_iterate and .save_live_complete_precopy >> functions. These functions handles pre-copy and stop-and-copy phase. >> >> In _SAVING|_RUNNING device state or pre-copy phase: >> - read pending_bytes. If pending_bytes > 0, go through below steps. >> - read data_offset - indicates kernel driver to write data to staging >> buffer. >> - read data_size - amount of data in bytes written by vendor driver in >> migration region. >> - read data_size bytes of data from data_offset in the migration region. >> - Write data packet to file stream as below: >> {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data, >> VFIO_MIG_FLAG_END_OF_STATE } >> >> In _SAVING device state or stop-and-copy phase >> a. read config space of device and save to migration file stream. This >> doesn't need to be from vendor driver. Any other special config state >> from driver can be saved as data in following iteration. >> b. read pending_bytes. If pending_bytes > 0, go through below steps. >> c. read data_offset - indicates kernel driver to write data to staging >> buffer. >> d. read data_size - amount of data in bytes written by vendor driver in >> migration region. >> e. read data_size bytes of data from data_offset in the migration region. >> f. Write data packet as below: >> {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data} >> g. iterate through steps b to f while (pending_bytes > 0) >> h. Write {VFIO_MIG_FLAG_END_OF_STATE} >> >> When data region is mapped, its user's responsibility to read data from >> data_offset of data_size before moving to next steps. >> >> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com> >> Reviewed-by: Neo Jia <cjia@nvidia.com> >> --- >> hw/vfio/migration.c | 273 ++++++++++++++++++++++++++++++++++++++++++ >> hw/vfio/trace-events | 6 + >> include/hw/vfio/vfio-common.h | 1 + >> 3 files changed, 280 insertions(+) >> >> diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c >> index 8e8adaa25779..4611bb972228 100644 >> --- a/hw/vfio/migration.c >> +++ b/hw/vfio/migration.c >> @@ -180,6 +180,154 @@ static int vfio_migration_set_state(VFIODevice *vbasedev, uint32_t mask, >> return 0; >> } >> >> +static void *get_data_section_size(VFIORegion *region, uint64_t data_offset, >> + uint64_t data_size, uint64_t *size) >> +{ >> + void *ptr = NULL; >> + uint64_t limit = 0; >> + int i; >> + >> + if (!region->mmaps) { >> + if (size) { >> + *size = data_size; >> + } >> + return ptr; >> + } >> + >> + for (i = 0; i < region->nr_mmaps; i++) { >> + VFIOMmap *map = region->mmaps + i; >> + >> + if ((data_offset >= map->offset) && >> + (data_offset < map->offset + map->size)) { >> + >> + /* check if data_offset is within sparse mmap areas */ >> + ptr = map->mmap + data_offset - map->offset; >> + if (size) { >> + *size = MIN(data_size, map->offset + map->size - data_offset); >> + } >> + break; >> + } else if ((data_offset < map->offset) && >> + (!limit || limit > map->offset)) { >> + /* >> + * data_offset is not within sparse mmap areas, find size of >> + * non-mapped area. Check through all list since region->mmaps list >> + * is not sorted. >> + */ >> + limit = map->offset; >> + } >> + } >> + >> + if (!ptr && size) { >> + *size = limit ? limit - data_offset : data_size; > > 'limit - data_offset' doesn't take data_size into account, this should > be MIN(data_size, limit - data_offset). > Done. 
>> + } >> + return ptr; >> +} >> + >> +static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev, uint64_t *size) >> +{ >> + VFIOMigration *migration = vbasedev->migration; >> + VFIORegion *region = &migration->region; >> + uint64_t data_offset = 0, data_size = 0, sz; >> + int ret; >> + >> + ret = vfio_mig_read(vbasedev, &data_offset, sizeof(data_offset), >> + region->fd_offset + offsetof(struct vfio_device_migration_info, >> + data_offset)); >> + if (ret < 0) { >> + return ret; >> + } >> + >> + ret = vfio_mig_read(vbasedev, &data_size, sizeof(data_size), >> + region->fd_offset + offsetof(struct vfio_device_migration_info, >> + data_size)); >> + if (ret < 0) { >> + return ret; >> + } >> + >> + trace_vfio_save_buffer(vbasedev->name, data_offset, data_size, >> + migration->pending_bytes); >> + >> + qemu_put_be64(f, data_size); >> + sz = data_size; >> + >> + while (sz) { >> + void *buf = NULL; > > Unnecessary initialization. > >> + uint64_t sec_size; >> + bool buf_allocated = false; >> + >> + buf = get_data_section_size(region, data_offset, sz, &sec_size); >> + >> + if (!buf) { >> + buf = g_try_malloc(sec_size); >> + if (!buf) { >> + error_report("%s: Error allocating buffer ", __func__); >> + return -ENOMEM; >> + } >> + buf_allocated = true; >> + >> + ret = vfio_mig_read(vbasedev, buf, sec_size, >> + region->fd_offset + data_offset); >> + if (ret < 0) { >> + g_free(buf); >> + return ret; >> + } >> + } >> + >> + qemu_put_buffer(f, buf, sec_size); >> + >> + if (buf_allocated) { >> + g_free(buf); >> + } >> + sz -= sec_size; >> + data_offset += sec_size; >> + } >> + >> + ret = qemu_file_get_error(f); >> + >> + if (!ret && size) { >> + *size = data_size; >> + } >> + >> + return ret; >> +} >> + >> +static int vfio_update_pending(VFIODevice *vbasedev) >> +{ >> + VFIOMigration *migration = vbasedev->migration; >> + VFIORegion *region = &migration->region; >> + uint64_t pending_bytes = 0; >> + int ret; >> + >> + ret = vfio_mig_read(vbasedev, &pending_bytes, sizeof(pending_bytes), >> + region->fd_offset + offsetof(struct vfio_device_migration_info, >> + pending_bytes)); >> + if (ret < 0) { >> + migration->pending_bytes = 0; >> + return ret; >> + } >> + >> + migration->pending_bytes = pending_bytes; >> + trace_vfio_update_pending(vbasedev->name, pending_bytes); >> + return 0; >> +} >> + >> +static int vfio_save_device_config_state(QEMUFile *f, void *opaque) >> +{ >> + VFIODevice *vbasedev = opaque; >> + >> + qemu_put_be64(f, VFIO_MIG_FLAG_DEV_CONFIG_STATE); >> + >> + if (vbasedev->ops && vbasedev->ops->vfio_save_config) { >> + vbasedev->ops->vfio_save_config(vbasedev, f); >> + } >> + >> + qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); >> + >> + trace_vfio_save_device_config_state(vbasedev->name); >> + >> + return qemu_file_get_error(f); >> +} >> + >> /* ---------------------------------------------------------------------- */ >> >> static int vfio_save_setup(QEMUFile *f, void *opaque) >> @@ -232,9 +380,134 @@ static void vfio_save_cleanup(void *opaque) >> trace_vfio_save_cleanup(vbasedev->name); >> } >> >> +static void vfio_save_pending(QEMUFile *f, void *opaque, >> + uint64_t threshold_size, >> + uint64_t *res_precopy_only, >> + uint64_t *res_compatible, >> + uint64_t *res_postcopy_only) >> +{ >> + VFIODevice *vbasedev = opaque; >> + VFIOMigration *migration = vbasedev->migration; >> + int ret; >> + >> + ret = vfio_update_pending(vbasedev); >> + if (ret) { >> + return; >> + } >> + >> + *res_precopy_only += migration->pending_bytes; >> + >> + trace_vfio_save_pending(vbasedev->name, 
*res_precopy_only, >> + *res_postcopy_only, *res_compatible); >> +} >> + >> +static int vfio_save_iterate(QEMUFile *f, void *opaque) >> +{ >> + VFIODevice *vbasedev = opaque; >> + VFIOMigration *migration = vbasedev->migration; >> + uint64_t data_size; >> + int ret; >> + >> + qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE); >> + >> + if (migration->pending_bytes == 0) { >> + ret = vfio_update_pending(vbasedev); >> + if (ret) { >> + return ret; >> + } >> + >> + if (migration->pending_bytes == 0) { >> + /* indicates data finished, goto complete phase */ >> + return 1; >> + } >> + } >> + >> + ret = vfio_save_buffer(f, vbasedev, &data_size); >> + >> + if (ret) { >> + error_report("%s: vfio_save_buffer failed %s", vbasedev->name, >> + strerror(errno)); >> + return ret; >> + } >> + >> + qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); >> + >> + ret = qemu_file_get_error(f); >> + if (ret) { >> + return ret; >> + } >> + >> + trace_vfio_save_iterate(vbasedev->name, data_size); >> + >> + return 0; >> +} >> + >> +static int vfio_save_complete_precopy(QEMUFile *f, void *opaque) >> +{ >> + VFIODevice *vbasedev = opaque; >> + VFIOMigration *migration = vbasedev->migration; >> + uint64_t data_size; >> + int ret; >> + >> + ret = vfio_migration_set_state(vbasedev, ~VFIO_DEVICE_STATE_RUNNING, >> + VFIO_DEVICE_STATE_SAVING); >> + if (ret) { >> + error_report("%s: Failed to set state STOP and SAVING", >> + vbasedev->name); >> + return ret; >> + } > > This also assumes success implies desired state. > >> + >> + ret = vfio_save_device_config_state(f, opaque); >> + if (ret) { >> + return ret; >> + } >> + >> + ret = vfio_update_pending(vbasedev); >> + if (ret) { >> + return ret; >> + } >> + >> + while (migration->pending_bytes > 0) { >> + qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE); >> + ret = vfio_save_buffer(f, vbasedev, &data_size); >> + if (ret < 0) { >> + error_report("%s: Failed to save buffer", vbasedev->name); >> + return ret; >> + } >> + >> + if (data_size == 0) { >> + break; >> + } >> + >> + ret = vfio_update_pending(vbasedev); >> + if (ret) { >> + return ret; >> + } >> + } >> + >> + qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); >> + >> + ret = qemu_file_get_error(f); >> + if (ret) { >> + return ret; >> + } >> + >> + ret = vfio_migration_set_state(vbasedev, ~VFIO_DEVICE_STATE_SAVING, 0); >> + if (ret) { >> + error_report("%s: Failed to set state STOPPED", vbasedev->name); >> + return ret; >> + } > > And another. Thanks, > Fixing in patch 5. 
Thanks, Kirti > Alex > >> + >> + trace_vfio_save_complete_precopy(vbasedev->name); >> + return ret; >> +} >> + >> static SaveVMHandlers savevm_vfio_handlers = { >> .save_setup = vfio_save_setup, >> .save_cleanup = vfio_save_cleanup, >> + .save_live_pending = vfio_save_pending, >> + .save_live_iterate = vfio_save_iterate, >> + .save_live_complete_precopy = vfio_save_complete_precopy, >> }; >> >> /* ---------------------------------------------------------------------- */ >> diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events >> index 982d8dccb219..118b5547c921 100644 >> --- a/hw/vfio/trace-events >> +++ b/hw/vfio/trace-events >> @@ -154,3 +154,9 @@ vfio_vmstate_change(const char *name, int running, const char *reason, uint32_t >> vfio_migration_state_notifier(const char *name, const char *state) " (%s) state %s" >> vfio_save_setup(const char *name) " (%s)" >> vfio_save_cleanup(const char *name) " (%s)" >> +vfio_save_buffer(const char *name, uint64_t data_offset, uint64_t data_size, uint64_t pending) " (%s) Offset 0x%"PRIx64" size 0x%"PRIx64" pending 0x%"PRIx64 >> +vfio_update_pending(const char *name, uint64_t pending) " (%s) pending 0x%"PRIx64 >> +vfio_save_device_config_state(const char *name) " (%s)" >> +vfio_save_pending(const char *name, uint64_t precopy, uint64_t postcopy, uint64_t compatible) " (%s) precopy 0x%"PRIx64" postcopy 0x%"PRIx64" compatible 0x%"PRIx64 >> +vfio_save_iterate(const char *name, int data_size) " (%s) data_size %d" >> +vfio_save_complete_precopy(const char *name) " (%s)" >> diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h >> index 49c7c7a0e29a..471e444a364c 100644 >> --- a/include/hw/vfio/vfio-common.h >> +++ b/include/hw/vfio/vfio-common.h >> @@ -60,6 +60,7 @@ typedef struct VFIORegion { >> >> typedef struct VFIOMigration { >> VFIORegion region; >> + uint64_t pending_bytes; >> } VFIOMigration; >> >> typedef struct VFIOAddressSpace { >
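For context, the two "assumes success implies desired state" remarks are about vfio_migration_set_state() trusting that a successful write of device_state means the vendor driver actually entered that state. The kind of read-back check being discussed might look roughly like the sketch below, placed after the write inside vfio_migration_set_state(). This is only an illustration of the idea (the actual change is made in patch 5 of the series); it assumes the device_state field of the v1 struct vfio_device_migration_info and the vfio_mig_read() helper already used in this file, and device_state here stands for the value the function just wrote.

    uint32_t readback = 0;

    ret = vfio_mig_read(vbasedev, &readback, sizeof(readback),
                        migration->region.fd_offset +
                        offsetof(struct vfio_device_migration_info,
                                 device_state));
    if (ret < 0) {
        return ret;
    }

    if (readback != device_state) {
        /* the write succeeded, but the vendor driver settled in another state */
        error_report("%s: device is in state 0x%x, expected 0x%x",
                     vbasedev->name, readback, device_state);
        return -EIO;
    }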
On 9/23/2020 5:12 PM, Wang, Zhi A wrote:
> I met a problem when trying this patch. Mostly the problem happens if a device doesn't set any pending bytes in the iteration stage, which shows this device doesn't have a stage of iteration.

I tried to reproduce this issue at my end and didn't hit this case, even with multiple iterations in the pre-copy phase of live migration. In the pre-copy phase, the vendor driver receives a read(pending_bytes) callback from .save_live_pending; if the vendor driver doesn't have any data to send, it returns 0 as pending_bytes. But I could reproduce this issue with savevm/VM snapshot from the QEMU monitor. I agree with your fix and am pulling it into my next iteration.

> The QEMU in the destination machine will complain out-of-memory. After some investigation, it seems the vendor-specific bit stream is not complete and the QEMU in the destination machine will wrongly take a signature as the size of the section and failed to allocate the memory. Not sure if others meet the same problem.
>
> I solved this problem by the following fix and the qemu version I am using is v5.0.0.0.

This alone didn't fix the problem for me; I also have to reset pending_bytes from vfio_save_iterate(), as .save_live_pending is not called during savevm/snapshot.

Thanks for testing.
Kirti

> commit 13a80adc2cdddd48d76acf6a5dd715bcbf42b577
> Author: Zhi Wang <zhi.wang.linux@gmail.com>
> Date:   Tue Sep 15 15:58:45 2020 +0300
>
>     fix
>
>     Signed-off-by: Zhi Wang <zhi.wang.linux@gmail.com>
>
> diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
> index 09eec9c..e741319 100644
> --- a/hw/vfio/migration.c
> +++ b/hw/vfio/migration.c
> @@ -453,10 +458,12 @@ static int vfio_save_iterate(QEMUFile *f, void *opaque)
>              return ret;
>          }
>
> -        if (migration->pending_bytes == 0) {
> -            /* indicates data finished, goto complete phase */
> -            return 1;
> -        }
> +        if (migration->pending_bytes == 0) {
> +            /* indicates data finished, goto complete phase */
> +            qemu_put_be64(f, 0);
> +            qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> +            return 1;
> +        }
>      }
>
>      data_size = vfio_save_buffer(f, vbasedev);
>
> -----Original Message-----
> From: Kirti Wankhede <kwankhede@nvidia.com>
> Sent: Wednesday, September 23, 2020 2:24 AM
> To: alex.williamson@redhat.com; cjia@nvidia.com
> Cc: Tian, Kevin <kevin.tian@intel.com>; Yang, Ziye <ziye.yang@intel.com>; Liu, Changpeng <changpeng.liu@intel.com>; Liu, Yi L <yi.l.liu@intel.com>; mlevitsk@redhat.com; eskultet@redhat.com; cohuck@redhat.com; dgilbert@redhat.com; jonathan.davies@nutanix.com; eauger@redhat.com; aik@ozlabs.ru; pasic@linux.ibm.com; felipe@nutanix.com; Zhengxiao.zx@Alibaba-inc.com; shuangtai.tst@alibaba-inc.com; Ken.Xue@amd.com; Wang, Zhi A <zhi.a.wang@intel.com>; Zhao, Yan Y <yan.y.zhao@intel.com>; pbonzini@redhat.com; quintela@redhat.com; eblake@redhat.com; armbru@redhat.com; peterx@redhat.com; qemu-devel@nongnu.org; Kirti Wankhede <kwankhede@nvidia.com>
> Subject: [PATCH v26 08/17] vfio: Add save state functions to SaveVMHandlers
>
> Added .save_live_pending, .save_live_iterate and .save_live_complete_precopy functions. These functions handles pre-copy and stop-and-copy phase.
>
> In _SAVING|_RUNNING device state or pre-copy phase:
> - read pending_bytes. If pending_bytes > 0, go through below steps.
> - read data_offset - indicates kernel driver to write data to staging
>   buffer.
> - read data_size - amount of data in bytes written by vendor driver in
>   migration region.
> - read data_size bytes of data from data_offset in the migration region.
> - Write data packet to file stream as below: > {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data, VFIO_MIG_FLAG_END_OF_STATE } > > In _SAVING device state or stop-and-copy phase a. read config space of device and save to migration file stream. This > doesn't need to be from vendor driver. Any other special config state > from driver can be saved as data in following iteration. > b. read pending_bytes. If pending_bytes > 0, go through below steps. > c. read data_offset - indicates kernel driver to write data to staging > buffer. > d. read data_size - amount of data in bytes written by vendor driver in > migration region. > e. read data_size bytes of data from data_offset in the migration region. > f. Write data packet as below: > {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data} g. iterate through steps b to f while (pending_bytes > 0) h. Write {VFIO_MIG_FLAG_END_OF_STATE} > > When data region is mapped, its user's responsibility to read data from data_offset of data_size before moving to next steps. > > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com> > Reviewed-by: Neo Jia <cjia@nvidia.com> > --- > hw/vfio/migration.c | 273 ++++++++++++++++++++++++++++++++++++++++++ > hw/vfio/trace-events | 6 + > include/hw/vfio/vfio-common.h | 1 + > 3 files changed, 280 insertions(+) > > diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c index 8e8adaa25779..4611bb972228 100644 > --- a/hw/vfio/migration.c > +++ b/hw/vfio/migration.c > @@ -180,6 +180,154 @@ static int vfio_migration_set_state(VFIODevice *vbasedev, uint32_t mask, > return 0; > } > > +static void *get_data_section_size(VFIORegion *region, uint64_t data_offset, > + uint64_t data_size, uint64_t *size) > +{ > + void *ptr = NULL; > + uint64_t limit = 0; > + int i; > + > + if (!region->mmaps) { > + if (size) { > + *size = data_size; > + } > + return ptr; > + } > + > + for (i = 0; i < region->nr_mmaps; i++) { > + VFIOMmap *map = region->mmaps + i; > + > + if ((data_offset >= map->offset) && > + (data_offset < map->offset + map->size)) { > + > + /* check if data_offset is within sparse mmap areas */ > + ptr = map->mmap + data_offset - map->offset; > + if (size) { > + *size = MIN(data_size, map->offset + map->size - data_offset); > + } > + break; > + } else if ((data_offset < map->offset) && > + (!limit || limit > map->offset)) { > + /* > + * data_offset is not within sparse mmap areas, find size of > + * non-mapped area. Check through all list since region->mmaps list > + * is not sorted. > + */ > + limit = map->offset; > + } > + } > + > + if (!ptr && size) { > + *size = limit ? 
limit - data_offset : data_size; > + } > + return ptr; > +} > + > +static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev, uint64_t > +*size) { > + VFIOMigration *migration = vbasedev->migration; > + VFIORegion *region = &migration->region; > + uint64_t data_offset = 0, data_size = 0, sz; > + int ret; > + > + ret = vfio_mig_read(vbasedev, &data_offset, sizeof(data_offset), > + region->fd_offset + offsetof(struct vfio_device_migration_info, > + data_offset)); > + if (ret < 0) { > + return ret; > + } > + > + ret = vfio_mig_read(vbasedev, &data_size, sizeof(data_size), > + region->fd_offset + offsetof(struct vfio_device_migration_info, > + data_size)); > + if (ret < 0) { > + return ret; > + } > + > + trace_vfio_save_buffer(vbasedev->name, data_offset, data_size, > + migration->pending_bytes); > + > + qemu_put_be64(f, data_size); > + sz = data_size; > + > + while (sz) { > + void *buf = NULL; > + uint64_t sec_size; > + bool buf_allocated = false; > + > + buf = get_data_section_size(region, data_offset, sz, > + &sec_size); > + > + if (!buf) { > + buf = g_try_malloc(sec_size); > + if (!buf) { > + error_report("%s: Error allocating buffer ", __func__); > + return -ENOMEM; > + } > + buf_allocated = true; > + > + ret = vfio_mig_read(vbasedev, buf, sec_size, > + region->fd_offset + data_offset); > + if (ret < 0) { > + g_free(buf); > + return ret; > + } > + } > + > + qemu_put_buffer(f, buf, sec_size); > + > + if (buf_allocated) { > + g_free(buf); > + } > + sz -= sec_size; > + data_offset += sec_size; > + } > + > + ret = qemu_file_get_error(f); > + > + if (!ret && size) { > + *size = data_size; > + } > + > + return ret; > +} > + > +static int vfio_update_pending(VFIODevice *vbasedev) { > + VFIOMigration *migration = vbasedev->migration; > + VFIORegion *region = &migration->region; > + uint64_t pending_bytes = 0; > + int ret; > + > + ret = vfio_mig_read(vbasedev, &pending_bytes, sizeof(pending_bytes), > + region->fd_offset + offsetof(struct vfio_device_migration_info, > + pending_bytes)); > + if (ret < 0) { > + migration->pending_bytes = 0; > + return ret; > + } > + > + migration->pending_bytes = pending_bytes; > + trace_vfio_update_pending(vbasedev->name, pending_bytes); > + return 0; > +} > + > +static int vfio_save_device_config_state(QEMUFile *f, void *opaque) { > + VFIODevice *vbasedev = opaque; > + > + qemu_put_be64(f, VFIO_MIG_FLAG_DEV_CONFIG_STATE); > + > + if (vbasedev->ops && vbasedev->ops->vfio_save_config) { > + vbasedev->ops->vfio_save_config(vbasedev, f); > + } > + > + qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); > + > + trace_vfio_save_device_config_state(vbasedev->name); > + > + return qemu_file_get_error(f); > +} > + > /* ---------------------------------------------------------------------- */ > > static int vfio_save_setup(QEMUFile *f, void *opaque) @@ -232,9 +380,134 @@ static void vfio_save_cleanup(void *opaque) > trace_vfio_save_cleanup(vbasedev->name); > } > > +static void vfio_save_pending(QEMUFile *f, void *opaque, > + uint64_t threshold_size, > + uint64_t *res_precopy_only, > + uint64_t *res_compatible, > + uint64_t *res_postcopy_only) { > + VFIODevice *vbasedev = opaque; > + VFIOMigration *migration = vbasedev->migration; > + int ret; > + > + ret = vfio_update_pending(vbasedev); > + if (ret) { > + return; > + } > + > + *res_precopy_only += migration->pending_bytes; > + > + trace_vfio_save_pending(vbasedev->name, *res_precopy_only, > + *res_postcopy_only, *res_compatible); } > + > +static int vfio_save_iterate(QEMUFile *f, void *opaque) { > + VFIODevice *vbasedev = 
opaque; > + VFIOMigration *migration = vbasedev->migration; > + uint64_t data_size; > + int ret; > + > + qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE); > + > + if (migration->pending_bytes == 0) { > + ret = vfio_update_pending(vbasedev); > + if (ret) { > + return ret; > + } > + > + if (migration->pending_bytes == 0) { > + /* indicates data finished, goto complete phase */ > + return 1; > + } > + } > + > + ret = vfio_save_buffer(f, vbasedev, &data_size); > + > + if (ret) { > + error_report("%s: vfio_save_buffer failed %s", vbasedev->name, > + strerror(errno)); > + return ret; > + } > + > + qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); > + > + ret = qemu_file_get_error(f); > + if (ret) { > + return ret; > + } > + > + trace_vfio_save_iterate(vbasedev->name, data_size); > + > + return 0; > +} > + > +static int vfio_save_complete_precopy(QEMUFile *f, void *opaque) { > + VFIODevice *vbasedev = opaque; > + VFIOMigration *migration = vbasedev->migration; > + uint64_t data_size; > + int ret; > + > + ret = vfio_migration_set_state(vbasedev, ~VFIO_DEVICE_STATE_RUNNING, > + VFIO_DEVICE_STATE_SAVING); > + if (ret) { > + error_report("%s: Failed to set state STOP and SAVING", > + vbasedev->name); > + return ret; > + } > + > + ret = vfio_save_device_config_state(f, opaque); > + if (ret) { > + return ret; > + } > + > + ret = vfio_update_pending(vbasedev); > + if (ret) { > + return ret; > + } > + > + while (migration->pending_bytes > 0) { > + qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE); > + ret = vfio_save_buffer(f, vbasedev, &data_size); > + if (ret < 0) { > + error_report("%s: Failed to save buffer", vbasedev->name); > + return ret; > + } > + > + if (data_size == 0) { > + break; > + } > + > + ret = vfio_update_pending(vbasedev); > + if (ret) { > + return ret; > + } > + } > + > + qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); > + > + ret = qemu_file_get_error(f); > + if (ret) { > + return ret; > + } > + > + ret = vfio_migration_set_state(vbasedev, ~VFIO_DEVICE_STATE_SAVING, 0); > + if (ret) { > + error_report("%s: Failed to set state STOPPED", vbasedev->name); > + return ret; > + } > + > + trace_vfio_save_complete_precopy(vbasedev->name); > + return ret; > +} > + > static SaveVMHandlers savevm_vfio_handlers = { > .save_setup = vfio_save_setup, > .save_cleanup = vfio_save_cleanup, > + .save_live_pending = vfio_save_pending, > + .save_live_iterate = vfio_save_iterate, > + .save_live_complete_precopy = vfio_save_complete_precopy, > }; > > /* ---------------------------------------------------------------------- */ diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events index 982d8dccb219..118b5547c921 100644 > --- a/hw/vfio/trace-events > +++ b/hw/vfio/trace-events > @@ -154,3 +154,9 @@ vfio_vmstate_change(const char *name, int running, const char *reason, uint32_t vfio_migration_state_notifier(const char *name, const char *state) " (%s) state %s" > vfio_save_setup(const char *name) " (%s)" > vfio_save_cleanup(const char *name) " (%s)" > +vfio_save_buffer(const char *name, uint64_t data_offset, uint64_t > +data_size, uint64_t pending) " (%s) Offset 0x%"PRIx64" size 0x%"PRIx64" > +pending 0x%"PRIx64 vfio_update_pending(const char *name, uint64_t pending) " (%s) pending 0x%"PRIx64 vfio_save_device_config_state(const char *name) " (%s)" > +vfio_save_pending(const char *name, uint64_t precopy, uint64_t > +postcopy, uint64_t compatible) " (%s) precopy 0x%"PRIx64" postcopy 0x%"PRIx64" compatible 0x%"PRIx64 vfio_save_iterate(const char *name, int data_size) " (%s) data_size %d" > 
+vfio_save_complete_precopy(const char *name) " (%s)" > diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h index 49c7c7a0e29a..471e444a364c 100644 > --- a/include/hw/vfio/vfio-common.h > +++ b/include/hw/vfio/vfio-common.h > @@ -60,6 +60,7 @@ typedef struct VFIORegion { > > typedef struct VFIOMigration { > VFIORegion region; > + uint64_t pending_bytes; > } VFIOMigration; > > typedef struct VFIOAddressSpace { > -- > 2.7.0 >
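Taken together with Zhi's fix quoted above, Kirti's two remarks suggest that vfio_save_iterate() ends up roughly like the sketch below: the early exit stays stream-safe, and pending_bytes is cleared so the savevm/snapshot flow, which never calls .save_live_pending, re-reads it on every pass. This is a reading of the discussion, not the final committed code.

    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);

    if (migration->pending_bytes == 0) {
        ret = vfio_update_pending(vbasedev);
        if (ret) {
            return ret;
        }

        if (migration->pending_bytes == 0) {
            /* keep the stream well-formed even when there is nothing to send */
            qemu_put_be64(f, 0);
            qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
            /* indicates data finished, goto complete phase */
            return 1;
        }
    }

    ret = vfio_save_buffer(f, vbasedev, &data_size);
    /* ... existing vfio_save_buffer() error handling and END_OF_STATE write ... */

    /*
     * Reset pending_bytes here as well: .save_live_pending is not called
     * during savevm/snapshot, so the next iteration must re-read it.
     */
    migration->pending_bytes = 0;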