
[v4,08/13] vfio: Add save state functions to SaveVMHandlers

Message ID 1561041461-22326-9-git-send-email-kwankhede@nvidia.com (mailing list archive)
State New, archived
Series Add migration support for VFIO device

Commit Message

Kirti Wankhede June 20, 2019, 2:37 p.m. UTC
Added .save_live_pending, .save_live_iterate and .save_live_complete_precopy
functions. These functions handle the pre-copy and stop-and-copy phases.

In _SAVING|_RUNNING device state or pre-copy phase (one iteration is
sketched below):
- read pending_bytes
- read data_offset - indicates to the kernel driver that it should write
  data to the staging buffer, which is mmapped.
- read data_size - amount of data in bytes written by the vendor driver in
  the migration region.
- if the data section is trapped, pread() data_size bytes from data_offset.
- if the data section is mmapped, read the mmapped buffer of size data_size.
- Write the data packet to the file stream as below:
  {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data,
   VFIO_MIG_FLAG_END_OF_STATE}
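
For illustration, a minimal sketch of one pre-copy iteration against the
migration region, assuming a trapped (non-mmapped) data section and the
vfio_device_migration_info layout used by this series (helper name is
hypothetical; error handling elided; pending_bytes is refreshed separately
via .save_live_pending):

    static int save_one_iteration(QEMUFile *f, int fd, uint64_t region_base)
    {
        uint64_t data_offset, data_size;
        void *buf;

        /* Reading data_offset asks the vendor driver to stage the data. */
        pread(fd, &data_offset, sizeof(data_offset), region_base +
              offsetof(struct vfio_device_migration_info, data_offset));

        /* data_size is the number of bytes now available to read. */
        pread(fd, &data_size, sizeof(data_size), region_base +
              offsetof(struct vfio_device_migration_info, data_size));

        /* Trapped data section, so pread() the staged bytes. */
        buf = g_malloc(data_size);
        pread(fd, buf, data_size, region_base + data_offset);

        /* Wire format: {DEV_DATA_STATE, data_size, data, END_OF_STATE}. */
        qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
        qemu_put_be64(f, data_size);
        qemu_put_buffer(f, buf, data_size);
        qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);

        g_free(buf);
        return qemu_file_get_error(f);
    }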

In _SAVING device state or stop-and-copy phase (the loop is sketched below):
a. read config space of the device and save it to the migration file
   stream. This doesn't need to be from the vendor driver. Any other
   special config state from the driver can be saved as data in the
   following iterations.
b. read pending_bytes - indicates to the kernel driver that it should write
   data to the staging buffer, which is mmapped.
c. read data_size - amount of data in bytes written by the vendor driver
   in the migration region.
d. if the data section is trapped, pread() data_size bytes from data_offset.
e. if the data section is mmapped, read the mmapped buffer of size data_size.
f. Write the data packet as below:
   {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data}
g. iterate through steps b to f until (pending_bytes > 0)
h. Write {VFIO_MIG_FLAG_END_OF_STATE}
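
A matching sketch of the stop-and-copy loop (step letters refer to the
list above; save_device_config() is a hypothetical stand-in for the
config-space write in step a; trapped data section assumed, error
handling elided):

    static int save_complete(QEMUFile *f, int fd, uint64_t region_base)
    {
        uint64_t pending, data_offset, data_size;
        void *buf;

        save_device_config(f);                              /* step a */

        for (;;) {
            pread(fd, &pending, sizeof(pending), region_base +
                  offsetof(struct vfio_device_migration_info,
                           pending_bytes));                 /* step b */
            if (!pending) {
                break;                                      /* step g */
            }

            pread(fd, &data_offset, sizeof(data_offset), region_base +
                  offsetof(struct vfio_device_migration_info, data_offset));
            pread(fd, &data_size, sizeof(data_size), region_base +
                  offsetof(struct vfio_device_migration_info,
                           data_size));                     /* step c */

            buf = g_malloc(data_size);
            pread(fd, buf, data_size, region_base + data_offset); /* step d */

            qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
            qemu_put_be64(f, data_size);
            qemu_put_buffer(f, buf, data_size);             /* step f */
            g_free(buf);
        }

        qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);       /* step h */
        return qemu_file_get_error(f);
    }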

.save_live_iterate runs outside the iothread lock in the migration case,
which could race with an asynchronous call to get the dirty page list,
causing data corruption in the mapped migration region. A mutex is added
here to serialize migration buffer read operations.
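
Concretely, the iterate path wraps the buffer read with the per-device
migration lock (mirroring the hunk in the patch below):

    qemu_mutex_lock(&migration->lock);    /* serializes accesses to the
                                             migration region with the
                                             dirty-page-list path */
    ret = vfio_save_buffer(f, vbasedev);
    qemu_mutex_unlock(&migration->lock);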

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
---
 hw/vfio/migration.c | 212 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 212 insertions(+)

Comments

Alex Williamson June 20, 2019, 7:25 p.m. UTC | #1
On Thu, 20 Jun 2019 20:07:36 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> Added .save_live_pending, .save_live_iterate and .save_live_complete_precopy
> functions. These functions handles pre-copy and stop-and-copy phase.
> 
> In _SAVING|_RUNNING device state or pre-copy phase:
> - read pending_bytes
> - read data_offset - indicates kernel driver to write data to staging
>   buffer which is mmapped.

Why is data_offset the trigger rather than data_size?  It seems that
data_offset can't really change dynamically since it might be mmap'd,
so it seems unnatural to bother re-reading it.

> - read data_size - amount of data in bytes written by vendor driver in migration
>   region.
> - if data section is trapped, pread() number of bytes in data_size, from
>   data_offset.
> - if data section is mmaped, read mmaped buffer of size data_size.
> - Write data packet to file stream as below:
> {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data,
> VFIO_MIG_FLAG_END_OF_STATE }
> 
> In _SAVING device state or stop-and-copy phase
> a. read config space of device and save to migration file stream. This
>    doesn't need to be from vendor driver. Any other special config state
>    from driver can be saved as data in following iteration.
> b. read pending_bytes - indicates kernel driver to write data to staging
>    buffer which is mmapped.

Is it pending_bytes or data_offset that triggers the write out of
data?  Why pending_bytes vs data_size?  I was interpreting
pending_bytes as the total data size while data_size is the size
available to read now, so assumed data_size would be more closely
aligned to making the data available.

> c. read data_size - amount of data in bytes written by vendor driver in
>    migration region.
> d. if data section is trapped, pread() from data_offset of size data_size.
> e. if data section is mmaped, read mmaped buffer of size data_size.

Should this read as "pread() from data_offset of data_size, or
optionally if mmap is supported on the data area, read data_size from
start of mapped buffer"?  IOW, pread should always work.  Same in
previous section.

> f. Write data packet as below:
>    {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data}
> g. iterate through steps b to f until (pending_bytes > 0)

s/until/while/

> h. Write {VFIO_MIG_FLAG_END_OF_STATE}
> 
> .save_live_iterate runs outside the iothread lock in the migration case, which
> could race with asynchronous call to get dirty page list causing data corruption
> in mapped migration region. Mutex added here to serial migration buffer read
> operation.

Would we be ahead to use different offsets within the region for device
data vs dirty bitmap to avoid this?
 
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> ---
>  hw/vfio/migration.c | 212 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 212 insertions(+)
> 
> diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
> index fe0887c27664..0a2f30872316 100644
> --- a/hw/vfio/migration.c
> +++ b/hw/vfio/migration.c
> @@ -107,6 +107,111 @@ static int vfio_migration_set_state(VFIODevice *vbasedev, uint32_t state)
>      return 0;
>  }
>  
> +static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev)
> +{
> +    VFIOMigration *migration = vbasedev->migration;
> +    VFIORegion *region = &migration->region.buffer;
> +    uint64_t data_offset = 0, data_size = 0;
> +    int ret;
> +
> +    ret = pread(vbasedev->fd, &data_offset, sizeof(data_offset),
> +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> +                                             data_offset));
> +    if (ret != sizeof(data_offset)) {
> +        error_report("Failed to get migration buffer data offset %d",
> +                     ret);
> +        return -EINVAL;
> +    }
> +
> +    ret = pread(vbasedev->fd, &data_size, sizeof(data_size),
> +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> +                                             data_size));
> +    if (ret != sizeof(data_size)) {
> +        error_report("Failed to get migration buffer data size %d",
> +                     ret);
> +        return -EINVAL;
> +    }
> +
> +    if (data_size > 0) {
> +        void *buf = NULL;
> +        bool buffer_mmaped = false;
> +
> +        if (region->mmaps) {
> +            int i;
> +
> +            for (i = 0; i < region->nr_mmaps; i++) {
> +                if ((data_offset >= region->mmaps[i].offset) &&
> +                    (data_offset < region->mmaps[i].offset +
> +                                   region->mmaps[i].size)) {
> +                    buf = region->mmaps[i].mmap + (data_offset -
> +                                                   region->mmaps[i].offset);

So you're expecting that data_offset is somewhere within the data
area.  Why doesn't the data always simply start at the beginning of the
data area?  ie. data_offset would coincide with the beginning of the
mmap'able area (if supported) and be static.  Does this enable some
functionality in the vendor driver?  Does resume data need to be
written from the same offset where it's read?

> +                    buffer_mmaped = true;
> +                    break;
> +                }
> +            }
> +        }
> +
> +        if (!buffer_mmaped) {
> +            buf = g_malloc0(data_size);
> +            ret = pread(vbasedev->fd, buf, data_size,
> +                        region->fd_offset + data_offset);
> +            if (ret != data_size) {
> +                error_report("Failed to get migration data %d", ret);
> +                g_free(buf);
> +                return -EINVAL;
> +            }
> +        }
> +
> +        qemu_put_be64(f, data_size);
> +        qemu_put_buffer(f, buf, data_size);
> +
> +        if (!buffer_mmaped) {
> +            g_free(buf);
> +        }
> +        migration->pending_bytes -= data_size;
> +    } else {
> +        qemu_put_be64(f, data_size);
> +    }
> +
> +    ret = qemu_file_get_error(f);
> +
> +    return data_size;
> +}
> +
> +static int vfio_update_pending(VFIODevice *vbasedev)
> +{
> +    VFIOMigration *migration = vbasedev->migration;
> +    VFIORegion *region = &migration->region.buffer;
> +    uint64_t pending_bytes = 0;
> +    int ret;
> +
> +    ret = pread(vbasedev->fd, &pending_bytes, sizeof(pending_bytes),
> +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> +                                             pending_bytes));

Did this trigger the vendor driver to write out to the data area when
we don't need it to?

> +    if ((ret < 0) || (ret != sizeof(pending_bytes))) {
> +        error_report("Failed to get pending bytes %d", ret);
> +        migration->pending_bytes = 0;
> +        return (ret < 0) ? ret : -EINVAL;
> +    }
> +
> +    migration->pending_bytes = pending_bytes;
> +    return 0;
> +}
> +
> +static int vfio_save_device_config_state(QEMUFile *f, void *opaque)
> +{
> +    VFIODevice *vbasedev = opaque;
> +
> +    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_CONFIG_STATE);
> +
> +    if (vbasedev->type == VFIO_DEVICE_TYPE_PCI) {
> +        vfio_pci_save_config(vbasedev, f);
> +    }
> +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> +
> +    return qemu_file_get_error(f);
> +}
> +
>  /* ---------------------------------------------------------------------- */
>  
>  static int vfio_save_setup(QEMUFile *f, void *opaque)
> @@ -163,9 +268,116 @@ static void vfio_save_cleanup(void *opaque)
>      }
>  }
>  
> +static void vfio_save_pending(QEMUFile *f, void *opaque,
> +                              uint64_t threshold_size,
> +                              uint64_t *res_precopy_only,
> +                              uint64_t *res_compatible,
> +                              uint64_t *res_postcopy_only)
> +{
> +    VFIODevice *vbasedev = opaque;
> +    VFIOMigration *migration = vbasedev->migration;
> +    int ret;
> +
> +    ret = vfio_update_pending(vbasedev);
> +    if (ret) {
> +        return;
> +    }
> +
> +    if (vbasedev->device_state & VFIO_DEVICE_STATE_RUNNING) {
> +        *res_precopy_only += migration->pending_bytes;
> +    } else {
> +        *res_postcopy_only += migration->pending_bytes;
> +    }
> +    *res_compatible += 0;
> +}
> +
> +static int vfio_save_iterate(QEMUFile *f, void *opaque)
> +{
> +    VFIODevice *vbasedev = opaque;
> +    VFIOMigration *migration = vbasedev->migration;
> +    int ret;
> +
> +    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
> +
> +    qemu_mutex_lock(&migration->lock);
> +    ret = vfio_save_buffer(f, vbasedev);
> +    qemu_mutex_unlock(&migration->lock);
> +
> +    if (ret < 0) {
> +        error_report("vfio_save_buffer failed %s",
> +                     strerror(errno));
> +        return ret;
> +    }
> +
> +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> +
> +    ret = qemu_file_get_error(f);
> +    if (ret) {
> +        return ret;
> +    }
> +
> +    return ret;
> +}
> +
> +static int vfio_save_complete_precopy(QEMUFile *f, void *opaque)
> +{
> +    VFIODevice *vbasedev = opaque;
> +    VFIOMigration *migration = vbasedev->migration;
> +    int ret;
> +
> +    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_SAVING);
> +    if (ret) {
> +        error_report("Failed to set state STOP and SAVING");
> +        return ret;
> +    }
> +
> +    ret = vfio_save_device_config_state(f, opaque);
> +    if (ret) {
> +        return ret;
> +    }
> +
> +    ret = vfio_update_pending(vbasedev);
> +    if (ret) {
> +        return ret;
> +    }
> +
> +    while (migration->pending_bytes > 0) {
> +        qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
> +        ret = vfio_save_buffer(f, vbasedev);
> +        if (ret < 0) {
> +            error_report("Failed to save buffer");
> +            return ret;
> +        } else if (ret == 0) {
> +            break;
> +        }
> +
> +        ret = vfio_update_pending(vbasedev);
> +        if (ret) {
> +            return ret;
> +        }
> +    }
> +
> +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> +
> +    ret = qemu_file_get_error(f);
> +    if (ret) {
> +        return ret;
> +    }
> +
> +    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_STOPPED);
> +    if (ret) {
> +        error_report("Failed to set state STOPPED");
> +        return ret;
> +    }
> +    return ret;
> +}
> +
>  static SaveVMHandlers savevm_vfio_handlers = {
>      .save_setup = vfio_save_setup,
>      .save_cleanup = vfio_save_cleanup,
> +    .save_live_pending = vfio_save_pending,
> +    .save_live_iterate = vfio_save_iterate,
> +    .save_live_complete_precopy = vfio_save_complete_precopy,
>  };
>  
>  /* ---------------------------------------------------------------------- */
Yan Zhao June 21, 2019, 12:31 a.m. UTC | #2
On Thu, Jun 20, 2019 at 10:37:36PM +0800, Kirti Wankhede wrote:
> Added .save_live_pending, .save_live_iterate and .save_live_complete_precopy
> functions. These functions handles pre-copy and stop-and-copy phase.
> 
> In _SAVING|_RUNNING device state or pre-copy phase:
> - read pending_bytes
> - read data_offset - indicates kernel driver to write data to staging
>   buffer which is mmapped.
> - read data_size - amount of data in bytes written by vendor driver in migration
>   region.
> - if data section is trapped, pread() number of bytes in data_size, from
>   data_offset.
> - if data section is mmaped, read mmaped buffer of size data_size.
> - Write data packet to file stream as below:
> {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data,
> VFIO_MIG_FLAG_END_OF_STATE }
> 
> In _SAVING device state or stop-and-copy phase
> a. read config space of device and save to migration file stream. This
>    doesn't need to be from vendor driver. Any other special config state
>    from driver can be saved as data in following iteration.
> b. read pending_bytes - indicates kernel driver to write data to staging
>    buffer which is mmapped.
> c. read data_size - amount of data in bytes written by vendor driver in
>    migration region.
> d. if data section is trapped, pread() from data_offset of size data_size.
> e. if data section is mmaped, read mmaped buffer of size data_size.
> f. Write data packet as below:
>    {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data}
> g. iterate through steps b to f until (pending_bytes > 0)
> h. Write {VFIO_MIG_FLAG_END_OF_STATE}
> 
> .save_live_iterate runs outside the iothread lock in the migration case, which
> could race with asynchronous call to get dirty page list causing data corruption
> in mapped migration region. Mutex added here to serial migration buffer read
> operation.
> 
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> ---
>  hw/vfio/migration.c | 212 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 212 insertions(+)
> 
> diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
> index fe0887c27664..0a2f30872316 100644
> --- a/hw/vfio/migration.c
> +++ b/hw/vfio/migration.c
> @@ -107,6 +107,111 @@ static int vfio_migration_set_state(VFIODevice *vbasedev, uint32_t state)
>      return 0;
>  }
>  
> +static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev)
> +{
> +    VFIOMigration *migration = vbasedev->migration;
> +    VFIORegion *region = &migration->region.buffer;
> +    uint64_t data_offset = 0, data_size = 0;
> +    int ret;
> +
> +    ret = pread(vbasedev->fd, &data_offset, sizeof(data_offset),
> +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> +                                             data_offset));
> +    if (ret != sizeof(data_offset)) {
> +        error_report("Failed to get migration buffer data offset %d",
> +                     ret);
> +        return -EINVAL;
> +    }
> +
> +    ret = pread(vbasedev->fd, &data_size, sizeof(data_size),
> +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> +                                             data_size));
> +    if (ret != sizeof(data_size)) {
> +        error_report("Failed to get migration buffer data size %d",
> +                     ret);
> +        return -EINVAL;
> +    }
> +
How big is data_size?
If it is too big, it may take too much time and block others.

> +    if (data_size > 0) {
> +        void *buf = NULL;
> +        bool buffer_mmaped = false;
> +
> +        if (region->mmaps) {
> +            int i;
> +
> +            for (i = 0; i < region->nr_mmaps; i++) {
> +                if ((data_offset >= region->mmaps[i].offset) &&
> +                    (data_offset < region->mmaps[i].offset +
> +                                   region->mmaps[i].size)) {
> +                    buf = region->mmaps[i].mmap + (data_offset -
> +                                                   region->mmaps[i].offset);
> +                    buffer_mmaped = true;
> +                    break;
> +                }
> +            }
> +        }
> +
> +        if (!buffer_mmaped) {
> +            buf = g_malloc0(data_size);
> +            ret = pread(vbasedev->fd, buf, data_size,
> +                        region->fd_offset + data_offset);
> +            if (ret != data_size) {
> +                error_report("Failed to get migration data %d", ret);
> +                g_free(buf);
> +                return -EINVAL;
> +            }
> +        }
> +
> +        qemu_put_be64(f, data_size);
> +        qemu_put_buffer(f, buf, data_size);
> +
> +        if (!buffer_mmaped) {
> +            g_free(buf);
> +        }
> +        migration->pending_bytes -= data_size;
> +    } else {
> +        qemu_put_be64(f, data_size);
> +    }
> +
> +    ret = qemu_file_get_error(f);
> +
> +    return data_size;
> +}
> +
> +static int vfio_update_pending(VFIODevice *vbasedev)
> +{
> +    VFIOMigration *migration = vbasedev->migration;
> +    VFIORegion *region = &migration->region.buffer;
> +    uint64_t pending_bytes = 0;
> +    int ret;
> +
> +    ret = pread(vbasedev->fd, &pending_bytes, sizeof(pending_bytes),
> +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> +                                             pending_bytes));
> +    if ((ret < 0) || (ret != sizeof(pending_bytes))) {
> +        error_report("Failed to get pending bytes %d", ret);
> +        migration->pending_bytes = 0;
> +        return (ret < 0) ? ret : -EINVAL;
> +    }
> +
> +    migration->pending_bytes = pending_bytes;
> +    return 0;
> +}
> +
> +static int vfio_save_device_config_state(QEMUFile *f, void *opaque)
> +{
> +    VFIODevice *vbasedev = opaque;
> +
> +    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_CONFIG_STATE);
> +
> +    if (vbasedev->type == VFIO_DEVICE_TYPE_PCI) {
> +        vfio_pci_save_config(vbasedev, f);
> +    }
> +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> +
> +    return qemu_file_get_error(f);
> +}
> +
>  /* ---------------------------------------------------------------------- */
>  
>  static int vfio_save_setup(QEMUFile *f, void *opaque)
> @@ -163,9 +268,116 @@ static void vfio_save_cleanup(void *opaque)
>      }
>  }
>  
> +static void vfio_save_pending(QEMUFile *f, void *opaque,
> +                              uint64_t threshold_size,
> +                              uint64_t *res_precopy_only,
> +                              uint64_t *res_compatible,
> +                              uint64_t *res_postcopy_only)
> +{
> +    VFIODevice *vbasedev = opaque;
> +    VFIOMigration *migration = vbasedev->migration;
> +    int ret;
> +
> +    ret = vfio_update_pending(vbasedev);
> +    if (ret) {
> +        return;
> +    }
> +
> +    if (vbasedev->device_state & VFIO_DEVICE_STATE_RUNNING) {
> +        *res_precopy_only += migration->pending_bytes;
> +    } else {
> +        *res_postcopy_only += migration->pending_bytes;
> +    }
> +    *res_compatible += 0;
> +}
> +
> +static int vfio_save_iterate(QEMUFile *f, void *opaque)
> +{
> +    VFIODevice *vbasedev = opaque;
> +    VFIOMigration *migration = vbasedev->migration;
> +    int ret;
> +
> +    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
> +
> +    qemu_mutex_lock(&migration->lock);
> +    ret = vfio_save_buffer(f, vbasedev);
> +    qemu_mutex_unlock(&migration->lock);
> +
> +    if (ret < 0) {
> +        error_report("vfio_save_buffer failed %s",
> +                     strerror(errno));
> +        return ret;
> +    }
> +
> +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> +
> +    ret = qemu_file_get_error(f);
> +    if (ret) {
> +        return ret;
> +    }
> +
> +    return ret;
> +}
> +
> +static int vfio_save_complete_precopy(QEMUFile *f, void *opaque)
> +{
> +    VFIODevice *vbasedev = opaque;
> +    VFIOMigration *migration = vbasedev->migration;
> +    int ret;
> +
> +    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_SAVING);
> +    if (ret) {
> +        error_report("Failed to set state STOP and SAVING");
> +        return ret;
> +    }
> +
> +    ret = vfio_save_device_config_state(f, opaque);
> +    if (ret) {
> +        return ret;
> +    }
> +
> +    ret = vfio_update_pending(vbasedev);
> +    if (ret) {
> +        return ret;
> +    }
> +
> +    while (migration->pending_bytes > 0) {
> +        qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
> +        ret = vfio_save_buffer(f, vbasedev);
> +        if (ret < 0) {
> +            error_report("Failed to save buffer");
> +            return ret;
> +        } else if (ret == 0) {
> +            break;
> +        }
> +
> +        ret = vfio_update_pending(vbasedev);
> +        if (ret) {
> +            return ret;
> +        }
> +    }
> +
> +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> +
> +    ret = qemu_file_get_error(f);
> +    if (ret) {
> +        return ret;
> +    }
> +
> +    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_STOPPED);
> +    if (ret) {
> +        error_report("Failed to set state STOPPED");
> +        return ret;
> +    }
> +    return ret;
> +}
> +
>  static SaveVMHandlers savevm_vfio_handlers = {
>      .save_setup = vfio_save_setup,
>      .save_cleanup = vfio_save_cleanup,
> +    .save_live_pending = vfio_save_pending,
> +    .save_live_iterate = vfio_save_iterate,
> +    .save_live_complete_precopy = vfio_save_complete_precopy,
>  };
>  
>  /* ---------------------------------------------------------------------- */
> -- 
> 2.7.0
>
Kirti Wankhede June 21, 2019, 6:38 a.m. UTC | #3
On 6/21/2019 12:55 AM, Alex Williamson wrote:
> On Thu, 20 Jun 2019 20:07:36 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
> 
>> Added .save_live_pending, .save_live_iterate and .save_live_complete_precopy
>> functions. These functions handles pre-copy and stop-and-copy phase.
>>
>> In _SAVING|_RUNNING device state or pre-copy phase:
>> - read pending_bytes
>> - read data_offset - indicates kernel driver to write data to staging
>>   buffer which is mmapped.
> 
> Why is data_offset the trigger rather than data_size?  It seems that
> data_offset can't really change dynamically since it might be mmap'd,
> so it seems unnatural to bother re-reading it.
> 

The vendor driver can change data_offset; it can have different
data_offsets for device data and the dirty pages bitmap.
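
For instance, a hypothetical layout (offsets are illustrative only) could
be:

    region start           vfio_device_migration_info (always trapped)
    data_offset = 0x1000   staging buffer reported when reading device data
    data_offset = 0x101000 staging buffer reported when reading the dirty
                           pages bitmap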

>> - read data_size - amount of data in bytes written by vendor driver in migration
>>   region.
>> - if data section is trapped, pread() number of bytes in data_size, from
>>   data_offset.
>> - if data section is mmaped, read mmaped buffer of size data_size.
>> - Write data packet to file stream as below:
>> {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data,
>> VFIO_MIG_FLAG_END_OF_STATE }
>>
>> In _SAVING device state or stop-and-copy phase
>> a. read config space of device and save to migration file stream. This
>>    doesn't need to be from vendor driver. Any other special config state
>>    from driver can be saved as data in following iteration.
>> b. read pending_bytes - indicates kernel driver to write data to staging
>>    buffer which is mmapped.
> 
> Is it pending_bytes or data_offset that triggers the write out of
> data?  Why pending_bytes vs data_size?  I was interpreting
> pending_bytes as the total data size while data_size is the size
> available to read now, so assumed data_size would be more closely
> aligned to making the data available.
> 

Sorry, that's my mistake while editing; it's "read data_offset", as in the
above case.

>> c. read data_size - amount of data in bytes written by vendor driver in
>>    migration region.
>> d. if data section is trapped, pread() from data_offset of size data_size.
>> e. if data section is mmaped, read mmaped buffer of size data_size.
> 
> Should this read as "pread() from data_offset of data_size, or
> optionally if mmap is supported on the data area, read data_size from
> start of mapped buffer"?  IOW, pread should always work.  Same in
> previous section.
> 

ok. I'll update.

>> f. Write data packet as below:
>>    {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data}
>> g. iterate through steps b to f until (pending_bytes > 0)
> 
> s/until/while/

Ok.

> 
>> h. Write {VFIO_MIG_FLAG_END_OF_STATE}
>>
>> .save_live_iterate runs outside the iothread lock in the migration case, which
>> could race with asynchronous call to get dirty page list causing data corruption
>> in mapped migration region. Mutex added here to serial migration buffer read
>> operation.
> 
> Would we be ahead to use different offsets within the region for device
> data vs dirty bitmap to avoid this?
>

A lock will still be required to serialize the read/write operations on
the vfio_device_migration_info structure in the region.


>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
>> Reviewed-by: Neo Jia <cjia@nvidia.com>
>> ---
>>  hw/vfio/migration.c | 212 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 212 insertions(+)
>>
>> diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
>> index fe0887c27664..0a2f30872316 100644
>> --- a/hw/vfio/migration.c
>> +++ b/hw/vfio/migration.c
>> @@ -107,6 +107,111 @@ static int vfio_migration_set_state(VFIODevice *vbasedev, uint32_t state)
>>      return 0;
>>  }
>>  
>> +static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev)
>> +{
>> +    VFIOMigration *migration = vbasedev->migration;
>> +    VFIORegion *region = &migration->region.buffer;
>> +    uint64_t data_offset = 0, data_size = 0;
>> +    int ret;
>> +
>> +    ret = pread(vbasedev->fd, &data_offset, sizeof(data_offset),
>> +                region->fd_offset + offsetof(struct vfio_device_migration_info,
>> +                                             data_offset));
>> +    if (ret != sizeof(data_offset)) {
>> +        error_report("Failed to get migration buffer data offset %d",
>> +                     ret);
>> +        return -EINVAL;
>> +    }
>> +
>> +    ret = pread(vbasedev->fd, &data_size, sizeof(data_size),
>> +                region->fd_offset + offsetof(struct vfio_device_migration_info,
>> +                                             data_size));
>> +    if (ret != sizeof(data_size)) {
>> +        error_report("Failed to get migration buffer data size %d",
>> +                     ret);
>> +        return -EINVAL;
>> +    }
>> +
>> +    if (data_size > 0) {
>> +        void *buf = NULL;
>> +        bool buffer_mmaped = false;
>> +
>> +        if (region->mmaps) {
>> +            int i;
>> +
>> +            for (i = 0; i < region->nr_mmaps; i++) {
>> +                if ((data_offset >= region->mmaps[i].offset) &&
>> +                    (data_offset < region->mmaps[i].offset +
>> +                                   region->mmaps[i].size)) {
>> +                    buf = region->mmaps[i].mmap + (data_offset -
>> +                                                   region->mmaps[i].offset);
> 
> So you're expecting that data_offset is somewhere within the data
> area.  Why doesn't the data always simply start at the beginning of the
> data area?  ie. data_offset would coincide with the beginning of the
> mmap'able area (if supported) and be static.  Does this enable some
> functionality in the vendor driver?

Do you want to enforce that on the vendor driver?
From the feedback on the previous version I thought the vendor driver
should define data_offset within the region:
"I'd suggest that the vendor driver expose a read-only
data_offset that matches a sparse mmap capability entry should the
driver support mmap.  The use should always read or write data from the
vendor defined data_offset"

This also adds flexibility to the vendor driver, such that it can define
different data_offsets for the device data and the dirty page bitmap
within the same mmapped region.

>  Does resume data need to be
> written from the same offset where it's read?

No, resume data should be written at the data_offset that the vendor
driver provides during resume.
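
A sketch of that resume flow, under the same assumptions as the save side
(the actual resume path is a later patch in this series):

    /* vendor driver reports where to stage the incoming data */
    pread(fd, &data_offset, sizeof(data_offset), region_base +
          offsetof(struct vfio_device_migration_info, data_offset));

    /* write the received block at that offset ... */
    pwrite(fd, buf, data_size, region_base + data_offset);

    /* ... then write data_size to tell the vendor driver to consume it */
    pwrite(fd, &data_size, sizeof(data_size), region_base +
           offsetof(struct vfio_device_migration_info, data_size));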

> 
>> +                    buffer_mmaped = true;
>> +                    break;
>> +                }
>> +            }
>> +        }
>> +
>> +        if (!buffer_mmaped) {
>> +            buf = g_malloc0(data_size);
>> +            ret = pread(vbasedev->fd, buf, data_size,
>> +                        region->fd_offset + data_offset);
>> +            if (ret != data_size) {
>> +                error_report("Failed to get migration data %d", ret);
>> +                g_free(buf);
>> +                return -EINVAL;
>> +            }
>> +        }
>> +
>> +        qemu_put_be64(f, data_size);
>> +        qemu_put_buffer(f, buf, data_size);
>> +
>> +        if (!buffer_mmaped) {
>> +            g_free(buf);
>> +        }
>> +        migration->pending_bytes -= data_size;
>> +    } else {
>> +        qemu_put_be64(f, data_size);
>> +    }
>> +
>> +    ret = qemu_file_get_error(f);
>> +
>> +    return data_size;
>> +}
>> +
>> +static int vfio_update_pending(VFIODevice *vbasedev)
>> +{
>> +    VFIOMigration *migration = vbasedev->migration;
>> +    VFIORegion *region = &migration->region.buffer;
>> +    uint64_t pending_bytes = 0;
>> +    int ret;
>> +
>> +    ret = pread(vbasedev->fd, &pending_bytes, sizeof(pending_bytes),
>> +                region->fd_offset + offsetof(struct vfio_device_migration_info,
>> +                                             pending_bytes));
> 
> Did this trigger the vendor driver to write out to the data area when
> we don't need it to?
> 

No, as I mentioned above, I'll update the description.

Thanks,
Kirti

>> +    if ((ret < 0) || (ret != sizeof(pending_bytes))) {
>> +        error_report("Failed to get pending bytes %d", ret);
>> +        migration->pending_bytes = 0;
>> +        return (ret < 0) ? ret : -EINVAL;
>> +    }
>> +
>> +    migration->pending_bytes = pending_bytes;
>> +    return 0;
>> +}
>> +
>> +static int vfio_save_device_config_state(QEMUFile *f, void *opaque)
>> +{
>> +    VFIODevice *vbasedev = opaque;
>> +
>> +    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_CONFIG_STATE);
>> +
>> +    if (vbasedev->type == VFIO_DEVICE_TYPE_PCI) {
>> +        vfio_pci_save_config(vbasedev, f);
>> +    }
>> +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
>> +
>> +    return qemu_file_get_error(f);
>> +}
>> +
>>  /* ---------------------------------------------------------------------- */
>>  
>>  static int vfio_save_setup(QEMUFile *f, void *opaque)
>> @@ -163,9 +268,116 @@ static void vfio_save_cleanup(void *opaque)
>>      }
>>  }
>>  
>> +static void vfio_save_pending(QEMUFile *f, void *opaque,
>> +                              uint64_t threshold_size,
>> +                              uint64_t *res_precopy_only,
>> +                              uint64_t *res_compatible,
>> +                              uint64_t *res_postcopy_only)
>> +{
>> +    VFIODevice *vbasedev = opaque;
>> +    VFIOMigration *migration = vbasedev->migration;
>> +    int ret;
>> +
>> +    ret = vfio_update_pending(vbasedev);
>> +    if (ret) {
>> +        return;
>> +    }
>> +
>> +    if (vbasedev->device_state & VFIO_DEVICE_STATE_RUNNING) {
>> +        *res_precopy_only += migration->pending_bytes;
>> +    } else {
>> +        *res_postcopy_only += migration->pending_bytes;
>> +    }
>> +    *res_compatible += 0;
>> +}
>> +
>> +static int vfio_save_iterate(QEMUFile *f, void *opaque)
>> +{
>> +    VFIODevice *vbasedev = opaque;
>> +    VFIOMigration *migration = vbasedev->migration;
>> +    int ret;
>> +
>> +    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
>> +
>> +    qemu_mutex_lock(&migration->lock);
>> +    ret = vfio_save_buffer(f, vbasedev);
>> +    qemu_mutex_unlock(&migration->lock);
>> +
>> +    if (ret < 0) {
>> +        error_report("vfio_save_buffer failed %s",
>> +                     strerror(errno));
>> +        return ret;
>> +    }
>> +
>> +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
>> +
>> +    ret = qemu_file_get_error(f);
>> +    if (ret) {
>> +        return ret;
>> +    }
>> +
>> +    return ret;
>> +}
>> +
>> +static int vfio_save_complete_precopy(QEMUFile *f, void *opaque)
>> +{
>> +    VFIODevice *vbasedev = opaque;
>> +    VFIOMigration *migration = vbasedev->migration;
>> +    int ret;
>> +
>> +    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_SAVING);
>> +    if (ret) {
>> +        error_report("Failed to set state STOP and SAVING");
>> +        return ret;
>> +    }
>> +
>> +    ret = vfio_save_device_config_state(f, opaque);
>> +    if (ret) {
>> +        return ret;
>> +    }
>> +
>> +    ret = vfio_update_pending(vbasedev);
>> +    if (ret) {
>> +        return ret;
>> +    }
>> +
>> +    while (migration->pending_bytes > 0) {
>> +        qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
>> +        ret = vfio_save_buffer(f, vbasedev);
>> +        if (ret < 0) {
>> +            error_report("Failed to save buffer");
>> +            return ret;
>> +        } else if (ret == 0) {
>> +            break;
>> +        }
>> +
>> +        ret = vfio_update_pending(vbasedev);
>> +        if (ret) {
>> +            return ret;
>> +        }
>> +    }
>> +
>> +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
>> +
>> +    ret = qemu_file_get_error(f);
>> +    if (ret) {
>> +        return ret;
>> +    }
>> +
>> +    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_STOPPED);
>> +    if (ret) {
>> +        error_report("Failed to set state STOPPED");
>> +        return ret;
>> +    }
>> +    return ret;
>> +}
>> +
>>  static SaveVMHandlers savevm_vfio_handlers = {
>>      .save_setup = vfio_save_setup,
>>      .save_cleanup = vfio_save_cleanup,
>> +    .save_live_pending = vfio_save_pending,
>> +    .save_live_iterate = vfio_save_iterate,
>> +    .save_live_complete_precopy = vfio_save_complete_precopy,
>>  };
>>  
>>  /* ---------------------------------------------------------------------- */
>
Alex Williamson June 21, 2019, 3:16 p.m. UTC | #4
On Fri, 21 Jun 2019 12:08:26 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> On 6/21/2019 12:55 AM, Alex Williamson wrote:
> > On Thu, 20 Jun 2019 20:07:36 +0530
> > Kirti Wankhede <kwankhede@nvidia.com> wrote:
> >   
> >> Added .save_live_pending, .save_live_iterate and .save_live_complete_precopy
> >> functions. These functions handles pre-copy and stop-and-copy phase.
> >>
> >> In _SAVING|_RUNNING device state or pre-copy phase:
> >> - read pending_bytes
> >> - read data_offset - indicates kernel driver to write data to staging
> >>   buffer which is mmapped.  
> > 
> > Why is data_offset the trigger rather than data_size?  It seems that
> > data_offset can't really change dynamically since it might be mmap'd,
> > so it seems unnatural to bother re-reading it.
> >   
> 
> Vendor driver can change data_offset, he can have different data_offset
> for device data and dirty pages bitmap.
> 
> >> - read data_size - amount of data in bytes written by vendor driver in migration
> >>   region.
> >> - if data section is trapped, pread() number of bytes in data_size, from
> >>   data_offset.
> >> - if data section is mmaped, read mmaped buffer of size data_size.
> >> - Write data packet to file stream as below:
> >> {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data,
> >> VFIO_MIG_FLAG_END_OF_STATE }
> >>
> >> In _SAVING device state or stop-and-copy phase
> >> a. read config space of device and save to migration file stream. This
> >>    doesn't need to be from vendor driver. Any other special config state
> >>    from driver can be saved as data in following iteration.
> >> b. read pending_bytes - indicates kernel driver to write data to staging
> >>    buffer which is mmapped.  
> > 
> > Is it pending_bytes or data_offset that triggers the write out of
> > data?  Why pending_bytes vs data_size?  I was interpreting
> > pending_bytes as the total data size while data_size is the size
> > available to read now, so assumed data_size would be more closely
> > aligned to making the data available.
> >   
> 
> Sorry, that's my mistake while editing, its read data_offset as in above
> case.
> 
> >> c. read data_size - amount of data in bytes written by vendor driver in
> >>    migration region.
> >> d. if data section is trapped, pread() from data_offset of size data_size.
> >> e. if data section is mmaped, read mmaped buffer of size data_size.  
> > 
> > Should this read as "pread() from data_offset of data_size, or
> > optionally if mmap is supported on the data area, read data_size from
> > start of mapped buffer"?  IOW, pread should always work.  Same in
> > previous section.
> >   
> 
> ok. I'll update.
> 
> >> f. Write data packet as below:
> >>    {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data}
> >> g. iterate through steps b to f until (pending_bytes > 0)  
> > 
> > s/until/while/  
> 
> Ok.
> 
> >   
> >> h. Write {VFIO_MIG_FLAG_END_OF_STATE}
> >>
> >> .save_live_iterate runs outside the iothread lock in the migration case, which
> >> could race with asynchronous call to get dirty page list causing data corruption
> >> in mapped migration region. Mutex added here to serial migration buffer read
> >> operation.  
> > 
> > Would we be ahead to use different offsets within the region for device
> > data vs dirty bitmap to avoid this?
> >  
> 
> Lock will still be required to serialize the read/write operations on
> vfio_device_migration_info structure in the region.
> 
> 
> >> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> >> Reviewed-by: Neo Jia <cjia@nvidia.com>
> >> ---
> >>  hw/vfio/migration.c | 212 ++++++++++++++++++++++++++++++++++++++++++++++++++++
> >>  1 file changed, 212 insertions(+)
> >>
> >> diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
> >> index fe0887c27664..0a2f30872316 100644
> >> --- a/hw/vfio/migration.c
> >> +++ b/hw/vfio/migration.c
> >> @@ -107,6 +107,111 @@ static int vfio_migration_set_state(VFIODevice *vbasedev, uint32_t state)
> >>      return 0;
> >>  }
> >>  
> >> +static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev)
> >> +{
> >> +    VFIOMigration *migration = vbasedev->migration;
> >> +    VFIORegion *region = &migration->region.buffer;
> >> +    uint64_t data_offset = 0, data_size = 0;
> >> +    int ret;
> >> +
> >> +    ret = pread(vbasedev->fd, &data_offset, sizeof(data_offset),
> >> +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> >> +                                             data_offset));
> >> +    if (ret != sizeof(data_offset)) {
> >> +        error_report("Failed to get migration buffer data offset %d",
> >> +                     ret);
> >> +        return -EINVAL;
> >> +    }
> >> +
> >> +    ret = pread(vbasedev->fd, &data_size, sizeof(data_size),
> >> +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> >> +                                             data_size));
> >> +    if (ret != sizeof(data_size)) {
> >> +        error_report("Failed to get migration buffer data size %d",
> >> +                     ret);
> >> +        return -EINVAL;
> >> +    }
> >> +
> >> +    if (data_size > 0) {
> >> +        void *buf = NULL;
> >> +        bool buffer_mmaped = false;
> >> +
> >> +        if (region->mmaps) {
> >> +            int i;
> >> +
> >> +            for (i = 0; i < region->nr_mmaps; i++) {
> >> +                if ((data_offset >= region->mmaps[i].offset) &&
> >> +                    (data_offset < region->mmaps[i].offset +
> >> +                                   region->mmaps[i].size)) {
> >> +                    buf = region->mmaps[i].mmap + (data_offset -
> >> +                                                   region->mmaps[i].offset);  
> > 
> > So you're expecting that data_offset is somewhere within the data
> > area.  Why doesn't the data always simply start at the beginning of the
> > data area?  ie. data_offset would coincide with the beginning of the
> > mmap'able area (if supported) and be static.  Does this enable some
> > functionality in the vendor driver?  
> 
> Do you want to enforce that to vendor driver?
> From the feedback on previous version I thought vendor driver should
> define data_offset within the region
> "I'd suggest that the vendor driver expose a read-only
> data_offset that matches a sparse mmap capability entry should the
> driver support mmap.  The use should always read or write data from the
> vendor defined data_offset"
> 
> This also adds flexibility to vendor driver such that vendor driver can
> define different data_offset for device data and dirty page bitmap
> within same mmaped region.

I agree, it adds flexibility; the protocol was not evident to me until
I got here, though.

> >  Does resume data need to be
> > written from the same offset where it's read?  
> 
> No, resume data should be written from the data_offset that vendor
> driver provided during resume.

s/resume/save/?

Or is this saying that on resume the vendor driver is requesting a
specific block of data via data_offset?  I think resume is going to be
directed by the user, writing in the same order they received the
data.  Thanks,

Alex

> >> +                    buffer_mmaped = true;
> >> +                    break;
> >> +                }
> >> +            }
> >> +        }
> >> +
> >> +        if (!buffer_mmaped) {
> >> +            buf = g_malloc0(data_size);
> >> +            ret = pread(vbasedev->fd, buf, data_size,
> >> +                        region->fd_offset + data_offset);
> >> +            if (ret != data_size) {
> >> +                error_report("Failed to get migration data %d", ret);
> >> +                g_free(buf);
> >> +                return -EINVAL;
> >> +            }
> >> +        }
> >> +
> >> +        qemu_put_be64(f, data_size);
> >> +        qemu_put_buffer(f, buf, data_size);
> >> +
> >> +        if (!buffer_mmaped) {
> >> +            g_free(buf);
> >> +        }
> >> +        migration->pending_bytes -= data_size;
> >> +    } else {
> >> +        qemu_put_be64(f, data_size);
> >> +    }
> >> +
> >> +    ret = qemu_file_get_error(f);
> >> +
> >> +    return data_size;
> >> +}
> >> +
> >> +static int vfio_update_pending(VFIODevice *vbasedev)
> >> +{
> >> +    VFIOMigration *migration = vbasedev->migration;
> >> +    VFIORegion *region = &migration->region.buffer;
> >> +    uint64_t pending_bytes = 0;
> >> +    int ret;
> >> +
> >> +    ret = pread(vbasedev->fd, &pending_bytes, sizeof(pending_bytes),
> >> +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> >> +                                             pending_bytes));  
> > 
> > Did this trigger the vendor driver to write out to the data area when
> > we don't need it to?
> >   
> 
> No, as I mentioned above, I'll update the description.
> 
> Thanks,
> Kirti
> 
> >> +    if ((ret < 0) || (ret != sizeof(pending_bytes))) {
> >> +        error_report("Failed to get pending bytes %d", ret);
> >> +        migration->pending_bytes = 0;
> >> +        return (ret < 0) ? ret : -EINVAL;
> >> +    }
> >> +
> >> +    migration->pending_bytes = pending_bytes;
> >> +    return 0;
> >> +}
> >> +
> >> +static int vfio_save_device_config_state(QEMUFile *f, void *opaque)
> >> +{
> >> +    VFIODevice *vbasedev = opaque;
> >> +
> >> +    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_CONFIG_STATE);
> >> +
> >> +    if (vbasedev->type == VFIO_DEVICE_TYPE_PCI) {
> >> +        vfio_pci_save_config(vbasedev, f);
> >> +    }
> >> +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> >> +
> >> +    return qemu_file_get_error(f);
> >> +}
> >> +
> >>  /* ---------------------------------------------------------------------- */
> >>  
> >>  static int vfio_save_setup(QEMUFile *f, void *opaque)
> >> @@ -163,9 +268,116 @@ static void vfio_save_cleanup(void *opaque)
> >>      }
> >>  }
> >>  
> >> +static void vfio_save_pending(QEMUFile *f, void *opaque,
> >> +                              uint64_t threshold_size,
> >> +                              uint64_t *res_precopy_only,
> >> +                              uint64_t *res_compatible,
> >> +                              uint64_t *res_postcopy_only)
> >> +{
> >> +    VFIODevice *vbasedev = opaque;
> >> +    VFIOMigration *migration = vbasedev->migration;
> >> +    int ret;
> >> +
> >> +    ret = vfio_update_pending(vbasedev);
> >> +    if (ret) {
> >> +        return;
> >> +    }
> >> +
> >> +    if (vbasedev->device_state & VFIO_DEVICE_STATE_RUNNING) {
> >> +        *res_precopy_only += migration->pending_bytes;
> >> +    } else {
> >> +        *res_postcopy_only += migration->pending_bytes;
> >> +    }
> >> +    *res_compatible += 0;
> >> +}
> >> +
> >> +static int vfio_save_iterate(QEMUFile *f, void *opaque)
> >> +{
> >> +    VFIODevice *vbasedev = opaque;
> >> +    VFIOMigration *migration = vbasedev->migration;
> >> +    int ret;
> >> +
> >> +    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
> >> +
> >> +    qemu_mutex_lock(&migration->lock);
> >> +    ret = vfio_save_buffer(f, vbasedev);
> >> +    qemu_mutex_unlock(&migration->lock);
> >> +
> >> +    if (ret < 0) {
> >> +        error_report("vfio_save_buffer failed %s",
> >> +                     strerror(errno));
> >> +        return ret;
> >> +    }
> >> +
> >> +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> >> +
> >> +    ret = qemu_file_get_error(f);
> >> +    if (ret) {
> >> +        return ret;
> >> +    }
> >> +
> >> +    return ret;
> >> +}
> >> +
> >> +static int vfio_save_complete_precopy(QEMUFile *f, void *opaque)
> >> +{
> >> +    VFIODevice *vbasedev = opaque;
> >> +    VFIOMigration *migration = vbasedev->migration;
> >> +    int ret;
> >> +
> >> +    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_SAVING);
> >> +    if (ret) {
> >> +        error_report("Failed to set state STOP and SAVING");
> >> +        return ret;
> >> +    }
> >> +
> >> +    ret = vfio_save_device_config_state(f, opaque);
> >> +    if (ret) {
> >> +        return ret;
> >> +    }
> >> +
> >> +    ret = vfio_update_pending(vbasedev);
> >> +    if (ret) {
> >> +        return ret;
> >> +    }
> >> +
> >> +    while (migration->pending_bytes > 0) {
> >> +        qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
> >> +        ret = vfio_save_buffer(f, vbasedev);
> >> +        if (ret < 0) {
> >> +            error_report("Failed to save buffer");
> >> +            return ret;
> >> +        } else if (ret == 0) {
> >> +            break;
> >> +        }
> >> +
> >> +        ret = vfio_update_pending(vbasedev);
> >> +        if (ret) {
> >> +            return ret;
> >> +        }
> >> +    }
> >> +
> >> +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> >> +
> >> +    ret = qemu_file_get_error(f);
> >> +    if (ret) {
> >> +        return ret;
> >> +    }
> >> +
> >> +    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_STOPPED);
> >> +    if (ret) {
> >> +        error_report("Failed to set state STOPPED");
> >> +        return ret;
> >> +    }
> >> +    return ret;
> >> +}
> >> +
> >>  static SaveVMHandlers savevm_vfio_handlers = {
> >>      .save_setup = vfio_save_setup,
> >>      .save_cleanup = vfio_save_cleanup,
> >> +    .save_live_pending = vfio_save_pending,
> >> +    .save_live_iterate = vfio_save_iterate,
> >> +    .save_live_complete_precopy = vfio_save_complete_precopy,
> >>  };
> >>  
> >>  /* ---------------------------------------------------------------------- */  
> >
Kirti Wankhede June 21, 2019, 7:38 p.m. UTC | #5
On 6/21/2019 8:46 PM, Alex Williamson wrote:
> On Fri, 21 Jun 2019 12:08:26 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
> 
>> On 6/21/2019 12:55 AM, Alex Williamson wrote:
>>> On Thu, 20 Jun 2019 20:07:36 +0530
>>> Kirti Wankhede <kwankhede@nvidia.com> wrote:
>>>   
>>>> Added .save_live_pending, .save_live_iterate and .save_live_complete_precopy
>>>> functions. These functions handles pre-copy and stop-and-copy phase.
>>>>
>>>> In _SAVING|_RUNNING device state or pre-copy phase:
>>>> - read pending_bytes
>>>> - read data_offset - indicates kernel driver to write data to staging
>>>>   buffer which is mmapped.  
>>>
>>> Why is data_offset the trigger rather than data_size?  It seems that
>>> data_offset can't really change dynamically since it might be mmap'd,
>>> so it seems unnatural to bother re-reading it.
>>>   
>>
>> Vendor driver can change data_offset, he can have different data_offset
>> for device data and dirty pages bitmap.
>>
>>>> - read data_size - amount of data in bytes written by vendor driver in migration
>>>>   region.
>>>> - if data section is trapped, pread() number of bytes in data_size, from
>>>>   data_offset.
>>>> - if data section is mmaped, read mmaped buffer of size data_size.
>>>> - Write data packet to file stream as below:
>>>> {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data,
>>>> VFIO_MIG_FLAG_END_OF_STATE }
>>>>
>>>> In _SAVING device state or stop-and-copy phase
>>>> a. read config space of device and save to migration file stream. This
>>>>    doesn't need to be from vendor driver. Any other special config state
>>>>    from driver can be saved as data in following iteration.
>>>> b. read pending_bytes - indicates kernel driver to write data to staging
>>>>    buffer which is mmapped.  
>>>
>>> Is it pending_bytes or data_offset that triggers the write out of
>>> data?  Why pending_bytes vs data_size?  I was interpreting
>>> pending_bytes as the total data size while data_size is the size
>>> available to read now, so assumed data_size would be more closely
>>> aligned to making the data available.
>>>   
>>
>> Sorry, that's my mistake while editing, its read data_offset as in above
>> case.
>>
>>>> c. read data_size - amount of data in bytes written by vendor driver in
>>>>    migration region.
>>>> d. if data section is trapped, pread() from data_offset of size data_size.
>>>> e. if data section is mmaped, read mmaped buffer of size data_size.  
>>>
>>> Should this read as "pread() from data_offset of data_size, or
>>> optionally if mmap is supported on the data area, read data_size from
>>> start of mapped buffer"?  IOW, pread should always work.  Same in
>>> previous section.
>>>   
>>
>> ok. I'll update.
>>
>>>> f. Write data packet as below:
>>>>    {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data}
>>>> g. iterate through steps b to f until (pending_bytes > 0)  
>>>
>>> s/until/while/  
>>
>> Ok.
>>
>>>   
>>>> h. Write {VFIO_MIG_FLAG_END_OF_STATE}
>>>>
>>>> .save_live_iterate runs outside the iothread lock in the migration case, which
>>>> could race with asynchronous call to get dirty page list causing data corruption
>>>> in mapped migration region. Mutex added here to serial migration buffer read
>>>> operation.  
>>>
>>> Would we be ahead to use different offsets within the region for device
>>> data vs dirty bitmap to avoid this?
>>>  
>>
>> Lock will still be required to serialize the read/write operations on
>> vfio_device_migration_info structure in the region.
>>
>>
>>>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
>>>> Reviewed-by: Neo Jia <cjia@nvidia.com>
>>>> ---
>>>>  hw/vfio/migration.c | 212 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>>>>  1 file changed, 212 insertions(+)
>>>>
>>>> diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
>>>> index fe0887c27664..0a2f30872316 100644
>>>> --- a/hw/vfio/migration.c
>>>> +++ b/hw/vfio/migration.c
>>>> @@ -107,6 +107,111 @@ static int vfio_migration_set_state(VFIODevice *vbasedev, uint32_t state)
>>>>      return 0;
>>>>  }
>>>>  
>>>> +static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev)
>>>> +{
>>>> +    VFIOMigration *migration = vbasedev->migration;
>>>> +    VFIORegion *region = &migration->region.buffer;
>>>> +    uint64_t data_offset = 0, data_size = 0;
>>>> +    int ret;
>>>> +
>>>> +    ret = pread(vbasedev->fd, &data_offset, sizeof(data_offset),
>>>> +                region->fd_offset + offsetof(struct vfio_device_migration_info,
>>>> +                                             data_offset));
>>>> +    if (ret != sizeof(data_offset)) {
>>>> +        error_report("Failed to get migration buffer data offset %d",
>>>> +                     ret);
>>>> +        return -EINVAL;
>>>> +    }
>>>> +
>>>> +    ret = pread(vbasedev->fd, &data_size, sizeof(data_size),
>>>> +                region->fd_offset + offsetof(struct vfio_device_migration_info,
>>>> +                                             data_size));
>>>> +    if (ret != sizeof(data_size)) {
>>>> +        error_report("Failed to get migration buffer data size %d",
>>>> +                     ret);
>>>> +        return -EINVAL;
>>>> +    }
>>>> +
>>>> +    if (data_size > 0) {
>>>> +        void *buf = NULL;
>>>> +        bool buffer_mmaped = false;
>>>> +
>>>> +        if (region->mmaps) {
>>>> +            int i;
>>>> +
>>>> +            for (i = 0; i < region->nr_mmaps; i++) {
>>>> +                if ((data_offset >= region->mmaps[i].offset) &&
>>>> +                    (data_offset < region->mmaps[i].offset +
>>>> +                                   region->mmaps[i].size)) {
>>>> +                    buf = region->mmaps[i].mmap + (data_offset -
>>>> +                                                   region->mmaps[i].offset);  
>>>
>>> So you're expecting that data_offset is somewhere within the data
>>> area.  Why doesn't the data always simply start at the beginning of the
>>> data area?  ie. data_offset would coincide with the beginning of the
>>> mmap'able area (if supported) and be static.  Does this enable some
>>> functionality in the vendor driver?  
>>
>> Do you want to enforce that to vendor driver?
>> From the feedback on previous version I thought vendor driver should
>> define data_offset within the region
>> "I'd suggest that the vendor driver expose a read-only
>> data_offset that matches a sparse mmap capability entry should the
>> driver support mmap.  The use should always read or write data from the
>> vendor defined data_offset"
>>
>> This also adds flexibility to vendor driver such that vendor driver can
>> define different data_offset for device data and dirty page bitmap
>> within same mmaped region.
> 
> I agree, it adds flexibility, the protocol was not evident to me until
> I got here though.
> 
>>>  Does resume data need to be
>>> written from the same offset where it's read?  
>>
>> No, resume data should be written from the data_offset that vendor
>> driver provided during resume.
> 
> s/resume/save/?
> 
> Or is this saying that on resume that the vendor driver is requesting a
> specific block of data via data_offset? 

Correct.

Thanks,
Kirti

> I think resume is going to be
> directed by the user, writing in the same order they received the
> data.  Thanks,
> 
> Alex
> 
<snip>
Alex Williamson June 21, 2019, 8:02 p.m. UTC | #6
On Sat, 22 Jun 2019 01:08:40 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

<snip>
> >>>  Does resume data need to be
> >>> written from the same offset where it's read?    
> >>
> >> No, resume data should be written from the data_offset that vendor
> >> driver provided during resume.  

A)

> > s/resume/save/?

B)
 
> > Or is this saying that on resume that the vendor driver is requesting a
> > specific block of data via data_offset?   
> 
> Correct.

Which one is correct?  Thanks,

Alex

> > I think resume is going to be
> > directed by the user, writing in the same order they received the
> > data.  Thanks,
Kirti Wankhede June 21, 2019, 8:07 p.m. UTC | #7
On 6/22/2019 1:32 AM, Alex Williamson wrote:
> On Sat, 22 Jun 2019 01:08:40 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
> 
<snip>
> 
> A)
> 
>>> s/resume/save/?
> 
> B)
>  
>>> Or is this saying that on resume that the vendor driver is requesting a
>>> specific block of data via data_offset?   
>>
>> Correct.
> 
> Which one is correct?  Thanks,
> 

B is correct.
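
To make the protocol concrete, here is a minimal userspace-side sketch
of one resume write, assuming the vfio_device_migration_info layout
proposed by this series; vfio_load_buffer() and its arguments are
hypothetical names for illustration, not code from this patch:

#include <errno.h>      /* EINVAL */
#include <stddef.h>     /* offsetof */
#include <stdint.h>     /* uint64_t */
#include <unistd.h>     /* pread, pwrite */
#include <linux/vfio.h> /* vfio_device_migration_info, added by this series */

static int vfio_load_buffer(int device_fd, uint64_t region_fd_offset,
                            void *data, uint64_t size)
{
    uint64_t data_offset = 0;
    ssize_t ret;

    /* The vendor driver reports where it wants this block written. */
    ret = pread(device_fd, &data_offset, sizeof(data_offset),
                region_fd_offset +
                offsetof(struct vfio_device_migration_info, data_offset));
    if (ret != sizeof(data_offset)) {
        return -EINVAL;
    }

    /* Land the blob at the requested offset; pwrite() shown, though a
     * user could equally memcpy into an mmap'd sparse area covering
     * data_offset. */
    ret = pwrite(device_fd, data, size, region_fd_offset + data_offset);
    if (ret < 0 || (uint64_t)ret != size) {
        return -EINVAL;
    }

    /* Writing data_size signals the vendor driver to consume (and
     * validate) the staging buffer. */
    ret = pwrite(device_fd, &size, sizeof(size),
                 region_fd_offset +
                 offsetof(struct vfio_device_migration_info, data_size));
    return (ret == sizeof(size)) ? 0 : -EINVAL;
}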

Thanks,
Kirti


> Alex
> 
>>> I think resume is going to be
>>> directed by the user, writing in the same order they received the
>>> data.  Thanks,
Alex Williamson June 21, 2019, 8:32 p.m. UTC | #8
On Sat, 22 Jun 2019 01:37:47 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

<snip>
> >>>     
> >>>>>  Does resume data need to be
> >>>>> written from the same offset where it's read?      
> >>>>
> >>>> No, resume data should be written from the data_offset that vendor
> >>>> driver provided during resume.    
> > 
> > A)
> >   
> >>> s/resume/save/?  
> > 
> > B)
> >    
> >>> Or is this saying that on resume that the vendor driver is requesting a
> >>> specific block of data via data_offset?     
> >>
> >> Correct.  
> > 
> > Which one is correct?  Thanks,
> >   
> 
> B is correct.

Shouldn't data_offset be stored in the migration stream then so we can
at least verify that source and target are in sync?  I'm not getting a
sense that this protocol involves any sort of sanity or integrity
testing on the vendor driver end, the user can just feed garbage into
the device on resume and watch the results :-\  Thanks,

Alex
Kirti Wankhede June 21, 2019, 9:05 p.m. UTC | #9
On 6/22/2019 2:02 AM, Alex Williamson wrote:
<snip>
>>>
>>> A)
>>>   
>>>>> s/resume/save/?  
>>>
>>> B)
>>>    
>>>>> Or is this saying that on resume that the vendor driver is requesting a
>>>>> specific block of data via data_offset?     
>>>>
>>>> Correct.  
>>>
>>> Which one is correct?  Thanks,
>>>   
>>
>> B is correct.
> 
> Shouldn't data_offset be stored in the migration stream then so we can
> at least verify that source and target are in sync? 

Why? data_offset is an offset within the migration region and has
nothing to do with the data stream. While resuming, the vendor driver
can ask for data at a different offset in the migration region.

> I'm not getting a
> sense that this protocol involves any sort of sanity or integrity
> testing on the vendor driver end, the user can just feed garbage into
> the device on resume and watch the results :-\  Thanks,
>

The vendor driver should be able to do sanity and integrity checks
within its opaque data. If such a check fails, it should return failure
for the access to the field in the migration region structure.

Thanks,
Kirti

> Alex
>
Alex Williamson June 21, 2019, 10:13 p.m. UTC | #10
On Sat, 22 Jun 2019 02:35:02 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> On 6/22/2019 2:02 AM, Alex Williamson wrote:
<snip>
> >>>>>>>  Does resume data need to be
> >>>>>>> written from the same offset where it's read?        
> >>>>>>
> >>>>>> No, resume data should be written from the data_offset that vendor
> >>>>>> driver provided during resume.      
> >>>
> >>> A)
> >>>     
> >>>>> s/resume/save/?    
> >>>
> >>> B)
> >>>      
> >>>>> Or is this saying that on resume that the vendor driver is requesting a
> >>>>> specific block of data via data_offset?       
> >>>>
> >>>> Correct.    
> >>>
> >>> Which one is correct?  Thanks,
> >>>     
> >>
> >> B is correct.  
> > 
> > Shouldn't data_offset be stored in the migration stream then so we can
> > at least verify that source and target are in sync?   
> 
> Why? data_offset is an offset within the migration region and has
> nothing to do with the data stream. While resuming, the vendor driver
> can ask for data at a different offset in the migration region.

So the data is opaque and the sequencing is opaque, the user should
have no expectation that there's any relationship between where the
data was read from while saving versus where the target device is
requesting the next block be written while resuming.  We have a data
blob and a size and we do what we're told.
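
As a loose illustration of that user-directed replay, assuming the
{VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, data} framing this patch
writes and a hypothetical vfio_load_buffer() that lands one blob at
whatever data_offset the vendor driver currently reports:

    /* Sketch only: replay blobs in the order they appear in the stream. */
    uint64_t flag = qemu_get_be64(f);
    int ret;

    while (flag == VFIO_MIG_FLAG_DEV_DATA_STATE) {
        uint64_t data_size = qemu_get_be64(f);

        if (data_size) {
            void *buf = g_malloc(data_size);

            qemu_get_buffer(f, buf, data_size);
            ret = vfio_load_buffer(vbasedev, buf, data_size); /* hypothetical */
            g_free(buf);
            if (ret) {
                return ret;
            }
        }
        flag = qemu_get_be64(f); /* next record or VFIO_MIG_FLAG_END_OF_STATE */
    }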

> > I'm not getting a
> > sense that this protocol involves any sort of sanity or integrity
> > testing on the vendor driver end, the user can just feed garbage into
> > the device on resume and watch the results :-\  Thanks,
> >  
> 
> The vendor driver should be able to do sanity and integrity checks
> within its opaque data. If such a check fails, it should return failure
> for the access to the field in the migration region structure.

Would that be a synchronous failure on the write of data_size, which
should result in the device_state moving to invalid?  Thanks,

Alex
Kirti Wankhede June 24, 2019, 2:31 p.m. UTC | #11
<snip>
>>>> B is correct.  
>>>
>>> Shouldn't data_offset be stored in the migration stream then so we can
>>> at least verify that source and target are in sync?   
>>
>> Why? data_offset is an offset within the migration region and has
>> nothing to do with the data stream. While resuming, the vendor driver
>> can ask for data at a different offset in the migration region.
> 
> So the data is opaque and the sequencing is opaque, the user should
> have no expectation that there's any relationship between where the
> data was read from while saving versus where the target device is
> requesting the next block be written while resuming.  We have a data
> blob and a size and we do what we're told.
> 

That's correct.

>>> I'm not getting a
>>> sense that this protocol involves any sort of sanity or integrity
>>> testing on the vendor driver end, the user can just feed garbage into
>>> the device on resume and watch the results :-\  Thanks,
>>>  
>>
>> The vendor driver should be able to do sanity and integrity checks
>> within its opaque data. If such a check fails, it should return failure
>> for the access to the field in the migration region structure.
> 
> Would that be a synchronous failure on the write of data_size, which
> should result in the device_state moving to invalid?  Thanks,
> 

If the data section of the migration region is mmapped, then on a write
to data_size the vendor driver should read the staging buffer, validate
the data, and return sizeof(data_size) on success or an error (< 0) on
failure. If the data section is trapped, the write to the data section
should return accordingly as the data is received. On error, the
migration/restore would fail.
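
For illustration, a rough vendor-driver-side sketch of that data_size
write handler; my_device, mdev->staging and my_validate_blob() are
hypothetical stand-ins, not part of this series:

/* Hypothetical mdev vendor driver: the user writes data_size after
 * filling the staging buffer; consume and validate the blob here. */
static ssize_t my_mig_write_data_size(struct my_device *mdev,
                                      const char __user *buf, size_t count)
{
    u64 data_size;

    if (count != sizeof(data_size))
        return -EINVAL;

    if (copy_from_user(&data_size, buf, count))
        return -EFAULT;

    /* Run the driver's own sanity/integrity checks on the opaque blob
     * the user just wrote into the staging buffer. */
    if (my_validate_blob(mdev->staging, data_size))
        return -EINVAL; /* failure surfaces on the user's write() */

    return count; /* i.e. sizeof(data_size) on success */
}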

Thanks,
Kirti
Yan Zhao June 25, 2019, 3:30 a.m. UTC | #12
On Fri, Jun 21, 2019 at 08:31:53AM +0800, Yan Zhao wrote:
> On Thu, Jun 20, 2019 at 10:37:36PM +0800, Kirti Wankhede wrote:
> > Added .save_live_pending, .save_live_iterate and .save_live_complete_precopy
> > functions. These functions handles pre-copy and stop-and-copy phase.
> > 
<snip>
> > +    ret = pread(vbasedev->fd, &data_size, sizeof(data_size),
> > +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> > +                                             data_size));
> > +    if (ret != sizeof(data_size)) {
> > +        error_report("Failed to get migration buffer data size %d",
> > +                     ret);
> > +        return -EINVAL;
> > +    }
> > +
> how big is the data_size ? 
> if this size is too big, it may take too much time and block others.
> 
> > +    if (data_size > 0) {
> > +        void *buf = NULL;
> > +        bool buffer_mmaped = false;
> > +
> > +        if (region->mmaps) {
> > +            int i;
> > +
> > +            for (i = 0; i < region->nr_mmaps; i++) {
> > +                if ((data_offset >= region->mmaps[i].offset) &&
> > +                    (data_offset < region->mmaps[i].offset +
> > +                                   region->mmaps[i].size)) {
> > +                    buf = region->mmaps[i].mmap + (data_offset -
> > +                                                   region->mmaps[i].offset);
> > +                    buffer_mmaped = true;
> > +                    break;
> > +                }
> > +            }
> > +        }
> > +
> > +        if (!buffer_mmaped) {
> > +            buf = g_malloc0(data_size);
> > +            ret = pread(vbasedev->fd, buf, data_size,
> > +                        region->fd_offset + data_offset);
> > +            if (ret != data_size) {
> > +                error_report("Failed to get migration data %d", ret);
> > +                g_free(buf);
> > +                return -EINVAL;
> > +            }
> > +        }
> > +
> > +        qemu_put_be64(f, data_size);
> > +        qemu_put_buffer(f, buf, data_size);
> > +
> > +        if (!buffer_mmaped) {
> > +            g_free(buf);
> > +        }
> > +        migration->pending_bytes -= data_size;
> > +    } else {
> > +        qemu_put_be64(f, data_size);
> > +    }
> > +
> > +    ret = qemu_file_get_error(f);
> > +
> > +    return data_size;
> > +}
> > +
> > +static int vfio_update_pending(VFIODevice *vbasedev)
> > +{
> > +    VFIOMigration *migration = vbasedev->migration;
> > +    VFIORegion *region = &migration->region.buffer;
> > +    uint64_t pending_bytes = 0;
> > +    int ret;
> > +
> > +    ret = pread(vbasedev->fd, &pending_bytes, sizeof(pending_bytes),
> > +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> > +                                             pending_bytes));
> > +    if ((ret < 0) || (ret != sizeof(pending_bytes))) {
> > +        error_report("Failed to get pending bytes %d", ret);
> > +        migration->pending_bytes = 0;
> > +        return (ret < 0) ? ret : -EINVAL;
> > +    }
> > +
> > +    migration->pending_bytes = pending_bytes;
> > +    return 0;
> > +}
> > +
> > +static int vfio_save_device_config_state(QEMUFile *f, void *opaque)
> > +{
> > +    VFIODevice *vbasedev = opaque;
> > +
> > +    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_CONFIG_STATE);
> > +
> > +    if (vbasedev->type == VFIO_DEVICE_TYPE_PCI) {
> > +        vfio_pci_save_config(vbasedev, f);
> > +    }
> > +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> > +
> > +    return qemu_file_get_error(f);
> > +}
> > +
> >  /* ---------------------------------------------------------------------- */
> >  
> >  static int vfio_save_setup(QEMUFile *f, void *opaque)
> > @@ -163,9 +268,116 @@ static void vfio_save_cleanup(void *opaque)
> >      }
> >  }
> >  
> > +static void vfio_save_pending(QEMUFile *f, void *opaque,
> > +                              uint64_t threshold_size,
> > +                              uint64_t *res_precopy_only,
> > +                              uint64_t *res_compatible,
> > +                              uint64_t *res_postcopy_only)
> > +{
> > +    VFIODevice *vbasedev = opaque;
> > +    VFIOMigration *migration = vbasedev->migration;
> > +    int ret;
> > +
> > +    ret = vfio_update_pending(vbasedev);
> > +    if (ret) {
> > +        return;
> > +    }
> > +
> > +    if (vbasedev->device_state & VFIO_DEVICE_STATE_RUNNING) {
> > +        *res_precopy_only += migration->pending_bytes;
> > +    } else {
> > +        *res_postcopy_only += migration->pending_bytes;
> > +    }
By definition,
- res_precopy_only is for data which must be migrated in the precopy phase
  or in the stopped state, in other words - before the target VM starts
- res_postcopy_only is for data which must be migrated in the postcopy phase
  or in the stopped state, in other words - after the source VM stops
So we can only determine the data type by the nature of the data itself,
i.e. if it is device state data which must be copied after the source VM
stops and before the target VM starts, it belongs to res_precopy_only.

It is not right to determine the data type from the current device state.
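
As a sketch, under that definition the reporting in vfio_save_pending()
would be unconditional, because VFIO device state must always land
before the target VM starts:

    /* Report by the nature of the data, not the current device state:
     * device state must be transferred before the target VM starts.
     */
    *res_precopy_only += migration->pending_bytes;
    *res_compatible += 0;
    *res_postcopy_only += 0;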

Thanks
Yan

> > +    *res_compatible += 0;
> > +}
> > +
> > +static int vfio_save_iterate(QEMUFile *f, void *opaque)
> > +{
> > +    VFIODevice *vbasedev = opaque;
> > +    VFIOMigration *migration = vbasedev->migration;
> > +    int ret;
> > +
> > +    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
> > +
> > +    qemu_mutex_lock(&migration->lock);
> > +    ret = vfio_save_buffer(f, vbasedev);
> > +    qemu_mutex_unlock(&migration->lock);
> > +
> > +    if (ret < 0) {
> > +        error_report("vfio_save_buffer failed %s",
> > +                     strerror(errno));
> > +        return ret;
> > +    }
> > +
> > +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> > +
> > +    ret = qemu_file_get_error(f);
> > +    if (ret) {
> > +        return ret;
> > +    }
> > +
> > +    return ret;
> > +}
> > +
> > +static int vfio_save_complete_precopy(QEMUFile *f, void *opaque)
> > +{
> > +    VFIODevice *vbasedev = opaque;
> > +    VFIOMigration *migration = vbasedev->migration;
> > +    int ret;
> > +
> > +    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_SAVING);
> > +    if (ret) {
> > +        error_report("Failed to set state STOP and SAVING");
> > +        return ret;
> > +    }
> > +
> > +    ret = vfio_save_device_config_state(f, opaque);
> > +    if (ret) {
> > +        return ret;
> > +    }
> > +
> > +    ret = vfio_update_pending(vbasedev);
> > +    if (ret) {
> > +        return ret;
> > +    }
> > +
> > +    while (migration->pending_bytes > 0) {
> > +        qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
> > +        ret = vfio_save_buffer(f, vbasedev);
> > +        if (ret < 0) {
> > +            error_report("Failed to save buffer");
> > +            return ret;
> > +        } else if (ret == 0) {
> > +            break;
> > +        }
> > +
> > +        ret = vfio_update_pending(vbasedev);
> > +        if (ret) {
> > +            return ret;
> > +        }
> > +    }
> > +
> > +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> > +
> > +    ret = qemu_file_get_error(f);
> > +    if (ret) {
> > +        return ret;
> > +    }
> > +
> > +    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_STOPPED);
> > +    if (ret) {
> > +        error_report("Failed to set state STOPPED");
> > +        return ret;
> > +    }
> > +    return ret;
> > +}
> > +
> >  static SaveVMHandlers savevm_vfio_handlers = {
> >      .save_setup = vfio_save_setup,
> >      .save_cleanup = vfio_save_cleanup,
> > +    .save_live_pending = vfio_save_pending,
> > +    .save_live_iterate = vfio_save_iterate,
> > +    .save_live_complete_precopy = vfio_save_complete_precopy,
> >  };
> >  
> >  /* ---------------------------------------------------------------------- */
> > -- 
> > 2.7.0
> > 
>
Dr. David Alan Gilbert June 28, 2019, 8:50 a.m. UTC | #13
* Yan Zhao (yan.y.zhao@intel.com) wrote:
> On Fri, Jun 21, 2019 at 08:31:53AM +0800, Yan Zhao wrote:
> > On Thu, Jun 20, 2019 at 10:37:36PM +0800, Kirti Wankhede wrote:
> > > Added .save_live_pending, .save_live_iterate and .save_live_complete_precopy
> > > functions. These functions handles pre-copy and stop-and-copy phase.
> > > 
> > > In _SAVING|_RUNNING device state or pre-copy phase:
> > > - read pending_bytes
> > > - read data_offset - indicates kernel driver to write data to staging
> > >   buffer which is mmapped.
> > > - read data_size - amount of data in bytes written by vendor driver in migration
> > >   region.
> > > - if data section is trapped, pread() number of bytes in data_size, from
> > >   data_offset.
> > > - if data section is mmaped, read mmaped buffer of size data_size.
> > > - Write data packet to file stream as below:
> > > {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data,
> > > VFIO_MIG_FLAG_END_OF_STATE }
> > > 
> > > In _SAVING device state or stop-and-copy phase
> > > a. read config space of device and save to migration file stream. This
> > >    doesn't need to be from vendor driver. Any other special config state
> > >    from driver can be saved as data in following iteration.
> > > b. read pending_bytes - indicates kernel driver to write data to staging
> > >    buffer which is mmapped.
> > > c. read data_size - amount of data in bytes written by vendor driver in
> > >    migration region.
> > > d. if data section is trapped, pread() from data_offset of size data_size.
> > > e. if data section is mmaped, read mmaped buffer of size data_size.
> > > f. Write data packet as below:
> > >    {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data}
> > > g. iterate through steps b to f until (pending_bytes > 0)
> > > h. Write {VFIO_MIG_FLAG_END_OF_STATE}
> > > 
> > > .save_live_iterate runs outside the iothread lock in the migration case, which
> > > could race with asynchronous call to get dirty page list causing data corruption
> > > in mapped migration region. Mutex added here to serial migration buffer read
> > > operation.
> > > 
> > > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> > > Reviewed-by: Neo Jia <cjia@nvidia.com>
> > > ---
> > >  hw/vfio/migration.c | 212 ++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 212 insertions(+)
> > > 
> > > diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
> > > index fe0887c27664..0a2f30872316 100644
> > > --- a/hw/vfio/migration.c
> > > +++ b/hw/vfio/migration.c
> > > @@ -107,6 +107,111 @@ static int vfio_migration_set_state(VFIODevice *vbasedev, uint32_t state)
> > >      return 0;
> > >  }
> > >  
> > > +static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev)
> > > +{
> > > +    VFIOMigration *migration = vbasedev->migration;
> > > +    VFIORegion *region = &migration->region.buffer;
> > > +    uint64_t data_offset = 0, data_size = 0;
> > > +    int ret;
> > > +
> > > +    ret = pread(vbasedev->fd, &data_offset, sizeof(data_offset),
> > > +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> > > +                                             data_offset));
> > > +    if (ret != sizeof(data_offset)) {
> > > +        error_report("Failed to get migration buffer data offset %d",
> > > +                     ret);
> > > +        return -EINVAL;
> > > +    }
> > > +
> > > +    ret = pread(vbasedev->fd, &data_size, sizeof(data_size),
> > > +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> > > +                                             data_size));
> > > +    if (ret != sizeof(data_size)) {
> > > +        error_report("Failed to get migration buffer data size %d",
> > > +                     ret);
> > > +        return -EINVAL;
> > > +    }
> > > +
> > how big is the data_size ? 
> > if this size is too big, it may take too much time and block others.
> > 
> > > +    if (data_size > 0) {
> > > +        void *buf = NULL;
> > > +        bool buffer_mmaped = false;
> > > +
> > > +        if (region->mmaps) {
> > > +            int i;
> > > +
> > > +            for (i = 0; i < region->nr_mmaps; i++) {
> > > +                if ((data_offset >= region->mmaps[i].offset) &&
> > > +                    (data_offset < region->mmaps[i].offset +
> > > +                                   region->mmaps[i].size)) {
> > > +                    buf = region->mmaps[i].mmap + (data_offset -
> > > +                                                   region->mmaps[i].offset);
> > > +                    buffer_mmaped = true;
> > > +                    break;
> > > +                }
> > > +            }
> > > +        }
> > > +
> > > +        if (!buffer_mmaped) {
> > > +            buf = g_malloc0(data_size);
> > > +            ret = pread(vbasedev->fd, buf, data_size,
> > > +                        region->fd_offset + data_offset);
> > > +            if (ret != data_size) {
> > > +                error_report("Failed to get migration data %d", ret);
> > > +                g_free(buf);
> > > +                return -EINVAL;
> > > +            }
> > > +        }
> > > +
> > > +        qemu_put_be64(f, data_size);
> > > +        qemu_put_buffer(f, buf, data_size);
> > > +
> > > +        if (!buffer_mmaped) {
> > > +            g_free(buf);
> > > +        }
> > > +        migration->pending_bytes -= data_size;
> > > +    } else {
> > > +        qemu_put_be64(f, data_size);
> > > +    }
> > > +
> > > +    ret = qemu_file_get_error(f);
> > > +
> > > +    return data_size;
> > > +}
> > > +
> > > +static int vfio_update_pending(VFIODevice *vbasedev)
> > > +{
> > > +    VFIOMigration *migration = vbasedev->migration;
> > > +    VFIORegion *region = &migration->region.buffer;
> > > +    uint64_t pending_bytes = 0;
> > > +    int ret;
> > > +
> > > +    ret = pread(vbasedev->fd, &pending_bytes, sizeof(pending_bytes),
> > > +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> > > +                                             pending_bytes));
> > > +    if ((ret < 0) || (ret != sizeof(pending_bytes))) {
> > > +        error_report("Failed to get pending bytes %d", ret);
> > > +        migration->pending_bytes = 0;
> > > +        return (ret < 0) ? ret : -EINVAL;
> > > +    }
> > > +
> > > +    migration->pending_bytes = pending_bytes;
> > > +    return 0;
> > > +}
> > > +
> > > +static int vfio_save_device_config_state(QEMUFile *f, void *opaque)
> > > +{
> > > +    VFIODevice *vbasedev = opaque;
> > > +
> > > +    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_CONFIG_STATE);
> > > +
> > > +    if (vbasedev->type == VFIO_DEVICE_TYPE_PCI) {
> > > +        vfio_pci_save_config(vbasedev, f);
> > > +    }
> > > +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> > > +
> > > +    return qemu_file_get_error(f);
> > > +}
> > > +
> > >  /* ---------------------------------------------------------------------- */
> > >  
> > >  static int vfio_save_setup(QEMUFile *f, void *opaque)
> > > @@ -163,9 +268,116 @@ static void vfio_save_cleanup(void *opaque)
> > >      }
> > >  }
> > >  
> > > +static void vfio_save_pending(QEMUFile *f, void *opaque,
> > > +                              uint64_t threshold_size,
> > > +                              uint64_t *res_precopy_only,
> > > +                              uint64_t *res_compatible,
> > > +                              uint64_t *res_postcopy_only)
> > > +{
> > > +    VFIODevice *vbasedev = opaque;
> > > +    VFIOMigration *migration = vbasedev->migration;
> > > +    int ret;
> > > +
> > > +    ret = vfio_update_pending(vbasedev);
> > > +    if (ret) {
> > > +        return;
> > > +    }
> > > +
> > > +    if (vbasedev->device_state & VFIO_DEVICE_STATE_RUNNING) {
> > > +        *res_precopy_only += migration->pending_bytes;
> > > +    } else {
> > > +        *res_postcopy_only += migration->pending_bytes;
> > > +    }
> by definition,
> - res_precopy_only is for data which must be migrated in precopy phase
>    or in stopped state, in other words - before target vm start
> - res_postcopy_only is for data which must be migrated in postcopy phase
>   or in stopped state, in other words - after source vm stop
> So, we can only determining data type by the nature of the data. i.e.
> if it is device state data which must be copied after source vm stop and
> before target vm start, it belongs to res_precopy_only.
> 
> It is not right to determining data type by current device state.

Right; you can determine it by whether postcopy is *enabled* or not.
However, since this isn't ready for postcopy yet anyway, just add it to
res_postcopy_only all the time;  then you can come back to postcopy
later.
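
i.e. once this grows postcopy support, the check would look something
like this (sketch, assuming QEMU's migrate_postcopy() helper):

    if (migrate_postcopy()) {
        *res_postcopy_only += migration->pending_bytes;
    } else {
        *res_precopy_only += migration->pending_bytes;
    }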

Dave

> Thanks
> Yan
> 
> > > +    *res_compatible += 0;
> > > +}
> > > +
> > > +static int vfio_save_iterate(QEMUFile *f, void *opaque)
> > > +{
> > > +    VFIODevice *vbasedev = opaque;
> > > +    VFIOMigration *migration = vbasedev->migration;
> > > +    int ret;
> > > +
> > > +    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
> > > +
> > > +    qemu_mutex_lock(&migration->lock);
> > > +    ret = vfio_save_buffer(f, vbasedev);
> > > +    qemu_mutex_unlock(&migration->lock);
> > > +
> > > +    if (ret < 0) {
> > > +        error_report("vfio_save_buffer failed %s",
> > > +                     strerror(errno));
> > > +        return ret;
> > > +    }
> > > +
> > > +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> > > +
> > > +    ret = qemu_file_get_error(f);
> > > +    if (ret) {
> > > +        return ret;
> > > +    }
> > > +
> > > +    return ret;
> > > +}
> > > +
> > > +static int vfio_save_complete_precopy(QEMUFile *f, void *opaque)
> > > +{
> > > +    VFIODevice *vbasedev = opaque;
> > > +    VFIOMigration *migration = vbasedev->migration;
> > > +    int ret;
> > > +
> > > +    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_SAVING);
> > > +    if (ret) {
> > > +        error_report("Failed to set state STOP and SAVING");
> > > +        return ret;
> > > +    }
> > > +
> > > +    ret = vfio_save_device_config_state(f, opaque);
> > > +    if (ret) {
> > > +        return ret;
> > > +    }
> > > +
> > > +    ret = vfio_update_pending(vbasedev);
> > > +    if (ret) {
> > > +        return ret;
> > > +    }
> > > +
> > > +    while (migration->pending_bytes > 0) {
> > > +        qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
> > > +        ret = vfio_save_buffer(f, vbasedev);
> > > +        if (ret < 0) {
> > > +            error_report("Failed to save buffer");
> > > +            return ret;
> > > +        } else if (ret == 0) {
> > > +            break;
> > > +        }
> > > +
> > > +        ret = vfio_update_pending(vbasedev);
> > > +        if (ret) {
> > > +            return ret;
> > > +        }
> > > +    }
> > > +
> > > +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> > > +
> > > +    ret = qemu_file_get_error(f);
> > > +    if (ret) {
> > > +        return ret;
> > > +    }
> > > +
> > > +    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_STOPPED);
> > > +    if (ret) {
> > > +        error_report("Failed to set state STOPPED");
> > > +        return ret;
> > > +    }
> > > +    return ret;
> > > +}
> > > +
> > >  static SaveVMHandlers savevm_vfio_handlers = {
> > >      .save_setup = vfio_save_setup,
> > >      .save_cleanup = vfio_save_cleanup,
> > > +    .save_live_pending = vfio_save_pending,
> > > +    .save_live_iterate = vfio_save_iterate,
> > > +    .save_live_complete_precopy = vfio_save_complete_precopy,
> > >  };
> > >  
> > >  /* ---------------------------------------------------------------------- */
> > > -- 
> > > 2.7.0
> > > 
> > 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
Dr. David Alan Gilbert June 28, 2019, 9:09 a.m. UTC | #14
* Kirti Wankhede (kwankhede@nvidia.com) wrote:
> Added .save_live_pending, .save_live_iterate and .save_live_complete_precopy
> functions. These functions handles pre-copy and stop-and-copy phase.
> 
> In _SAVING|_RUNNING device state or pre-copy phase:
> - read pending_bytes
> - read data_offset - indicates kernel driver to write data to staging
>   buffer which is mmapped.
> - read data_size - amount of data in bytes written by vendor driver in migration
>   region.
> - if data section is trapped, pread() number of bytes in data_size, from
>   data_offset.
> - if data section is mmaped, read mmaped buffer of size data_size.
> - Write data packet to file stream as below:
> {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data,
> VFIO_MIG_FLAG_END_OF_STATE }
> 
> In _SAVING device state or stop-and-copy phase
> a. read config space of device and save to migration file stream. This
>    doesn't need to be from vendor driver. Any other special config state
>    from driver can be saved as data in following iteration.
> b. read pending_bytes - indicates kernel driver to write data to staging
>    buffer which is mmapped.
> c. read data_size - amount of data in bytes written by vendor driver in
>    migration region.
> d. if data section is trapped, pread() from data_offset of size data_size.
> e. if data section is mmaped, read mmaped buffer of size data_size.
> f. Write data packet as below:
>    {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data}
> g. iterate through steps b to f until (pending_bytes > 0)
> h. Write {VFIO_MIG_FLAG_END_OF_STATE}
> 
> .save_live_iterate runs outside the iothread lock in the migration case, which
> could race with asynchronous call to get dirty page list causing data corruption
> in mapped migration region. Mutex added here to serial migration buffer read
> operation.
> 
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> ---
>  hw/vfio/migration.c | 212 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 212 insertions(+)
> 
> diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
> index fe0887c27664..0a2f30872316 100644
> --- a/hw/vfio/migration.c
> +++ b/hw/vfio/migration.c
> @@ -107,6 +107,111 @@ static int vfio_migration_set_state(VFIODevice *vbasedev, uint32_t state)
>      return 0;
>  }
>  
> +static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev)
> +{
> +    VFIOMigration *migration = vbasedev->migration;
> +    VFIORegion *region = &migration->region.buffer;
> +    uint64_t data_offset = 0, data_size = 0;
> +    int ret;
> +
> +    ret = pread(vbasedev->fd, &data_offset, sizeof(data_offset),
> +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> +                                             data_offset));
> +    if (ret != sizeof(data_offset)) {
> +        error_report("Failed to get migration buffer data offset %d",
> +                     ret);
> +        return -EINVAL;
> +    }

It feels like you need a helper function, so that you can do
something like:

       if (!vfio_dev_read(vbasedev, &data_offset, sizeof(data_offset),
                          region->fd_offset + offsetof(struct vfio_device_migration_info,
                                                data_offset),
                          "data offset")) {
           return -EINVAL;
       }
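
One possible shape for that helper, as an untested sketch:

       static bool vfio_dev_read(VFIODevice *vbasedev, void *buf,
                                 size_t count, off_t offset,
                                 const char *name)
       {
           ssize_t ret = pread(vbasedev->fd, buf, count, offset);

           if (ret != count) {
               error_report("%s: failed to read %s (%zd)",
                            vbasedev->name, name, ret);
               return false;
           }
           return true;
       }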

> +    ret = pread(vbasedev->fd, &data_size, sizeof(data_size),
> +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> +                                             data_size));
> +    if (ret != sizeof(data_size)) {
> +        error_report("Failed to get migration buffer data size %d",
> +                     ret);
> +        return -EINVAL;
> +    }
> +
> +    if (data_size > 0) {
> +        void *buf = NULL;
> +        bool buffer_mmaped = false;
> +
> +        if (region->mmaps) {
> +            int i;
> +
> +            for (i = 0; i < region->nr_mmaps; i++) {
> +                if ((data_offset >= region->mmaps[i].offset) &&
> +                    (data_offset < region->mmaps[i].offset +
> +                                   region->mmaps[i].size)) {
> +                    buf = region->mmaps[i].mmap + (data_offset -
> +                                                   region->mmaps[i].offset);
> +                    buffer_mmaped = true;
> +                    break;
> +                }
> +            }
> +        }
> +
> +        if (!buffer_mmaped) {
> +            buf = g_malloc0(data_size);
> +            ret = pread(vbasedev->fd, buf, data_size,
> +                        region->fd_offset + data_offset);
> +            if (ret != data_size) {
> +                error_report("Failed to get migration data %d", ret);
> +                g_free(buf);
> +                return -EINVAL;
> +            }
> +        }
> +
> +        qemu_put_be64(f, data_size);
> +        qemu_put_buffer(f, buf, data_size);
> +
> +        if (!buffer_mmaped) {
> +            g_free(buf);
> +        }
> +        migration->pending_bytes -= data_size;
> +    } else {
> +        qemu_put_be64(f, data_size);
> +    }
> +
> +    ret = qemu_file_get_error(f);

You're ignoring that return value; checking for errors is not that
important on the saving side - although you should check when you're
looping on data, so that you fail quickly - and it's more of an issue
on the load side.
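
e.g. something like (sketch):

       ret = qemu_file_get_error(f);
       if (ret) {
           return ret;    /* fail quickly on stream errors */
       }

       return data_size;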

> +    return data_size;
> +}
> +
> +static int vfio_update_pending(VFIODevice *vbasedev)
> +{
> +    VFIOMigration *migration = vbasedev->migration;
> +    VFIORegion *region = &migration->region.buffer;
> +    uint64_t pending_bytes = 0;
> +    int ret;
> +
> +    ret = pread(vbasedev->fd, &pending_bytes, sizeof(pending_bytes),
> +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> +                                             pending_bytes));
> +    if ((ret < 0) || (ret != sizeof(pending_bytes))) {
> +        error_report("Failed to get pending bytes %d", ret);
> +        migration->pending_bytes = 0;
> +        return (ret < 0) ? ret : -EINVAL;
> +    }
> +
> +    migration->pending_bytes = pending_bytes;
> +    return 0;
> +}
> +
> +static int vfio_save_device_config_state(QEMUFile *f, void *opaque)
> +{
> +    VFIODevice *vbasedev = opaque;
> +
> +    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_CONFIG_STATE);
> +
> +    if (vbasedev->type == VFIO_DEVICE_TYPE_PCI) {
> +        vfio_pci_save_config(vbasedev, f);
> +    }
> +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> +
> +    return qemu_file_get_error(f);
> +}
> +
>  /* ---------------------------------------------------------------------- */
>  
>  static int vfio_save_setup(QEMUFile *f, void *opaque)
> @@ -163,9 +268,116 @@ static void vfio_save_cleanup(void *opaque)
>      }
>  }
>  
> +static void vfio_save_pending(QEMUFile *f, void *opaque,
> +                              uint64_t threshold_size,
> +                              uint64_t *res_precopy_only,
> +                              uint64_t *res_compatible,
> +                              uint64_t *res_postcopy_only)
> +{
> +    VFIODevice *vbasedev = opaque;
> +    VFIOMigration *migration = vbasedev->migration;
> +    int ret;
> +
> +    ret = vfio_update_pending(vbasedev);
> +    if (ret) {
> +        return;
> +    }
> +
> +    if (vbasedev->device_state & VFIO_DEVICE_STATE_RUNNING) {
> +        *res_precopy_only += migration->pending_bytes;
> +    } else {
> +        *res_postcopy_only += migration->pending_bytes;
> +    }
> +    *res_compatible += 0;
> +}
> +
> +static int vfio_save_iterate(QEMUFile *f, void *opaque)
> +{
> +    VFIODevice *vbasedev = opaque;
> +    VFIOMigration *migration = vbasedev->migration;
> +    int ret;
> +
> +    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
> +
> +    qemu_mutex_lock(&migration->lock);
> +    ret = vfio_save_buffer(f, vbasedev);
> +    qemu_mutex_unlock(&migration->lock);
> +
> +    if (ret < 0) {
> +        error_report("vfio_save_buffer failed %s",
> +                     strerror(errno));
> +        return ret;
> +    }
> +
> +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> +
> +    ret = qemu_file_get_error(f);
> +    if (ret) {
> +        return ret;
> +    }
> +
> +    return ret;
> +}
> +
> +static int vfio_save_complete_precopy(QEMUFile *f, void *opaque)
> +{
> +    VFIODevice *vbasedev = opaque;
> +    VFIOMigration *migration = vbasedev->migration;
> +    int ret;
> +
> +    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_SAVING);
> +    if (ret) {
> +        error_report("Failed to set state STOP and SAVING");
> +        return ret;
> +    }
> +
> +    ret = vfio_save_device_config_state(f, opaque);
> +    if (ret) {
> +        return ret;
> +    }
> +
> +    ret = vfio_update_pending(vbasedev);
> +    if (ret) {
> +        return ret;
> +    }
> +
> +    while (migration->pending_bytes > 0) {
> +        qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
> +        ret = vfio_save_buffer(f, vbasedev);
> +        if (ret < 0) {
> +            error_report("Failed to save buffer");
> +            return ret;
> +        } else if (ret == 0) {
> +            break;
> +        }
> +
> +        ret = vfio_update_pending(vbasedev);
> +        if (ret) {
> +            return ret;
> +        }
> +    }
> +
> +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> +
> +    ret = qemu_file_get_error(f);
> +    if (ret) {
> +        return ret;
> +    }
> +
> +    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_STOPPED);
> +    if (ret) {
> +        error_report("Failed to set state STOPPED");
> +        return ret;
> +    }
> +    return ret;
> +}
> +
>  static SaveVMHandlers savevm_vfio_handlers = {
>      .save_setup = vfio_save_setup,
>      .save_cleanup = vfio_save_cleanup,
> +    .save_live_pending = vfio_save_pending,
> +    .save_live_iterate = vfio_save_iterate,
> +    .save_live_complete_precopy = vfio_save_complete_precopy,
>  };
>  
>  /* ---------------------------------------------------------------------- */
> -- 
> 2.7.0
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
Yan Zhao June 28, 2019, 9:16 p.m. UTC | #15
On Fri, Jun 28, 2019 at 04:50:30PM +0800, Dr. David Alan Gilbert wrote:
> * Yan Zhao (yan.y.zhao@intel.com) wrote:
> > On Fri, Jun 21, 2019 at 08:31:53AM +0800, Yan Zhao wrote:
> > > On Thu, Jun 20, 2019 at 10:37:36PM +0800, Kirti Wankhede wrote:
> > > > Added .save_live_pending, .save_live_iterate and .save_live_complete_precopy
> > > > functions. These functions handles pre-copy and stop-and-copy phase.
> > > > 
> > > > In _SAVING|_RUNNING device state or pre-copy phase:
> > > > - read pending_bytes
> > > > - read data_offset - indicates kernel driver to write data to staging
> > > >   buffer which is mmapped.
> > > > - read data_size - amount of data in bytes written by vendor driver in migration
> > > >   region.
> > > > - if data section is trapped, pread() number of bytes in data_size, from
> > > >   data_offset.
> > > > - if data section is mmaped, read mmaped buffer of size data_size.
> > > > - Write data packet to file stream as below:
> > > > {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data,
> > > > VFIO_MIG_FLAG_END_OF_STATE }
> > > > 
> > > > In _SAVING device state or stop-and-copy phase
> > > > a. read config space of device and save to migration file stream. This
> > > >    doesn't need to be from vendor driver. Any other special config state
> > > >    from driver can be saved as data in following iteration.
> > > > b. read pending_bytes - indicates kernel driver to write data to staging
> > > >    buffer which is mmapped.
> > > > c. read data_size - amount of data in bytes written by vendor driver in
> > > >    migration region.
> > > > d. if data section is trapped, pread() from data_offset of size data_size.
> > > > e. if data section is mmaped, read mmaped buffer of size data_size.
> > > > f. Write data packet as below:
> > > >    {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data}
> > > > g. iterate through steps b to f until (pending_bytes > 0)
> > > > h. Write {VFIO_MIG_FLAG_END_OF_STATE}
> > > > 
> > > > .save_live_iterate runs outside the iothread lock in the migration case, which
> > > > could race with asynchronous call to get dirty page list causing data corruption
> > > > in mapped migration region. Mutex added here to serial migration buffer read
> > > > operation.
> > > > 
> > > > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> > > > Reviewed-by: Neo Jia <cjia@nvidia.com>
> > > > ---
> > > >  hw/vfio/migration.c | 212 ++++++++++++++++++++++++++++++++++++++++++++++++++++
> > > >  1 file changed, 212 insertions(+)
> > > > 
> > > > diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
> > > > index fe0887c27664..0a2f30872316 100644
> > > > --- a/hw/vfio/migration.c
> > > > +++ b/hw/vfio/migration.c
> > > > @@ -107,6 +107,111 @@ static int vfio_migration_set_state(VFIODevice *vbasedev, uint32_t state)
> > > >      return 0;
> > > >  }
> > > >  
> > > > +static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev)
> > > > +{
> > > > +    VFIOMigration *migration = vbasedev->migration;
> > > > +    VFIORegion *region = &migration->region.buffer;
> > > > +    uint64_t data_offset = 0, data_size = 0;
> > > > +    int ret;
> > > > +
> > > > +    ret = pread(vbasedev->fd, &data_offset, sizeof(data_offset),
> > > > +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> > > > +                                             data_offset));
> > > > +    if (ret != sizeof(data_offset)) {
> > > > +        error_report("Failed to get migration buffer data offset %d",
> > > > +                     ret);
> > > > +        return -EINVAL;
> > > > +    }
> > > > +
> > > > +    ret = pread(vbasedev->fd, &data_size, sizeof(data_size),
> > > > +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> > > > +                                             data_size));
> > > > +    if (ret != sizeof(data_size)) {
> > > > +        error_report("Failed to get migration buffer data size %d",
> > > > +                     ret);
> > > > +        return -EINVAL;
> > > > +    }
> > > > +
> > > how big is the data_size ? 
> > > if this size is too big, it may take too much time and block others.
> > > 
> > > > +    if (data_size > 0) {
> > > > +        void *buf = NULL;
> > > > +        bool buffer_mmaped = false;
> > > > +
> > > > +        if (region->mmaps) {
> > > > +            int i;
> > > > +
> > > > +            for (i = 0; i < region->nr_mmaps; i++) {
> > > > +                if ((data_offset >= region->mmaps[i].offset) &&
> > > > +                    (data_offset < region->mmaps[i].offset +
> > > > +                                   region->mmaps[i].size)) {
> > > > +                    buf = region->mmaps[i].mmap + (data_offset -
> > > > +                                                   region->mmaps[i].offset);
> > > > +                    buffer_mmaped = true;
> > > > +                    break;
> > > > +                }
> > > > +            }
> > > > +        }
> > > > +
> > > > +        if (!buffer_mmaped) {
> > > > +            buf = g_malloc0(data_size);
> > > > +            ret = pread(vbasedev->fd, buf, data_size,
> > > > +                        region->fd_offset + data_offset);
> > > > +            if (ret != data_size) {
> > > > +                error_report("Failed to get migration data %d", ret);
> > > > +                g_free(buf);
> > > > +                return -EINVAL;
> > > > +            }
> > > > +        }
> > > > +
> > > > +        qemu_put_be64(f, data_size);
> > > > +        qemu_put_buffer(f, buf, data_size);
> > > > +
> > > > +        if (!buffer_mmaped) {
> > > > +            g_free(buf);
> > > > +        }
> > > > +        migration->pending_bytes -= data_size;
> > > > +    } else {
> > > > +        qemu_put_be64(f, data_size);
> > > > +    }
> > > > +
> > > > +    ret = qemu_file_get_error(f);
> > > > +
> > > > +    return data_size;
> > > > +}
> > > > +
> > > > +static int vfio_update_pending(VFIODevice *vbasedev)
> > > > +{
> > > > +    VFIOMigration *migration = vbasedev->migration;
> > > > +    VFIORegion *region = &migration->region.buffer;
> > > > +    uint64_t pending_bytes = 0;
> > > > +    int ret;
> > > > +
> > > > +    ret = pread(vbasedev->fd, &pending_bytes, sizeof(pending_bytes),
> > > > +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> > > > +                                             pending_bytes));
> > > > +    if ((ret < 0) || (ret != sizeof(pending_bytes))) {
> > > > +        error_report("Failed to get pending bytes %d", ret);
> > > > +        migration->pending_bytes = 0;
> > > > +        return (ret < 0) ? ret : -EINVAL;
> > > > +    }
> > > > +
> > > > +    migration->pending_bytes = pending_bytes;
> > > > +    return 0;
> > > > +}
> > > > +
> > > > +static int vfio_save_device_config_state(QEMUFile *f, void *opaque)
> > > > +{
> > > > +    VFIODevice *vbasedev = opaque;
> > > > +
> > > > +    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_CONFIG_STATE);
> > > > +
> > > > +    if (vbasedev->type == VFIO_DEVICE_TYPE_PCI) {
> > > > +        vfio_pci_save_config(vbasedev, f);
> > > > +    }
> > > > +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> > > > +
> > > > +    return qemu_file_get_error(f);
> > > > +}
> > > > +
> > > >  /* ---------------------------------------------------------------------- */
> > > >  
> > > >  static int vfio_save_setup(QEMUFile *f, void *opaque)
> > > > @@ -163,9 +268,116 @@ static void vfio_save_cleanup(void *opaque)
> > > >      }
> > > >  }
> > > >  
> > > > +static void vfio_save_pending(QEMUFile *f, void *opaque,
> > > > +                              uint64_t threshold_size,
> > > > +                              uint64_t *res_precopy_only,
> > > > +                              uint64_t *res_compatible,
> > > > +                              uint64_t *res_postcopy_only)
> > > > +{
> > > > +    VFIODevice *vbasedev = opaque;
> > > > +    VFIOMigration *migration = vbasedev->migration;
> > > > +    int ret;
> > > > +
> > > > +    ret = vfio_update_pending(vbasedev);
> > > > +    if (ret) {
> > > > +        return;
> > > > +    }
> > > > +
> > > > +    if (vbasedev->device_state & VFIO_DEVICE_STATE_RUNNING) {
> > > > +        *res_precopy_only += migration->pending_bytes;
> > > > +    } else {
> > > > +        *res_postcopy_only += migration->pending_bytes;
> > > > +    }
> > by definition,
> > - res_precopy_only is for data which must be migrated in precopy phase
> >    or in stopped state, in other words - before target vm start
> > - res_postcopy_only is for data which must be migrated in postcopy phase
> >   or in stopped state, in other words - after source vm stop
> > So, we can only determining data type by the nature of the data. i.e.
> > if it is device state data which must be copied after source vm stop and
> > before target vm start, it belongs to res_precopy_only.
> > 
> > It is not right to determining data type by current device state.
> 
> Right; you can determine it by whether postcopy is *enabled* or not.
> However, since this isn't ready for postcopy yet anyway, just add it to
> res_postcopy_only all the time;  then you can come back to postcopy
hi Dave,
do you mean "add it to res_precopy_only all the time" here?

Thanks
Yan


> later.
> 
> Dave
> 
> > Thanks
> > Yan
> > 
> > > > +    *res_compatible += 0;
> > > > +}
> > > > +
> > > > +static int vfio_save_iterate(QEMUFile *f, void *opaque)
> > > > +{
> > > > +    VFIODevice *vbasedev = opaque;
> > > > +    VFIOMigration *migration = vbasedev->migration;
> > > > +    int ret;
> > > > +
> > > > +    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
> > > > +
> > > > +    qemu_mutex_lock(&migration->lock);
> > > > +    ret = vfio_save_buffer(f, vbasedev);
> > > > +    qemu_mutex_unlock(&migration->lock);
> > > > +
> > > > +    if (ret < 0) {
> > > > +        error_report("vfio_save_buffer failed %s",
> > > > +                     strerror(errno));
> > > > +        return ret;
> > > > +    }
> > > > +
> > > > +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> > > > +
> > > > +    ret = qemu_file_get_error(f);
> > > > +    if (ret) {
> > > > +        return ret;
> > > > +    }
> > > > +
> > > > +    return ret;
> > > > +}
> > > > +
> > > > +static int vfio_save_complete_precopy(QEMUFile *f, void *opaque)
> > > > +{
> > > > +    VFIODevice *vbasedev = opaque;
> > > > +    VFIOMigration *migration = vbasedev->migration;
> > > > +    int ret;
> > > > +
> > > > +    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_SAVING);
> > > > +    if (ret) {
> > > > +        error_report("Failed to set state STOP and SAVING");
> > > > +        return ret;
> > > > +    }
> > > > +
> > > > +    ret = vfio_save_device_config_state(f, opaque);
> > > > +    if (ret) {
> > > > +        return ret;
> > > > +    }
> > > > +
> > > > +    ret = vfio_update_pending(vbasedev);
> > > > +    if (ret) {
> > > > +        return ret;
> > > > +    }
> > > > +
> > > > +    while (migration->pending_bytes > 0) {
> > > > +        qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
> > > > +        ret = vfio_save_buffer(f, vbasedev);
> > > > +        if (ret < 0) {
> > > > +            error_report("Failed to save buffer");
> > > > +            return ret;
> > > > +        } else if (ret == 0) {
> > > > +            break;
> > > > +        }
> > > > +
> > > > +        ret = vfio_update_pending(vbasedev);
> > > > +        if (ret) {
> > > > +            return ret;
> > > > +        }
> > > > +    }
> > > > +
> > > > +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> > > > +
> > > > +    ret = qemu_file_get_error(f);
> > > > +    if (ret) {
> > > > +        return ret;
> > > > +    }
> > > > +
> > > > +    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_STOPPED);
> > > > +    if (ret) {
> > > > +        error_report("Failed to set state STOPPED");
> > > > +        return ret;
> > > > +    }
> > > > +    return ret;
> > > > +}
> > > > +
> > > >  static SaveVMHandlers savevm_vfio_handlers = {
> > > >      .save_setup = vfio_save_setup,
> > > >      .save_cleanup = vfio_save_cleanup,
> > > > +    .save_live_pending = vfio_save_pending,
> > > > +    .save_live_iterate = vfio_save_iterate,
> > > > +    .save_live_complete_precopy = vfio_save_complete_precopy,
> > > >  };
> > > >  
> > > >  /* ---------------------------------------------------------------------- */
> > > > -- 
> > > > 2.7.0
> > > > 
> > > 
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
diff mbox series

Patch

diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index fe0887c27664..0a2f30872316 100644
--- a/hw/vfio/migration.c
+++ b/hw/vfio/migration.c
@@ -107,6 +107,111 @@  static int vfio_migration_set_state(VFIODevice *vbasedev, uint32_t state)
     return 0;
 }
 
+static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev)
+{
+    VFIOMigration *migration = vbasedev->migration;
+    VFIORegion *region = &migration->region.buffer;
+    uint64_t data_offset = 0, data_size = 0;
+    int ret;
+
+    ret = pread(vbasedev->fd, &data_offset, sizeof(data_offset),
+                region->fd_offset + offsetof(struct vfio_device_migration_info,
+                                             data_offset));
+    if (ret != sizeof(data_offset)) {
+        error_report("Failed to get migration buffer data offset %d",
+                     ret);
+        return -EINVAL;
+    }
+
+    ret = pread(vbasedev->fd, &data_size, sizeof(data_size),
+                region->fd_offset + offsetof(struct vfio_device_migration_info,
+                                             data_size));
+    if (ret != sizeof(data_size)) {
+        error_report("Failed to get migration buffer data size %d",
+                     ret);
+        return -EINVAL;
+    }
+
+    if (data_size > 0) {
+        void *buf = NULL;
+        bool buffer_mmaped = false;
+
+        if (region->mmaps) {
+            int i;
+
+            for (i = 0; i < region->nr_mmaps; i++) {
+                if ((data_offset >= region->mmaps[i].offset) &&
+                    (data_offset < region->mmaps[i].offset +
+                                   region->mmaps[i].size)) {
+                    buf = region->mmaps[i].mmap + (data_offset -
+                                                   region->mmaps[i].offset);
+                    buffer_mmaped = true;
+                    break;
+                }
+            }
+        }
+
+        if (!buffer_mmaped) {
+            buf = g_malloc0(data_size);
+            ret = pread(vbasedev->fd, buf, data_size,
+                        region->fd_offset + data_offset);
+            if (ret != data_size) {
+                error_report("Failed to get migration data %d", ret);
+                g_free(buf);
+                return -EINVAL;
+            }
+        }
+
+        qemu_put_be64(f, data_size);
+        qemu_put_buffer(f, buf, data_size);
+
+        if (!buffer_mmaped) {
+            g_free(buf);
+        }
+        migration->pending_bytes -= data_size;
+    } else {
+        qemu_put_be64(f, data_size);
+    }
+
+    ret = qemu_file_get_error(f);
+
+    return data_size;
+}
+
+static int vfio_update_pending(VFIODevice *vbasedev)
+{
+    VFIOMigration *migration = vbasedev->migration;
+    VFIORegion *region = &migration->region.buffer;
+    uint64_t pending_bytes = 0;
+    int ret;
+
+    ret = pread(vbasedev->fd, &pending_bytes, sizeof(pending_bytes),
+                region->fd_offset + offsetof(struct vfio_device_migration_info,
+                                             pending_bytes));
+    if ((ret < 0) || (ret != sizeof(pending_bytes))) {
+        error_report("Failed to get pending bytes %d", ret);
+        migration->pending_bytes = 0;
+        return (ret < 0) ? ret : -EINVAL;
+    }
+
+    migration->pending_bytes = pending_bytes;
+    return 0;
+}
+
+static int vfio_save_device_config_state(QEMUFile *f, void *opaque)
+{
+    VFIODevice *vbasedev = opaque;
+
+    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_CONFIG_STATE);
+
+    if (vbasedev->type == VFIO_DEVICE_TYPE_PCI) {
+        vfio_pci_save_config(vbasedev, f);
+    }
+    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
+
+    return qemu_file_get_error(f);
+}
+
 /* ---------------------------------------------------------------------- */
 
 static int vfio_save_setup(QEMUFile *f, void *opaque)
@@ -163,9 +268,116 @@  static void vfio_save_cleanup(void *opaque)
     }
 }
 
+static void vfio_save_pending(QEMUFile *f, void *opaque,
+                              uint64_t threshold_size,
+                              uint64_t *res_precopy_only,
+                              uint64_t *res_compatible,
+                              uint64_t *res_postcopy_only)
+{
+    VFIODevice *vbasedev = opaque;
+    VFIOMigration *migration = vbasedev->migration;
+    int ret;
+
+    ret = vfio_update_pending(vbasedev);
+    if (ret) {
+        return;
+    }
+
+    if (vbasedev->device_state & VFIO_DEVICE_STATE_RUNNING) {
+        *res_precopy_only += migration->pending_bytes;
+    } else {
+        *res_postcopy_only += migration->pending_bytes;
+    }
+    *res_compatible += 0;
+}
+
+static int vfio_save_iterate(QEMUFile *f, void *opaque)
+{
+    VFIODevice *vbasedev = opaque;
+    VFIOMigration *migration = vbasedev->migration;
+    int ret;
+
+    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
+
+    qemu_mutex_lock(&migration->lock);
+    ret = vfio_save_buffer(f, vbasedev);
+    qemu_mutex_unlock(&migration->lock);
+
+    if (ret < 0) {
+        error_report("vfio_save_buffer failed %s",
+                     strerror(errno));
+        return ret;
+    }
+
+    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
+
+    ret = qemu_file_get_error(f);
+    if (ret) {
+        return ret;
+    }
+
+    return ret;
+}
+
+static int vfio_save_complete_precopy(QEMUFile *f, void *opaque)
+{
+    VFIODevice *vbasedev = opaque;
+    VFIOMigration *migration = vbasedev->migration;
+    int ret;
+
+    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_SAVING);
+    if (ret) {
+        error_report("Failed to set state STOP and SAVING");
+        return ret;
+    }
+
+    ret = vfio_save_device_config_state(f, opaque);
+    if (ret) {
+        return ret;
+    }
+
+    ret = vfio_update_pending(vbasedev);
+    if (ret) {
+        return ret;
+    }
+
+    while (migration->pending_bytes > 0) {
+        qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
+        ret = vfio_save_buffer(f, vbasedev);
+        if (ret < 0) {
+            error_report("Failed to save buffer");
+            return ret;
+        } else if (ret == 0) {
+            break;
+        }
+
+        ret = vfio_update_pending(vbasedev);
+        if (ret) {
+            return ret;
+        }
+    }
+
+    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
+
+    ret = qemu_file_get_error(f);
+    if (ret) {
+        return ret;
+    }
+
+    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_STOPPED);
+    if (ret) {
+        error_report("Failed to set state STOPPED");
+        return ret;
+    }
+    return ret;
+}
+
 static SaveVMHandlers savevm_vfio_handlers = {
     .save_setup = vfio_save_setup,
     .save_cleanup = vfio_save_cleanup,
+    .save_live_pending = vfio_save_pending,
+    .save_live_iterate = vfio_save_iterate,
+    .save_live_complete_precopy = vfio_save_complete_precopy,
 };
 
 /* ---------------------------------------------------------------------- */