[v4,00/13] Add migration support for VFIO device

Message ID: 1561041461-22326-1-git-send-email-kwankhede@nvidia.com

Message

Kirti Wankhede June 20, 2019, 2:37 p.m. UTC
Add migration support for VFIO device

This patch set includes the following patches:
- Define the KABI for VFIO device migration support (a rough sketch of the
  migration region header follows this list).
- Add save and restore functions for PCI configuration space.
- Add generic migration functionality for VFIO devices:
  * This patch set adds functionality only for PCI devices, but it can be
    extended to other VFIO devices.
  * Add all the basic functions required for the pre-copy, stop-and-copy and
    resume phases of migration.
  * Add a state change notifier; from that notifier function, the VFIO
    device's state change is conveyed to the VFIO device driver.
  * During the save setup phase and the resume/load setup phase, the
    migration region is queried and used to read/write VFIO device data.
  * .save_live_pending and .save_live_iterate are implemented to use QEMU's
    iteration support during the pre-copy phase.
  * In .save_live_complete_precopy, that is, in the stop-and-copy phase,
    iterate reading data from the VFIO device driver until the pending bytes
    returned by the driver reach zero.
  * Add a function to get the dirty pages bitmap for the pages used by the
    driver.
- Add vfio_listerner_log_sync to mark dirty pages.
- Make the VFIO PCI device migration capable. If the migration region is not
  provided by the driver, migration is blocked.
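
For orientation, here is a minimal sketch of the header this KABI places at
offset 0 of the migration region. The field names and layout below are
assumptions for illustration only; patch 1 carries the authoritative
definition.

    #include <linux/types.h>

    /* Hypothetical sketch of the migration region header (layout assumed
     * for illustration; see patch 1 for the real definition). */
    struct vfio_device_migration_info {
        __u32 device_state;   /* _RUNNING/_SAVING/_RESUMING flag bits */
        __u32 reserved;
        __u64 pending_bytes;  /* device data still to be read while saving */
        __u64 data_offset;    /* read-only: start of data in the region */
        __u64 data_size;      /* amount of valid data at data_offset */
    };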

Below is the flow of state changes for live migration, where the states in
brackets represent the VM state, migration state and VFIO device state as:
    (VM state, MIGRATION_STATUS, VFIO_DEVICE_STATE)

Live migration save path:
        QEMU normal running state
        (RUNNING, _NONE, _RUNNING)
                        |
    migrate_init spawns migration_thread.
    (RUNNING, _SETUP, _RUNNING|_SAVING)
    Migration thread then calls each device's .save_setup()
                        |
    (RUNNING, _ACTIVE, _RUNNING|_SAVING)
    If the device is active, get the pending bytes via .save_live_pending()
    If pending bytes >= threshold_size, call .save_live_iterate()
    VFIO device data for the pre-copy phase is copied.
    Iterate until pending bytes converge and drop below the threshold
                        |
    On migration completion, vCPUs stop and .save_live_complete_precopy is
    called for each active device. The VFIO device is then transitioned into
    the _SAVING state.
    (FINISH_MIGRATE, _DEVICE, _SAVING)
    For the VFIO device, iterate in .save_live_complete_precopy until
    pending data is 0.
    (FINISH_MIGRATE, _DEVICE, _STOPPED)
                        |
    (FINISH_MIGRATE, _COMPLETED, _STOPPED)
    Migration thread schedules the cleanup bottom half and exits

Live migration resume path:
    Incoming migration calls .load_setup for each device
    (RESTORE_VM, _ACTIVE, _STOPPED)
                        |
    For each device, .load_state is called for that device's section data
                        |
    At the end, .load_cleanup is called for each device and vCPUs are started.
                        |
        (RUNNING, _NONE, _RUNNING)
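
To connect the two flows above to code, here is a minimal sketch of how such
callbacks are registered with QEMU's SaveVMHandlers. The callback slot names
are QEMU's; the vfio_* helper names are assumptions for illustration, not
necessarily the ones used in this series.

    /* Hypothetical sketch: wiring the flows above into SaveVMHandlers. */
    static SaveVMHandlers savevm_vfio_handlers = {
        .save_setup = vfio_save_setup,             /* map/query region   */
        .save_live_pending = vfio_save_pending,    /* read pending_bytes */
        .save_live_iterate = vfio_save_iterate,    /* pre-copy rounds    */
        .save_live_complete_precopy = vfio_save_complete_precopy,
        .save_cleanup = vfio_save_cleanup,         /* unmap region       */
        .load_setup = vfio_load_setup,
        .load_state = vfio_load_state,
        .load_cleanup = vfio_load_cleanup,
    };

    /* registered once per device, e.g.: */
    register_savevm_live(NULL, "vfio", -1, 1, &savevm_vfio_handlers,
                         vbasedev);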

Note that:
- Post-copy migration is not supported.

v3 -> v4:
- Added one more bit for the _RESUMING flag, to be set explicitly.
- The data_offset field is read-only for the user space application.
- data_size is read on every iteration before reading data from the migration
  region; this removes the assumption that the data extends to the end of the
  migration region (illustrated in the sketch after this list).
- If the vendor driver supports mappable sparse regions, map those regions
  during the setup state of save/load, and similarly unmap them from the
  cleanup routines.
- Handle a race condition that caused data corruption in the migration region
  during device state save, by adding a mutex and serializing the save_buffer
  and get_dirty_pages routines.
- Skip calling the get_dirty_pages routine for mapped MMIO regions of the
  device.
- Added trace events.
- Split into multiple functional patches.
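
As a hedged illustration of the data_offset/data_size change, one save
iteration might re-read the header fields before each chunk. The helper below
and its error handling are assumptions, reusing the hypothetical header
sketched earlier, not code from this series.

    #include <errno.h>
    #include <stddef.h>    /* offsetof */
    #include <stdint.h>
    #include <unistd.h>    /* pread */

    /* Hypothetical sketch of one save iteration: re-read data_size every
     * time rather than assuming data runs to the end of the region. */
    static ssize_t vfio_read_one_chunk(int dev_fd, uint64_t region_off,
                                       void *buf, size_t buf_len)
    {
        uint64_t data_offset, data_size;

        /* data_offset is read-only for userspace; the driver sets it. */
        if (pread(dev_fd, &data_offset, sizeof(data_offset), region_off +
                  offsetof(struct vfio_device_migration_info,
                           data_offset)) < 0) {
            return -errno;
        }
        if (pread(dev_fd, &data_size, sizeof(data_size), region_off +
                  offsetof(struct vfio_device_migration_info,
                           data_size)) < 0) {
            return -errno;
        }
        if (data_size > buf_len) {
            data_size = buf_len;  /* illustration only: clamp to buffer */
        }
        /* Trapped read of the chunk itself; an mmap'd data section would
         * be copied with memcpy instead. */
        return pread(dev_fd, buf, data_size, region_off + data_offset);
    }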

v2 -> v3:
- Removed the enum of VFIO device states. Defined the VFIO device state with
  2 bits.
- Re-structured vfio_device_migration_info to keep it minimal, and defined the
  action taken on read and write access to each of its members.

v1 -> v2:
- Defined the MIGRATION region type and sub-type, which should be used with
  the region type capability.
- Re-structured vfio_device_migration_info. This structure is placed at offset
  0 of the migration region.
- Replaced ioctl with read/write for the trapped part of the migration region.
- Added support for both types of access, trapped or mmapped, for the data
  section of the region.
- Moved the PCI device functions to the pci file.
- Added iteration to get the dirty page bitmap until the bitmap for all
  requested pages is copied.

Thanks,
Kirti


Kirti Wankhede (13):
  vfio: KABI for migration interface
  vfio: Add function to unmap VFIO region
  vfio: Add save and load functions for VFIO PCI devices
  vfio: Add migration region initialization and finalize function
  vfio: Add VM state change handler to know state of VM
  vfio: Add migration state change notifier
  vfio: Register SaveVMHandlers for VFIO device
  vfio: Add save state functions to SaveVMHandlers
  vfio: Add load state functions to SaveVMHandlers
  vfio: Add function to get dirty page list
  vfio: Add vfio_listerner_log_sync to mark dirty pages
  vfio: Make vfio-pci device migration capable.
  vfio: Add trace events in migration code path

 hw/vfio/Makefile.objs         |   2 +-
 hw/vfio/common.c              |  55 +++
 hw/vfio/migration.c           | 815 ++++++++++++++++++++++++++++++++++++++++++
 hw/vfio/pci.c                 | 126 ++++++-
 hw/vfio/pci.h                 |  29 ++
 hw/vfio/trace-events          |  19 +
 include/hw/vfio/vfio-common.h |  22 ++
 linux-headers/linux/vfio.h    |  71 ++++
 8 files changed, 1132 insertions(+), 7 deletions(-)
 create mode 100644 hw/vfio/migration.c

Comments

Yan Zhao June 21, 2019, 12:25 a.m. UTC | #1
On Thu, Jun 20, 2019 at 10:37:28PM +0800, Kirti Wankhede wrote:
> [...]
> 
> Below is the flow of state changes for live migration, where the states in
> brackets represent the VM state, migration state and VFIO device state as:
>     (VM state, MIGRATION_STATUS, VFIO_DEVICE_STATE)
> 
> Live migration save path:
>         QEMU normal running state
>         (RUNNING, _NONE, _RUNNING)
>                         |
>     migrate_init spawns migration_thread.
>     (RUNNING, _SETUP, _RUNNING|_SAVING)
>     Migration thread then calls each device's .save_setup()
>                         |
>     (RUNNING, _ACTIVE, _RUNNING|_SAVING)
>     If the device is active, get the pending bytes via .save_live_pending()
>     If pending bytes >= threshold_size, call .save_live_iterate()
>     VFIO device data for the pre-copy phase is copied.
>     Iterate until pending bytes converge and drop below the threshold
>                         |
>     On migration completion, vCPUs stop and .save_live_complete_precopy is
>     called for each active device. The VFIO device is then transitioned into
>     the _SAVING state.
>     (FINISH_MIGRATE, _DEVICE, _SAVING)
>     For the VFIO device, iterate in .save_live_complete_precopy until
>     pending data is 0.
>     (FINISH_MIGRATE, _DEVICE, _STOPPED)

I suggest we also register a VMStateDescription, whose .pre_save handler
would get called after .save_live_complete_precopy in the pre-copy only case,
and before .save_live_iterate in the post-copy enabled case.
In the .pre_save handler, we can save all device state which must be copied
after the device stops in the source VM and before the device starts in the
target VM.
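
A minimal sketch of this suggestion, assuming placeholder names
(vfio_dev_pre_save and the empty field list are illustrative, not code from
this series):

    /* Hypothetical sketch: a VMStateDescription whose .pre_save runs at
     * the point described above. */
    static int vfio_dev_pre_save(void *opaque)
    {
        /* save device state here: PCI config data, page tables,
         * register state, ... */
        return 0;
    }

    static const VMStateDescription vmstate_vfio_dev = {
        .name = "vfio-dev",
        .version_id = 1,
        .minimum_version_id = 1,
        .pre_save = vfio_dev_pre_save,
        .fields = (VMStateField[]) {
            /* device state fields would go here */
            VMSTATE_END_OF_LIST()
        },
    };

    /* e.g. vmstate_register(NULL, 0, &vmstate_vfio_dev, vdev); */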

> [...]

Yan Zhao June 21, 2019, 1:24 a.m. UTC | #2
On Fri, Jun 21, 2019 at 08:25:18AM +0800, Yan Zhao wrote:
> On Thu, Jun 20, 2019 at 10:37:28PM +0800, Kirti Wankhede wrote:
> > [...]
> 
> I suggest we also register a VMStateDescription, whose .pre_save handler
> would get called after .save_live_complete_precopy in the pre-copy only
> case, and before .save_live_iterate in the post-copy enabled case.
> In the .pre_save handler, we can save all device state which must be copied
> after the device stops in the source VM and before the device starts in the
> target VM.
> 
hi
to better describe this idea:

in the pre-copy only case, the flow is

start migration --> .save_live_iterate (several rounds) --> stop source vm
--> .save_live_complete_precopy --> .pre_save --> start target vm
--> migration complete


in the post-copy enabled case, the flow is

start migration --> .save_live_iterate (several rounds) --> start post copy -->
stop source vm --> .pre_save --> start target vm --> .save_live_iterate (several rounds)
--> migration complete

Therefore, we should put the saving of device state in the .pre_save
interface rather than in .save_live_complete_precopy.
The device state includes PCI config data, page tables, register state, etc.

.save_live_iterate and .save_live_complete_precopy should then only deal with
saving dirty memory.


I know the current implementation does not support post-copy, but at least it
should not require huge changes when we decide to enable it in the future.

Thanks
Yan

Kirti Wankhede June 21, 2019, 8:02 a.m. UTC | #3
On 6/21/2019 6:54 AM, Yan Zhao wrote:
> On Fri, Jun 21, 2019 at 08:25:18AM +0800, Yan Zhao wrote:
>> On Thu, Jun 20, 2019 at 10:37:28PM +0800, Kirti Wankhede wrote:
>>> [...]
>>>     For VFIO device, iterate in  .save_live_complete_precopy  until
>>>     pending data is 0.
>>>     (FINISH_MIGRATE, _DEVICE, _STOPPED)
>>
>> I suggest we also register a VMStateDescription, whose .pre_save handler
>> would get called after .save_live_complete_precopy in the pre-copy only
>> case, and before .save_live_iterate in the post-copy enabled case.
>> In the .pre_save handler, we can save all device state which must be copied
>> after the device stops in the source VM and before the device starts in the
>> target VM.
>>
> hi
> to better describe this idea:
> 
> in the pre-copy only case, the flow is
> 
> start migration --> .save_live_iterate (several rounds) --> stop source vm
> --> .save_live_complete_precopy --> .pre_save --> start target vm
> --> migration complete
> 
> 
> in the post-copy enabled case, the flow is
> 
> start migration --> .save_live_iterate (several rounds) --> start post copy -->
> stop source vm --> .pre_save --> start target vm --> .save_live_iterate (several rounds)
> --> migration complete
> 
> Therefore, we should put the saving of device state in the .pre_save
> interface rather than in .save_live_complete_precopy.
> The device state includes PCI config data, page tables, register state, etc.
> 
> .save_live_iterate and .save_live_complete_precopy should then only deal
> with saving dirty memory.
> 

The vendor driver can decide when to save device state depending on the VFIO
device state set by the user. The vendor driver doesn't have to depend on
which callback function QEMU or the user application calls. In the pre-copy
case, save_live_complete_precopy sets the VFIO device state to
VFIO_DEVICE_STATE_SAVING, which means vCPUs are stopped and the vendor driver
should save all device state.
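
For illustration, a hedged sketch of that state transition as a write to the
migration region header. This reuses the hypothetical layout and includes
from the cover letter sketches; the helper name is an assumption.

    /* Hypothetical sketch: report that vCPUs are stopped by writing the
     * device_state field of the migration region header. */
    static int vfio_set_device_state(int dev_fd, uint64_t region_off,
                                     uint32_t state /* e.g. _SAVING only */)
    {
        if (pwrite(dev_fd, &state, sizeof(state), region_off +
                   offsetof(struct vfio_device_migration_info,
                            device_state)) != sizeof(state)) {
            return -errno;
        }
        return 0;
    }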

> 
> I know the current implementation does not support post-copy, but at least
> it should not require huge changes when we decide to enable it in the future.
> 

.has_postcopy and .save_live_complete_postcopy need to be implemented to
support post-copy. I think .save_live_complete_postcopy should be similar to
vfio_save_complete_precopy.
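
A hedged sketch of that addition on top of the handlers sketched in the cover
letter (vfio_save_complete_postcopy is an assumed name):

    /* Hypothetical sketch: the two extra hooks needed for post-copy. */
    static bool vfio_has_postcopy(void *opaque)
    {
        return true;    /* advertise post-copy support */
    }

    /* added to the SaveVMHandlers initializer:
     *   .has_postcopy = vfio_has_postcopy,
     *   .save_live_complete_postcopy = vfio_save_complete_postcopy,
     * with vfio_save_complete_postcopy structured like
     * vfio_save_complete_precopy. */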

Thanks,
Kirti

Yan Zhao June 21, 2019, 8:46 a.m. UTC | #4
On Fri, Jun 21, 2019 at 04:02:50PM +0800, Kirti Wankhede wrote:
> 
> 
> On 6/21/2019 6:54 AM, Yan Zhao wrote:
> > [...]
> 
> > The vendor driver can decide when to save device state depending on the
> > VFIO device state set by the user. The vendor driver doesn't have to
> > depend on which callback function QEMU or the user application calls. In
> > the pre-copy case, save_live_complete_precopy sets the VFIO device state
> > to VFIO_DEVICE_STATE_SAVING, which means vCPUs are stopped and the vendor
> > driver should save all device state.
>
when post copy stops the vCPUs and the vfio device, the vendor driver only
needs to provide device state. but how does the vendor driver know that, if
no extra interface or no extra device state is provided?

Kirti Wankhede June 21, 2019, 9:22 a.m. UTC | #5
On 6/21/2019 2:16 PM, Yan Zhao wrote:
> On Fri, Jun 21, 2019 at 04:02:50PM +0800, Kirti Wankhede wrote:
>> [...]
>>
>> The vendor driver can decide when to save device state depending on the
>> VFIO device state set by the user. The vendor driver doesn't have to
>> depend on which callback function QEMU or the user application calls. In
>> the pre-copy case, save_live_complete_precopy sets the VFIO device state
>> to VFIO_DEVICE_STATE_SAVING, which means vCPUs are stopped and the vendor
>> driver should save all device state.
>>
> when post copy stops the vCPUs and the vfio device, the vendor driver only
> needs to provide device state. but how does the vendor driver know that, if
> no extra interface or no extra device state is provided?
> 

The .save_live_complete_postcopy interface for post-copy will get called,
right?

Thanks,
Kirti

>>>
>>> I know current implementation does not support post-copy. but at least
>>> it should not require huge change when we decide to enable it in future.
>>>
>>
>> .has_postcopy and .save_live_complete_postcopy need to be implemented to
>> support post-copy. I think .save_live_complete_postcopy should be
>> similar to vfio_save_complete_precopy.
>>
>> Thanks,
>> Kirti
>>
>>> Thanks
>>> Yan
>>>
Yan Zhao June 21, 2019, 10:45 a.m. UTC | #6
On Fri, Jun 21, 2019 at 05:22:37PM +0800, Kirti Wankhede wrote:
> 
> On 6/21/2019 2:16 PM, Yan Zhao wrote:
> > [...]
> >
> > when post copy stops the vCPUs and the vfio device, the vendor driver
> > only needs to provide device state. but how does the vendor driver know
> > that, if no extra interface or no extra device state is provided?
> > 
> 
> The .save_live_complete_postcopy interface for post-copy will get called,
> right?
>
yes, but that is too late; it is only called after post-copy completes.

Dr. David Alan Gilbert June 24, 2019, 7 p.m. UTC | #7
* Kirti Wankhede (kwankhede@nvidia.com) wrote:
> 
> On 6/21/2019 2:16 PM, Yan Zhao wrote:
> > On Fri, Jun 21, 2019 at 04:02:50PM +0800, Kirti Wankhede wrote:
> > [...]
> >
> > when post copy stops the vCPUs and the vfio device, the vendor driver
> > only needs to provide device state. but how does the vendor driver know
> > that, if no extra interface or no extra device state is provided?
> > 
> 
> The .save_live_complete_postcopy interface for post-copy will get called,
> right?

That happens at the very end; I think the question here is about something
that gets called at the point where we stop iteratively sending RAM, send the
device state, and then start sending RAM on demand to the destination as it's
running. Typically we send a small set of device state (registers etc.) at
this point.

I guess there are two different postcopy cases that we need to think
about:
  a) Where the VFIO device doesn't support postcopy - it just gets
  migrated like any other device, so all its RAM must be sent
  before we flip into postcopy mode.

  b) Where the VFIO device does support postcopy - where the pages
  get sent on demand.

(b) may be tricky depending on whether your hardware can fault
on pages of your RAM that are needed but not yet transferred; but
if you can, that would make life a lot more practical on really
big VFIO devices.

Dave

--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
Yan Zhao June 26, 2019, 12:43 a.m. UTC | #8
On Tue, Jun 25, 2019 at 03:00:24AM +0800, Dr. David Alan Gilbert wrote:
> * Kirti Wankhede (kwankhede@nvidia.com) wrote:
> > 
> > 
> > On 6/21/2019 2:16 PM, Yan Zhao wrote:
> > > On Fri, Jun 21, 2019 at 04:02:50PM +0800, Kirti Wankhede wrote:
> > >>
> > >>
> > >> On 6/21/2019 6:54 AM, Yan Zhao wrote:
> > >>> On Fri, Jun 21, 2019 at 08:25:18AM +0800, Yan Zhao wrote:
> > >>>> On Thu, Jun 20, 2019 at 10:37:28PM +0800, Kirti Wankhede wrote:
> > >>>>> [cover letter and state-change flow snipped]
> > >>>>
> > >>>> I suggest we also register a VMStateDescription, whose .pre_save
> > >>>> handler would get called after .save_live_complete_precopy in the
> > >>>> pre-copy-only case, and would be called before .save_live_iterate in
> > >>>> the post-copy-enabled case.
> > >>>> In the .pre_save handler, we can save all device state which must be
> > >>>> copied after the device stops in the source VM and before the device
> > >>>> starts in the target VM.
> > >>>>
> > >>> hi
> > >>> to better describe this idea:
> > >>>
> > >>> in the pre-copy-only case, the flow is
> > >>>
> > >>> start migration --> .save_live_iterate (several rounds) --> stop source vm
> > >>> --> .save_live_complete_precopy --> .pre_save --> start target vm
> > >>> --> migration complete
> > >>>
> > >>> in the post-copy-enabled case, the flow is
> > >>>
> > >>> start migration --> .save_live_iterate (several rounds) --> start post-copy -->
> > >>> stop source vm --> .pre_save --> start target vm --> .save_live_iterate (several rounds)
> > >>> --> migration complete
> > >>>
> > >>> Therefore, we should put the saving of device state in the .pre_save
> > >>> interface rather than in .save_live_complete_precopy.
> > >>> The device state includes PCI config data, page tables, register state, etc.
> > >>>
> > >>> The .save_live_iterate and .save_live_complete_precopy callbacks should
> > >>> only deal with saving dirty memory.
> > >>>
> > >>
> > >> The vendor driver can decide when to save device state depending on the
> > >> VFIO device state set by the user. The vendor driver doesn't have to
> > >> depend on which callback function QEMU or the user application calls.
> > >> In the pre-copy case, save_live_complete_precopy sets the VFIO device
> > >> state to VFIO_DEVICE_STATE_SAVING, which means vCPUs are stopped and the
> > >> vendor driver should save all device state.
> > >>
> > > when post-copy stops vCPUs and the vfio device, the vendor driver only
> > > needs to provide device state. But how does the vendor driver know that,
> > > if no extra interface or extra device state is provided?
> > > 
> > > 
> > 
> > .save_live_complete_postcopy interface for post-copy will get called,
> > right?
> 
> That happens at the very end; I think the question here is for something
> that gets called at the point we stop iteratively sending RAM, send the
> device states and then start sending RAM on demand to the destination
> as it's running. Typically we send a small set of device state
> (registers etc) at this point.
> 
> I guess there are two different postcopy cases that we need to think
> about:
>   a) Where the VFIO device doesn't support postcopy - it just gets
>   migrated like any other device, so all its RAM must get sent
>   before we flip into postcopy mode.
> 
>   b) Where the VFIO device does support postcopy - where the pages
>   get sent on demand.
> 
> (b) may be tricky depending on whether your hardware can fault
> on pages of your RAM that are needed but not yet transferred; but
> if you can, that would make life a lot more practical on really
> big VFIO devices.
> 
> Dave
>
hi Dave,
so do you think it is good to abstract the device state data and save it in
the .pre_save callback?

Thanks
Yan

Dr. David Alan Gilbert June 28, 2019, 9:44 a.m. UTC | #9
* Yan Zhao (yan.y.zhao@intel.com) wrote:
> On Tue, Jun 25, 2019 at 03:00:24AM +0800, Dr. David Alan Gilbert wrote:
> > * Kirti Wankhede (kwankhede@nvidia.com) wrote:
> > > [snip: quoted cover letter and earlier discussion trimmed]
> >
> hi Dave,
> so do you think it is good to abstract the device state data and save it in
> the .pre_save callback?

I'm not sure we have a vmsd/pre_save in this setup?  If we did, then it's
a bit confusing, because I don't think we have any other iterative device
that also has a vmsd.

I'd have to test it, but I think you might get the device's
->save_live_complete_precopy called at the right point, just before the
postcopy switchover.  It's worth looking at.

Dave

> Thanks
> Yan
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
Yan Zhao June 28, 2019, 9:28 p.m. UTC | #10
On Fri, Jun 28, 2019 at 05:44:47PM +0800, Dr. David Alan Gilbert wrote:
> * Yan Zhao (yan.y.zhao@intel.com) wrote:
> > On Tue, Jun 25, 2019 at 03:00:24AM +0800, Dr. David Alan Gilbert wrote:
> > > [snip: quoted cover letter and earlier discussion trimmed]
> > hi Dave,
> > so do you think it is good to abstract the device state data and save it in
> > the .pre_save callback?
> 
> I'm not sure we have a vmsd/pre_save in this setup?  If we did, then it's
> a bit confusing, because I don't think we have any other iterative device
> that also has a vmsd.
Yes, I tried it. It's OK to register SaveVMHandlers and a VMStateDescription
at the same time; see the sketch below.
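
A minimal sketch of such a dual registration, assuming the
register_savevm_live()/vmstate_register() signatures of this period (the
vfio_* names and the empty .fields list are placeholders, not code from
this series):

#include "qemu/osdep.h"
#include "migration/register.h"
#include "migration/vmstate.h"

/* Hypothetical hook: runs once vCPUs are stopped on the source. */
static int vfio_dev_pre_save(void *opaque)
{
    /* collect register state etc. from the vendor driver here */
    return 0;
}

static const VMStateDescription vmstate_vfio_device = {
    .name = "vfio-device-state",
    .version_id = 1,
    .minimum_version_id = 1,
    .pre_save = vfio_dev_pre_save,
    .fields = (VMStateField[]) {
        /* the device-state blob would be described here */
        VMSTATE_END_OF_LIST()
    },
};

static void vfio_migration_register(DeviceState *dev, void *opaque,
                                    SaveVMHandlers *iterative_ops)
{
    /* the iterative hooks (save_setup/pending/iterate/...) ... */
    register_savevm_live(dev, "vfio", -1, 1, iterative_ops, opaque);
    /* ... plus a vmsd whose .pre_save fires once vCPUs are stopped */
    vmstate_register(dev, -1, &vmstate_vfio_device, opaque);
}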

> 
> I'd have to test it, but I think you might get the device's
> ->save_live_complete_precopy called at the right point, just before the
> postcopy switchover.  It's worth looking at.
> 
If an iterative device supports postcopy, then its .save_live_complete_precopy
would not get called before the postcopy switchover.
However, postcopy may need to save device-state-only data (not memory) at
that time. That's the reason I think we should also register a
VMStateDescription, as its .pre_save handler would get called at that time.
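
As a sketch of that idea, the .pre_save handler could simply tell the
vendor driver that vCPUs are stopped by updating the device_state field
of the migration region; vfio_migration_set_state() below is a
hypothetical helper wrapping that region write, and only
VFIO_DEVICE_STATE_SAVING comes from this series' KABI:

/* Hypothetical .pre_save: signal "vCPUs stopped, save everything".
 * _SAVING without _RUNNING is the stop-and-copy style of save. */
static int vfio_dev_pre_save(void *opaque)
{
    VFIODevice *vbasedev = opaque;

    return vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_SAVING);
}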

Thanks
Yan

> Dave
> 