diff mbox series

[v3,5/7] implementation of vm_start() BH

Message ID 20201119125940.20017-6-andrey.gruzdev@virtuozzo.com (mailing list archive)
State New, archived
Headers show
Series UFFD write-tracking migration/snapshots | expand

Commit Message

Andrey Gruzdev Nov. 19, 2020, 12:59 p.m. UTC
To avoid saving updated versions of memory pages we need
to start tracking RAM writes before we resume operation of
vCPUs. This sequence is especially critical for virtio device
backends whose VQs are mapped to main memory and accessed
directly, not through MMIO callbacks.

One problem is that the vm_start() routine invokes state
change notifier callbacks directly. Virtio drivers sync and
flush their VQs in those notifier routines. Since we poll
UFFD and process faults on the same thread, the thread
deadlocks in vm_start() if we try to call it from the
migration thread.

The solution is to call ram_write_tracking_start() directly
from the migration thread and then schedule a BH for vm_start().

Signed-off-by: Andrey Gruzdev <andrey.gruzdev@virtuozzo.com>
---
 migration/migration.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

Comments

Peter Xu Nov. 19, 2020, 6:46 p.m. UTC | #1
On Thu, Nov 19, 2020 at 03:59:38PM +0300, Andrey Gruzdev wrote:
> To avoid saving updated versions of memory pages we need
> to start tracking RAM writes before we resume operation of
> vCPUs. This sequence is especially critical for virtio device
> backends whose VQs are mapped to main memory and accessed
> directly, not through MMIO callbacks.
> 
> One problem is that the vm_start() routine invokes state
> change notifier callbacks directly. Virtio drivers sync and
> flush their VQs in those notifier routines. Since we poll
> UFFD and process faults on the same thread, the thread
> deadlocks in vm_start() if we try to call it from the
> migration thread.

There's a nice comment about this in the previous patch, before the bottom half
is created; thanks, that's helpful.  Though IMHO this patch can be squashed
directly into the previous one, since the comment there is confusing when
nothing is done about it yet.
Andrey Gruzdev Nov. 20, 2020, 11:13 a.m. UTC | #2
On 19.11.2020 21:46, Peter Xu wrote:
> On Thu, Nov 19, 2020 at 03:59:38PM +0300, Andrey Gruzdev wrote:
>> To avoid saving updated versions of memory pages we need
>> to start tracking RAM writes before we resume operation of
>> vCPUs. This sequence is especially critical for virtio device
>> backends whose VQs are mapped to main memory and accessed
>> directly, not through MMIO callbacks.
>>
>> One problem is that the vm_start() routine invokes state
>> change notifier callbacks directly. Virtio drivers sync and
>> flush their VQs in those notifier routines. Since we poll
>> UFFD and process faults on the same thread, the thread
>> deadlocks in vm_start() if we try to call it from the
>> migration thread.
> 
> There's a nice comment about this in the previous patch, before the bottom half
> is created; thanks, that's helpful.  Though IMHO this patch can be squashed
> directly into the previous one, since the comment there is confusing when
> nothing is done about it yet.
> 

Yes, I agree, it's better to squash this small commit.

Patch

diff --git a/migration/migration.c b/migration/migration.c
index 158e5441ec..dba388f8bd 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -3716,7 +3716,13 @@  static void *migration_thread(void *opaque)
 
 static void wt_migration_vm_start_bh(void *opaque)
 {
-    /* TODO: implement */
+    MigrationState *s = opaque;
+
+    qemu_bh_delete(s->wt_vm_start_bh);
+    s->wt_vm_start_bh = NULL;
+
+    vm_start();
+    s->downtime = qemu_clock_get_ms(QEMU_CLOCK_REALTIME) - s->downtime_start;
 }
 
 /*