[Bug Report][RFC 0/1] block: fix failing assert on paused VM migration

Message ID 20240924125611.664315-1-andrey.drobyshev@virtuozzo.com

Andrey Drobyshev Sept. 24, 2024, 12:56 p.m. UTC
There's a bug (a failing assert) which is reproduced during migration of
a paused VM.  I am able to reproduce it on a test setup with 2 nodes and a
shared NFS export, with the VM's disk on that share.

root@fedora40-1-vm:~# virsh domblklist alma8-vm
 Target   Source
------------------------------------------
 sda      /mnt/shared/images/alma8.qcow2

root@fedora40-1-vm:~# df -Th /mnt/shared
Filesystem          Type  Size  Used Avail Use% Mounted on
127.0.0.1:/srv/nfsd nfs4   63G   16G   48G  25% /mnt/shared

On the 1st node:

root@fedora40-1-vm:~# virsh start alma8-vm ; virsh suspend alma8-vm
root@fedora40-1-vm:~# virsh migrate --compressed --p2p --persistent --undefinesource --live alma8-vm qemu+ssh://fedora40-2-vm/system

Then on the 2nd node:

root@fedora40-2-vm:~# virsh migrate --compressed --p2p --persistent --undefinesource --live alma8-vm qemu+ssh://fedora40-1-vm/system
error: operation failed: domain is not running

root@fedora40-2-vm:~# tail -3 /var/log/libvirt/qemu/alma8-vm.log
2024-09-19 13:53:33.336+0000: initiating migration
qemu-system-x86_64: ../block.c:6976: int bdrv_inactivate_recurse(BlockDriverState *): Assertion `!(bs->open_flags & BDRV_O_INACTIVE)' failed.
2024-09-19 13:53:42.991+0000: shutting down, reason=crashed

Backtrace:

(gdb) bt
#0  0x00007f7eaa2f1664 in __pthread_kill_implementation () at /lib64/libc.so.6
#1  0x00007f7eaa298c4e in raise () at /lib64/libc.so.6
#2  0x00007f7eaa280902 in abort () at /lib64/libc.so.6
#3  0x00007f7eaa28081e in __assert_fail_base.cold () at /lib64/libc.so.6
#4  0x00007f7eaa290d87 in __assert_fail () at /lib64/libc.so.6
#5  0x0000563c38b95eb8 in bdrv_inactivate_recurse (bs=0x563c3b6c60c0) at ../block.c:6976
#6  0x0000563c38b95aeb in bdrv_inactivate_all () at ../block.c:7038
#7  0x0000563c3884d354 in qemu_savevm_state_complete_precopy_non_iterable (f=0x563c3b700c20, in_postcopy=false, inactivate_disks=true)
    at ../migration/savevm.c:1571
#8  0x0000563c3884dc1a in qemu_savevm_state_complete_precopy (f=0x563c3b700c20, iterable_only=false, inactivate_disks=true) at ../migration/savevm.c:1631
#9  0x0000563c3883a340 in migration_completion_precopy (s=0x563c3b4d51f0, current_active_state=<optimized out>) at ../migration/migration.c:2780
#10 migration_completion (s=0x563c3b4d51f0) at ../migration/migration.c:2844
#11 migration_iteration_run (s=0x563c3b4d51f0) at ../migration/migration.c:3270
#12 migration_thread (opaque=0x563c3b4d51f0) at ../migration/migration.c:3536
#13 0x0000563c38dbcf14 in qemu_thread_start (args=0x563c3c2d5bf0) at ../util/qemu-thread-posix.c:541
#14 0x00007f7eaa2ef6d7 in start_thread () at /lib64/libc.so.6
#15 0x00007f7eaa373414 in clone () at /lib64/libc.so.6

What happens here is that after the 1st migration the BDS related to the HDD
remains inactive, as the VM is still paused.  Then, when we initiate the 2nd
migration, bdrv_inactivate_all() attempts to set the BDRV_O_INACTIVE flag on
that node again, but the flag is already set, so the assert fails.
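
For illustration only, here is a minimal standalone C model of that failure
mode.  It is not QEMU code: toy_bds, toy_inactivate and the flag value are
made up for this sketch, only the assert mirrors the one quoted in the log
above.

#include <assert.h>
#include <stdio.h>

/* Toy stand-in for BlockDriverState; only open_flags matters here.
 * The flag value is arbitrary for this model, not QEMU's definition. */
#define BDRV_O_INACTIVE 0x1

struct toy_bds {
    int open_flags;
};

/* Models bdrv_inactivate_recurse(): the node must not be inactive yet. */
static void toy_inactivate(struct toy_bds *bs)
{
    assert(!(bs->open_flags & BDRV_O_INACTIVE));
    bs->open_flags |= BDRV_O_INACTIVE;
}

int main(void)
{
    struct toy_bds hdd = { .open_flags = 0 };

    toy_inactivate(&hdd);          /* 1st migration: node goes inactive */
    printf("first inactivation: ok\n");

    /* VM stays paused, so nothing ever clears the flag on this side... */

    toy_inactivate(&hdd);          /* 2nd migration: assertion fails, abort() */
    return 0;
}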

The attached patch, which simply skips setting the flag if it's already set,
is more of a kludge than a clean solution.  Should we instead use more
sophisticated logic which allows some of the nodes to be in the inactive
state prior to migration, and takes them into account during
bdrv_inactivate_all()?  Comments would be appreciated.
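
A rough sketch of that "skip if already inactive" idea, expressed against the
toy model above (this is only the idea, under the assumption that an
already-inactive node can be treated as a no-op; it is not the actual diff):

/* Tolerant variant: treat an already-inactive node as a no-op success
 * instead of asserting. */
static int toy_inactivate_tolerant(struct toy_bds *bs)
{
    if (bs->open_flags & BDRV_O_INACTIVE) {
        return 0;               /* already inactive: nothing to do */
    }
    bs->open_flags |= BDRV_O_INACTIVE;
    return 0;
}

Whether such a silent skip is acceptable, or whether nodes that were already
inactive before migration need to be tracked and handled explicitly in
bdrv_inactivate_all(), is the open question above.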

Andrey

Andrey Drobyshev (1):
  block: do not fail when inactivating node which is inactive

 block.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)