[3/3] iotests: Test external snapshot with VM state

Message ID 20191217145939.5537-4-kwolf@redhat.com
State New
Series
  • block: Fix external snapshot with VM state

Commit Message

Kevin Wolf Dec. 17, 2019, 2:59 p.m. UTC
This tests creating an external snapshot with VM state (which results in
an active overlay over an inactive backing file, which is also the root
node of an inactive BlockBackend), re-activating the images and
performing some operations to test that the re-activation worked as
intended.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 tests/qemu-iotests/280     | 83 ++++++++++++++++++++++++++++++++++++++
 tests/qemu-iotests/280.out | 50 +++++++++++++++++++++++
 tests/qemu-iotests/group   |  1 +
 3 files changed, 134 insertions(+)
 create mode 100755 tests/qemu-iotests/280
 create mode 100644 tests/qemu-iotests/280.out

Comments

Max Reitz Dec. 19, 2019, 2:26 p.m. UTC | #1
On 17.12.19 15:59, Kevin Wolf wrote:
> This tests creating an external snapshot with VM state (which results in
> an active overlay over an inactive backing file, which is also the root
> node of an inactive BlockBackend), re-activating the images and
> performing some operations to test that the re-activation worked as
> intended.
> 
> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> ---
>  tests/qemu-iotests/280     | 83 ++++++++++++++++++++++++++++++++++++++
>  tests/qemu-iotests/280.out | 50 +++++++++++++++++++++++
>  tests/qemu-iotests/group   |  1 +
>  3 files changed, 134 insertions(+)
>  create mode 100755 tests/qemu-iotests/280
>  create mode 100644 tests/qemu-iotests/280.out

[...]

> diff --git a/tests/qemu-iotests/280.out b/tests/qemu-iotests/280.out
> new file mode 100644
> index 0000000000..5d382faaa8
> --- /dev/null
> +++ b/tests/qemu-iotests/280.out
> @@ -0,0 +1,50 @@
> +Formatting 'TEST_DIR/PID-base', fmt=qcow2 size=67108864 cluster_size=65536 lazy_refcounts=off refcount_bits=16
> +
> +=== Launch VM ===
> +Enabling migration QMP events on VM...
> +{"return": {}}
> +
> +=== Migrate to file ===
> +{"execute": "migrate", "arguments": {"uri": "exec:cat > /dev/null"}}
> +{"return": {}}
> +{"data": {"status": "setup"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
> +{"data": {"status": "active"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
> +{"data": {"status": "completed"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
> +
> +VM is now stopped:
> +completed
> +{"execute": "query-status", "arguments": {}}
> +{"return": {"running": false, "singlestep": false, "status": "postmigrate"}}

Hmmm, I get a finish-migrate status here (on tmpfs)...

Max
Kevin Wolf Dec. 19, 2019, 3:47 p.m. UTC | #2
Am 19.12.2019 um 15:26 hat Max Reitz geschrieben:
> On 17.12.19 15:59, Kevin Wolf wrote:
> > This tests creating an external snapshot with VM state (which results in
> > an active overlay over an inactive backing file, which is also the root
> > node of an inactive BlockBackend), re-activating the images and
> > performing some operations to test that the re-activation worked as
> > intended.
> > 
> > Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> 
> [...]
> 
> > diff --git a/tests/qemu-iotests/280.out b/tests/qemu-iotests/280.out
> > new file mode 100644
> > index 0000000000..5d382faaa8
> > --- /dev/null
> > +++ b/tests/qemu-iotests/280.out
> > @@ -0,0 +1,50 @@
> > +Formatting 'TEST_DIR/PID-base', fmt=qcow2 size=67108864 cluster_size=65536 lazy_refcounts=off refcount_bits=16
> > +
> > +=== Launch VM ===
> > +Enabling migration QMP events on VM...
> > +{"return": {}}
> > +
> > +=== Migrate to file ===
> > +{"execute": "migrate", "arguments": {"uri": "exec:cat > /dev/null"}}
> > +{"return": {}}
> > +{"data": {"status": "setup"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
> > +{"data": {"status": "active"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
> > +{"data": {"status": "completed"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
> > +
> > +VM is now stopped:
> > +completed
> > +{"execute": "query-status", "arguments": {}}
> > +{"return": {"running": false, "singlestep": false, "status": "postmigrate"}}
> 
> Hmmm, I get a finish-migrate status here (on tmpfs)...

Dave, is it intentional that the "completed" migration event is emitted
while we are still in finish-migration rather than postmigrate?

I guess we could change wait_migration() in qemu-iotests to wait for the
postmigrate state rather than the "completed" event, but maybe it would
be better to change the migration code to avoid similar races in other
QMP clients.
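The former option might look like the following minimal sketch (not the actual iotests helper; `query_status` stands in for something like `lambda: vm.qmp('query-status')`): poll the runstate until it reaches "postmigrate" instead of returning on the "completed" event.

```python
import time

def wait_for_runstate(query_status, target='postmigrate',
                      timeout=30.0, interval=0.1):
    # Poll a QMP query-status callable until the runstate matches
    # 'target'; raise on timeout instead of hanging the test forever.
    deadline = time.time() + timeout
    status = None
    while time.time() < deadline:
        status = query_status()['return']['status']
        if status == target:
            return status
        time.sleep(interval)
    raise TimeoutError('runstate %r not reached (last seen: %r)'
                       % (target, status))
```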

Kevin
Dr. David Alan Gilbert Jan. 2, 2020, 1:25 p.m. UTC | #3
* Kevin Wolf (kwolf@redhat.com) wrote:
> Am 19.12.2019 um 15:26 hat Max Reitz geschrieben:
> > On 17.12.19 15:59, Kevin Wolf wrote:
> > > This tests creating an external snapshot with VM state (which results in
> > > an active overlay over an inactive backing file, which is also the root
> > > node of an inactive BlockBackend), re-activating the images and
> > > performing some operations to test that the re-activation worked as
> > > intended.
> > > 
> > > Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> > 
> > [...]
> > 
> > > diff --git a/tests/qemu-iotests/280.out b/tests/qemu-iotests/280.out
> > > new file mode 100644
> > > index 0000000000..5d382faaa8
> > > --- /dev/null
> > > +++ b/tests/qemu-iotests/280.out
> > > @@ -0,0 +1,50 @@
> > > +Formatting 'TEST_DIR/PID-base', fmt=qcow2 size=67108864 cluster_size=65536 lazy_refcounts=off refcount_bits=16
> > > +
> > > +=== Launch VM ===
> > > +Enabling migration QMP events on VM...
> > > +{"return": {}}
> > > +
> > > +=== Migrate to file ===
> > > +{"execute": "migrate", "arguments": {"uri": "exec:cat > /dev/null"}}
> > > +{"return": {}}
> > > +{"data": {"status": "setup"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
> > > +{"data": {"status": "active"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
> > > +{"data": {"status": "completed"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
> > > +
> > > +VM is now stopped:
> > > +completed
> > > +{"execute": "query-status", "arguments": {}}
> > > +{"return": {"running": false, "singlestep": false, "status": "postmigrate"}}
> > 
> > Hmmm, I get a finish-migrate status here (on tmpfs)...
> 
> Dave, is it intentional that the "completed" migration event is emitted
> while we are still in finish-migration rather than postmigrate?

Yes, it looks like it; it's the migration state machine hitting
COMPLETED that then _causes_ the runstate transition to POSTMIGRATE.

static void migration_iteration_finish(MigrationState *s)
{
    /* If we enabled cpu throttling for auto-converge, turn it off. */
    cpu_throttle_stop();

    qemu_mutex_lock_iothread();
    switch (s->state) {
    case MIGRATION_STATUS_COMPLETED:
        migration_calculate_complete(s);
        runstate_set(RUN_STATE_POSTMIGRATE);
        break;

then there are a bunch of error cases where if it landed in
FAILED/CANCELLED etc then we either restart the VM or also go to
POSTMIGRATE.

> I guess we could change wait_migration() in qemu-iotests to wait for the
> postmigrate state rather than the "completed" event, but maybe it would
> be better to change the migration code to avoid similar races in other
> QMP clients.

Given that the migration state machine is driving the runstate state
machine I think it currently makes sense internally;  (although I don't
think it's documented to be in that order or tested to be, which we
might want to fix).

Looking at 234 and 262, it looks like you're calling wait_migration on
both the source and dest; I don't think the dest will see the
POSTMIGRATE.  Also note that, depending on what you're trying to do, with
postcopy you'll be running on the destination before you see COMPLETED.

Waiting for the destination to leave 'inmigrate' state is probably
the best strategy; then wait for the source to be in postmigrate.
You can cause early exits if you see transitions to 'FAILED' - but
actually the destination will likely quit in that case; so it should
be much rarer for you to hit a timeout on a failed migration.
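That strategy can be sketched as a small polling loop (a hypothetical helper, not existing iotests code; `src_status` and `dst_status` are assumed callables returning each VM's current runstate string, e.g. wrapping `vm.qmp('query-status')`):

```python
import time

def wait_migration_settled(src_status, dst_status,
                           timeout=30.0, interval=0.1):
    # Wait until the destination has left 'inmigrate' and the source
    # has reached 'postmigrate'; raise if that doesn't happen in time.
    deadline = time.time() + timeout
    src = dst = None
    while time.time() < deadline:
        src, dst = src_status(), dst_status()
        if dst != 'inmigrate' and src == 'postmigrate':
            return src, dst
        time.sleep(interval)
    raise TimeoutError('migration did not settle (src=%r, dst=%r)'
                       % (src, dst))
```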

Dave


> Kevin


--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
Kevin Wolf Jan. 6, 2020, 4:06 p.m. UTC | #4
Am 02.01.2020 um 14:25 hat Dr. David Alan Gilbert geschrieben:
> * Kevin Wolf (kwolf@redhat.com) wrote:
> > Am 19.12.2019 um 15:26 hat Max Reitz geschrieben:
> > > On 17.12.19 15:59, Kevin Wolf wrote:
> > > > This tests creating an external snapshot with VM state (which results in
> > > > an active overlay over an inactive backing file, which is also the root
> > > > node of an inactive BlockBackend), re-activating the images and
> > > > performing some operations to test that the re-activation worked as
> > > > intended.
> > > > 
> > > > Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> > > 
> > > [...]
> > > 
> > > > diff --git a/tests/qemu-iotests/280.out b/tests/qemu-iotests/280.out
> > > > new file mode 100644
> > > > index 0000000000..5d382faaa8
> > > > --- /dev/null
> > > > +++ b/tests/qemu-iotests/280.out
> > > > @@ -0,0 +1,50 @@
> > > > +Formatting 'TEST_DIR/PID-base', fmt=qcow2 size=67108864 cluster_size=65536 lazy_refcounts=off refcount_bits=16
> > > > +
> > > > +=== Launch VM ===
> > > > +Enabling migration QMP events on VM...
> > > > +{"return": {}}
> > > > +
> > > > +=== Migrate to file ===
> > > > +{"execute": "migrate", "arguments": {"uri": "exec:cat > /dev/null"}}
> > > > +{"return": {}}
> > > > +{"data": {"status": "setup"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
> > > > +{"data": {"status": "active"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
> > > > +{"data": {"status": "completed"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
> > > > +
> > > > +VM is now stopped:
> > > > +completed
> > > > +{"execute": "query-status", "arguments": {}}
> > > > +{"return": {"running": false, "singlestep": false, "status": "postmigrate"}}
> > > 
> > > Hmmm, I get a finish-migrate status here (on tmpfs)...
> > 
> > Dave, is it intentional that the "completed" migration event is emitted
> > while we are still in finish-migration rather than postmigrate?
> 
> Yes, it looks like it; it's the migration state machine hitting
> COMPLETED that then _causes_ the runstate transition to POSTMIGRATE.
> 
> static void migration_iteration_finish(MigrationState *s)
> {
>     /* If we enabled cpu throttling for auto-converge, turn it off. */
>     cpu_throttle_stop();
> 
>     qemu_mutex_lock_iothread();
>     switch (s->state) {
>     case MIGRATION_STATUS_COMPLETED:
>         migration_calculate_complete(s);
>         runstate_set(RUN_STATE_POSTMIGRATE);
>         break;
> 
> then there are a bunch of error cases where if it landed in
> FAILED/CANCELLED etc then we either restart the VM or also go to
> POSTMIGRATE.

Yes, I read the code. My question was more if there is a reason why we
want things to look like this in the external interface.

I just thought that it was confusing that migration is already called
completed when it will still change the runstate. But I guess the
opposite could be confusing as well (if we're in postmigrate, why should
the migration status still change?)

> > I guess we could change wait_migration() in qemu-iotests to wait for the
> > postmigrate state rather than the "completed" event, but maybe it would
> > be better to change the migration code to avoid similar races in other
> > QMP clients.
> 
> Given that the migration state machine is driving the runstate state
> machine I think it currently makes sense internally;  (although I don't
> think it's documented to be in that order or tested to be, which we
> might want to fix).

In any case, I seem to remember that it's inconsistent between source
and destination. On one side, the migration status is updated first, on
the other side the runstate is updated first.

> Looking at 234 and 262, it looks like you're calling wait_migration on
> both the source and dest; I don't think the dest will see the
> POSTMIGRATE.  Also note that, depending on what you're trying to do, with
> postcopy you'll be running on the destination before you see COMPLETED.
> 
> Waiting for the destination to leave 'inmigrate' state is probably
> the best strategy; then wait for the source to be in postmigrate.
> You can cause early exits if you see transitions to 'FAILED' - but
> actually the destination will likely quit in that case; so it should
> be much rarer for you to hit a timeout on a failed migration.

Commit 37ff7d70 changed it to wait for "postmigrate" on the source and
"running" on the destination, which I guess is good enough for a test
case that doesn't expect failure.
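That two-part wait (migration event, then runstate) can be sketched roughly like this (an illustration of the idea only, not the code from commit 37ff7d70; `get_event` and `query_status` are assumed callables wrapping the VM's QMP connection):

```python
import time

def wait_migration(get_event, query_status, expected_runstate):
    # First drain MIGRATION events until the migration terminates...
    while True:
        status = get_event()['data']['status']
        if status == 'completed':
            break
        if status in ('failed', 'cancelled'):
            raise RuntimeError('migration ended in state %r' % status)
    # ...then also wait for the runstate ('postmigrate' on the source,
    # 'running' on the destination), since "completed" can be emitted
    # while the source is still in finish-migrate.
    while query_status()['return']['status'] != expected_runstate:
        time.sleep(0.1)
    return True
```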

Kevin
Dr. David Alan Gilbert Feb. 10, 2020, 12:31 p.m. UTC | #5
* Kevin Wolf (kwolf@redhat.com) wrote:
> Am 02.01.2020 um 14:25 hat Dr. David Alan Gilbert geschrieben:
> > * Kevin Wolf (kwolf@redhat.com) wrote:
> > > Am 19.12.2019 um 15:26 hat Max Reitz geschrieben:
> > > > On 17.12.19 15:59, Kevin Wolf wrote:
> > > > > This tests creating an external snapshot with VM state (which results in
> > > > > an active overlay over an inactive backing file, which is also the root
> > > > > node of an inactive BlockBackend), re-activating the images and
> > > > > performing some operations to test that the re-activation worked as
> > > > > intended.
> > > > > 
> > > > > Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> > > > 
> > > > [...]
> > > > 
> > > > > diff --git a/tests/qemu-iotests/280.out b/tests/qemu-iotests/280.out
> > > > > new file mode 100644
> > > > > index 0000000000..5d382faaa8
> > > > > --- /dev/null
> > > > > +++ b/tests/qemu-iotests/280.out
> > > > > @@ -0,0 +1,50 @@
> > > > > +Formatting 'TEST_DIR/PID-base', fmt=qcow2 size=67108864 cluster_size=65536 lazy_refcounts=off refcount_bits=16
> > > > > +
> > > > > +=== Launch VM ===
> > > > > +Enabling migration QMP events on VM...
> > > > > +{"return": {}}
> > > > > +
> > > > > +=== Migrate to file ===
> > > > > +{"execute": "migrate", "arguments": {"uri": "exec:cat > /dev/null"}}
> > > > > +{"return": {}}
> > > > > +{"data": {"status": "setup"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
> > > > > +{"data": {"status": "active"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
> > > > > +{"data": {"status": "completed"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
> > > > > +
> > > > > +VM is now stopped:
> > > > > +completed
> > > > > +{"execute": "query-status", "arguments": {}}
> > > > > +{"return": {"running": false, "singlestep": false, "status": "postmigrate"}}
> > > > 
> > > > Hmmm, I get a finish-migrate status here (on tmpfs)...
> > > 
> > > Dave, is it intentional that the "completed" migration event is emitted
> > > while we are still in finish-migration rather than postmigrate?
> > 
> > Yes, it looks like it; it's the migration state machine hitting
> > COMPLETED that then _causes_ the runstate transition to POSTMIGRATE.
> > 
> > static void migration_iteration_finish(MigrationState *s)
> > {
> >     /* If we enabled cpu throttling for auto-converge, turn it off. */
> >     cpu_throttle_stop();
> > 
> >     qemu_mutex_lock_iothread();
> >     switch (s->state) {
> >     case MIGRATION_STATUS_COMPLETED:
> >         migration_calculate_complete(s);
> >         runstate_set(RUN_STATE_POSTMIGRATE);
> >         break;
> > 
> > then there are a bunch of error cases where if it landed in
> > FAILED/CANCELLED etc then we either restart the VM or also go to
> > POSTMIGRATE.
> 
> Yes, I read the code. My question was more if there is a reason why we
> want things to look like this in the external interface.
> 
> I just thought that it was confusing that migration is already called
> completed when it will still change the runstate. But I guess the
> opposite could be confusing as well (if we're in postmigrate, why should
> the migration status still change?)
> 
> > > I guess we could change wait_migration() in qemu-iotests to wait for the
> > > postmigrate state rather than the "completed" event, but maybe it would
> > > be better to change the migration code to avoid similar races in other
> > > QMP clients.
> > 
> > Given that the migration state machine is driving the runstate state
> > machine I think it currently makes sense internally;  (although I don't
> > think it's documented to be in that order or tested to be, which we
> > might want to fix).
> 
> In any case, I seem to remember that it's inconsistent between source
> and destination. On one side, the migration status is updated first, on
> the other side the runstate is updated first.

(Digging through old mails)

That might be partially due to my ed1f30 from 2015 where I moved the
COMPLETED event later - prior to that it was much too early; before
the network announce and before the bdrv_invalidate_cache_all, and I
ended up moving it right to the end - it might have been better to leave
it before the runstate change.



> > Looking at 234 and 262, it looks like you're calling wait_migration on
> > both the source and dest; I don't think the dest will see the
> > POSTMIGRATE.  Also note that, depending on what you're trying to do, with
> > postcopy you'll be running on the destination before you see COMPLETED.
> > 
> > Waiting for the destination to leave 'inmigrate' state is probably
> > the best strategy; then wait for the source to be in postmigrate.
> > You can cause early exits if you see transitions to 'FAILED' - but
> > actually the destination will likely quit in that case; so it should
> > be much rarer for you to hit a timeout on a failed migration.
> 
> Commit 37ff7d70 changed it to wait for "postmigrate" on the source and
> "running" on the destination, which I guess is good enough for a test
> case that doesn't expect failure.

Dave

> Kevin
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
Kevin Wolf Feb. 10, 2020, 1:37 p.m. UTC | #6
Am 10.02.2020 um 13:31 hat Dr. David Alan Gilbert geschrieben:
> * Kevin Wolf (kwolf@redhat.com) wrote:
> > Am 02.01.2020 um 14:25 hat Dr. David Alan Gilbert geschrieben:
> > > * Kevin Wolf (kwolf@redhat.com) wrote:
> > > > Am 19.12.2019 um 15:26 hat Max Reitz geschrieben:
> > > > > On 17.12.19 15:59, Kevin Wolf wrote:
> > > > > > This tests creating an external snapshot with VM state (which results in
> > > > > > an active overlay over an inactive backing file, which is also the root
> > > > > > node of an inactive BlockBackend), re-activating the images and
> > > > > > performing some operations to test that the re-activation worked as
> > > > > > intended.
> > > > > > 
> > > > > > Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> > > > > 
> > > > > [...]
> > > > > 
> > > > > > diff --git a/tests/qemu-iotests/280.out b/tests/qemu-iotests/280.out
> > > > > > new file mode 100644
> > > > > > index 0000000000..5d382faaa8
> > > > > > --- /dev/null
> > > > > > +++ b/tests/qemu-iotests/280.out
> > > > > > @@ -0,0 +1,50 @@
> > > > > > +Formatting 'TEST_DIR/PID-base', fmt=qcow2 size=67108864 cluster_size=65536 lazy_refcounts=off refcount_bits=16
> > > > > > +
> > > > > > +=== Launch VM ===
> > > > > > +Enabling migration QMP events on VM...
> > > > > > +{"return": {}}
> > > > > > +
> > > > > > +=== Migrate to file ===
> > > > > > +{"execute": "migrate", "arguments": {"uri": "exec:cat > /dev/null"}}
> > > > > > +{"return": {}}
> > > > > > +{"data": {"status": "setup"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
> > > > > > +{"data": {"status": "active"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
> > > > > > +{"data": {"status": "completed"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
> > > > > > +
> > > > > > +VM is now stopped:
> > > > > > +completed
> > > > > > +{"execute": "query-status", "arguments": {}}
> > > > > > +{"return": {"running": false, "singlestep": false, "status": "postmigrate"}}
> > > > > 
> > > > > Hmmm, I get a finish-migrate status here (on tmpfs)...
> > > > 
> > > > Dave, is it intentional that the "completed" migration event is emitted
> > > > while we are still in finish-migration rather than postmigrate?
> > > 
> > > Yes, it looks like it; it's the migration state machine hitting
> > > COMPLETED that then _causes_ the runstate transition to POSTMIGRATE.
> > > 
> > > static void migration_iteration_finish(MigrationState *s)
> > > {
> > >     /* If we enabled cpu throttling for auto-converge, turn it off. */
> > >     cpu_throttle_stop();
> > > 
> > >     qemu_mutex_lock_iothread();
> > >     switch (s->state) {
> > >     case MIGRATION_STATUS_COMPLETED:
> > >         migration_calculate_complete(s);
> > >         runstate_set(RUN_STATE_POSTMIGRATE);
> > >         break;
> > > 
> > > then there are a bunch of error cases where if it landed in
> > > FAILED/CANCELLED etc then we either restart the VM or also go to
> > > POSTMIGRATE.
> > 
> > Yes, I read the code. My question was more if there is a reason why we
> > want things to look like this in the external interface.
> > 
> > I just thought that it was confusing that migration is already called
> > completed when it will still change the runstate. But I guess the
> > opposite could be confusing as well (if we're in postmigrate, why should
> > the migration status still change?)
> > 
> > > > I guess we could change wait_migration() in qemu-iotests to wait for the
> > > > postmigrate state rather than the "completed" event, but maybe it would
> > > > be better to change the migration code to avoid similar races in other
> > > > QMP clients.
> > > 
> > > Given that the migration state machine is driving the runstate state
> > > machine I think it currently makes sense internally;  (although I don't
> > > think it's documented to be in that order or tested to be, which we
> > > might want to fix).
> > 
> > In any case, I seem to remember that it's inconsistent between source
> > and destination. On one side, the migration status is updated first, on
> > the other side the runstate is updated first.
> 
> (Digging through old mails)
> 
> That might be partially due to my ed1f30 from 2015 where I moved the
> COMPLETED event later - prior to that it was much too early; before
> the network announce and before the bdrv_invalidate_cache_all, and I
> ended up moving it right to the end - it might have been better to leave
> it before the runstate change.

We are working around this in the qemu-iotests now, so I guess I don't
have a pressing need for a consistent interface any more at the moment.
But if having this kind of inconsistency bothers you, feel free to do
something about it anyway. :-)

Kevin

Patch

diff --git a/tests/qemu-iotests/280 b/tests/qemu-iotests/280
new file mode 100755
index 0000000000..0b1fa8e1d8
--- /dev/null
+++ b/tests/qemu-iotests/280
@@ -0,0 +1,83 @@ 
+#!/usr/bin/env python
+#
+# Copyright (C) 2019 Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+#
+# Creator/Owner: Kevin Wolf <kwolf@redhat.com>
+#
+# Test migration to file for taking an external snapshot with VM state.
+
+import iotests
+import os
+
+iotests.verify_image_format(supported_fmts=['qcow2'])
+iotests.verify_protocol(supported=['file'])
+iotests.verify_platform(['linux'])
+
+with iotests.FilePath('base') as base_path, \
+     iotests.FilePath('top') as top_path, \
+     iotests.VM() as vm:
+
+    iotests.qemu_img_log('create', '-f', iotests.imgfmt, base_path, '64M')
+
+    iotests.log('=== Launch VM ===')
+    vm.add_object('iothread,id=iothread0')
+    vm.add_blockdev('file,filename=%s,node-name=base-file' % (base_path))
+    vm.add_blockdev('%s,file=base-file,node-name=base-fmt' % (iotests.imgfmt))
+    vm.add_device('virtio-blk,drive=base-fmt,iothread=iothread0,id=vda')
+    vm.launch()
+
+    vm.enable_migration_events('VM')
+
+    iotests.log('\n=== Migrate to file ===')
+    vm.qmp_log('migrate', uri='exec:cat > /dev/null')
+
+    with iotests.Timeout(3, 'Migration does not complete'):
+        vm.wait_migration()
+
+    iotests.log('\nVM is now stopped:')
+    iotests.log(vm.qmp('query-migrate')['return']['status'])
+    vm.qmp_log('query-status')
+
+    iotests.log('\n=== Create a snapshot of the disk image ===')
+    vm.blockdev_create({
+        'driver': 'file',
+        'filename': top_path,
+        'size': 0,
+    })
+    vm.qmp_log('blockdev-add', node_name='top-file',
+               driver='file', filename=top_path,
+               filters=[iotests.filter_qmp_testfiles])
+
+    vm.blockdev_create({
+        'driver': iotests.imgfmt,
+        'file': 'top-file',
+        'size': 1024 * 1024,
+    })
+    vm.qmp_log('blockdev-add', node_name='top-fmt',
+               driver=iotests.imgfmt, file='top-file')
+
+    vm.qmp_log('blockdev-snapshot', node='base-fmt', overlay='top-fmt')
+
+    iotests.log('\n=== Resume the VM and simulate a write request ===')
+    vm.qmp_log('cont')
+    iotests.log(vm.hmp_qemu_io('-d vda/virtio-backend', 'write 4k 4k'))
+
+    iotests.log('\n=== Commit it to the backing file ===')
+    result = vm.qmp_log('block-commit', job_id='job0', auto_dismiss=False,
+                        device='top-fmt', top_node='top-fmt',
+                        filters=[iotests.filter_qmp_testfiles])
+    if 'return' in result:
+        vm.run_job('job0')
diff --git a/tests/qemu-iotests/280.out b/tests/qemu-iotests/280.out
new file mode 100644
index 0000000000..5d382faaa8
--- /dev/null
+++ b/tests/qemu-iotests/280.out
@@ -0,0 +1,50 @@ 
+Formatting 'TEST_DIR/PID-base', fmt=qcow2 size=67108864 cluster_size=65536 lazy_refcounts=off refcount_bits=16
+
+=== Launch VM ===
+Enabling migration QMP events on VM...
+{"return": {}}
+
+=== Migrate to file ===
+{"execute": "migrate", "arguments": {"uri": "exec:cat > /dev/null"}}
+{"return": {}}
+{"data": {"status": "setup"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"status": "active"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"status": "completed"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+
+VM is now stopped:
+completed
+{"execute": "query-status", "arguments": {}}
+{"return": {"running": false, "singlestep": false, "status": "postmigrate"}}
+
+=== Create a snapshot of the disk image ===
+{"execute": "blockdev-create", "arguments": {"job-id": "job0", "options": {"driver": "file", "filename": "TEST_DIR/PID-top", "size": 0}}}
+{"return": {}}
+{"execute": "job-dismiss", "arguments": {"id": "job0"}}
+{"return": {}}
+
+{"execute": "blockdev-add", "arguments": {"driver": "file", "filename": "TEST_DIR/PID-top", "node-name": "top-file"}}
+{"return": {}}
+{"execute": "blockdev-create", "arguments": {"job-id": "job0", "options": {"driver": "qcow2", "file": "top-file", "size": 1048576}}}
+{"return": {}}
+{"execute": "job-dismiss", "arguments": {"id": "job0"}}
+{"return": {}}
+
+{"execute": "blockdev-add", "arguments": {"driver": "qcow2", "file": "top-file", "node-name": "top-fmt"}}
+{"return": {}}
+{"execute": "blockdev-snapshot", "arguments": {"node": "base-fmt", "overlay": "top-fmt"}}
+{"return": {}}
+
+=== Resume the VM and simulate a write request ===
+{"execute": "cont", "arguments": {}}
+{"return": {}}
+{"return": ""}
+
+=== Commit it to the backing file ===
+{"execute": "block-commit", "arguments": {"auto-dismiss": false, "device": "top-fmt", "job-id": "job0", "top-node": "top-fmt"}}
+{"return": {}}
+{"execute": "job-complete", "arguments": {"id": "job0"}}
+{"return": {}}
+{"data": {"device": "job0", "len": 65536, "offset": 65536, "speed": 0, "type": "commit"}, "event": "BLOCK_JOB_READY", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "job0", "len": 65536, "offset": 65536, "speed": 0, "type": "commit"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"execute": "job-dismiss", "arguments": {"id": "job0"}}
+{"return": {}}
diff --git a/tests/qemu-iotests/group b/tests/qemu-iotests/group
index eb57ddc72c..cb2b789e44 100644
--- a/tests/qemu-iotests/group
+++ b/tests/qemu-iotests/group
@@ -287,3 +287,4 @@ 
 273 backing quick
 277 rw quick
 279 rw backing quick
+280 rw migration quick