[2/2] migration/multifd: Fix rb->receivedmap cleanup race

Message ID 20240917185802.15619-3-farosas@suse.de
State New
Series migration/multifd: Fix rb->receivedmap cleanup race

Commit Message

Fabiano Rosas Sept. 17, 2024, 6:58 p.m. UTC
Fix a segmentation fault in multifd when rb->receivedmap is cleared
too early.

After commit 5ef7e26bdb ("migration/multifd: solve zero page causing
multiple page faults"), multifd started using the rb->receivedmap
bitmap, which belongs to ram.c and is initialized and *freed* from the
ram SaveVMHandlers.

Multifd threads are live until migration_incoming_state_destroy(),
which is called after qemu_loadvm_state_cleanup(), leading to a crash
when accessing rb->receivedmap.

process_incoming_migration_co()        ...
  qemu_loadvm_state()                  multifd_nocomp_recv()
    qemu_loadvm_state_cleanup()          ramblock_recv_bitmap_set_offset()
      rb->receivedmap = NULL               set_bit_atomic(..., rb->receivedmap)
  ...
  migration_incoming_state_destroy()
    multifd_recv_cleanup()
      multifd_recv_terminate_threads(NULL)
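
To make the race concrete, here is a minimal standalone C sketch of
the two paths (hypothetical, simplified names and types; the real code
reaches the bitmap through ramblock_recv_bitmap_set_offset() and
set_bit_atomic()):

    #include <stdlib.h>

    typedef struct RAMBlock {
        unsigned long *receivedmap;   /* bitmap of pages already received */
    } RAMBlock;

    /* Main thread, qemu_loadvm_state_cleanup() path: frees the bitmap. */
    static void loadvm_cleanup(RAMBlock *rb)
    {
        free(rb->receivedmap);
        rb->receivedmap = NULL;
    }

    /* Multifd recv thread, multifd_nocomp_recv() path: still sets bits. */
    static void recv_bitmap_set(RAMBlock *rb, unsigned long page)
    {
        unsigned long bits = 8 * sizeof(unsigned long);

        /* Dereferences a freed/NULL receivedmap if loadvm_cleanup() has
         * already run; QEMU does this with an atomic bit set. */
        rb->receivedmap[page / bits] |= 1UL << (page % bits);
    }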

Move the loadvm cleanup into migration_incoming_state_destroy(), after
multifd_recv_cleanup(), to ensure the multifd threads have already
exited by the time rb->receivedmap is cleared.
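
For reference, the teardown order in migration_incoming_state_destroy()
after this change becomes (a sketch, elided to the two relevant calls):

    void migration_incoming_state_destroy(void)
    {
        ...
        multifd_recv_cleanup();       /* multifd recv threads exit here */
        qemu_loadvm_state_cleanup();  /* now safe to free rb->receivedmap */
        ...
    }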

Adjust the postcopy listen thread comment to indicate that we still
want to skip the cpu synchronization.

CC: qemu-stable@nongnu.org
Fixes: 5ef7e26bdb ("migration/multifd: solve zero page causing multiple page faults")
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
 migration/migration.c | 1 +
 migration/savevm.c    | 6 ++++--
 2 files changed, 5 insertions(+), 2 deletions(-)

Comments

Peter Xu Sept. 17, 2024, 7:20 p.m. UTC | #1
On Tue, Sep 17, 2024 at 03:58:02PM -0300, Fabiano Rosas wrote:
> [...]
> 
> CC: qemu-stable@nongnu.org
> Fixes: 5ef7e26bdb ("migration/multifd: solve zero page causing multiple page faults")
> Signed-off-by: Fabiano Rosas <farosas@suse.de>

Reviewed-by: Peter Xu <peterx@redhat.com>

One trivial question below...

> ---
>  migration/migration.c | 1 +
>  migration/savevm.c    | 6 ++++--
>  2 files changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/migration/migration.c b/migration/migration.c
> index 3dea06d577..b190a574b1 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -378,6 +378,7 @@ void migration_incoming_state_destroy(void)
>      struct MigrationIncomingState *mis = migration_incoming_get_current();
>  
>      multifd_recv_cleanup();

Would you mind if I add a comment squashed in here when I queue the patch?

       /*
        * RAM state cleanup needs to happen after multifd cleanup, because
        * multifd threads can use some of its states (receivedmap).
        */

> +    qemu_loadvm_state_cleanup();
>  
>      if (mis->to_src_file) {
>          /* Tell source that we are done */
> diff --git a/migration/savevm.c b/migration/savevm.c
> index d0759694fd..7e1e27182a 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -2979,7 +2979,10 @@ int qemu_loadvm_state(QEMUFile *f)
>      trace_qemu_loadvm_state_post_main(ret);
>  
>      if (mis->have_listen_thread) {
> -        /* Listen thread still going, can't clean up yet */
> +        /*
> +         * Postcopy listen thread still going, don't synchronize the
> +         * cpus yet.
> +         */
>          return ret;
>      }
>  
> @@ -3022,7 +3025,6 @@ int qemu_loadvm_state(QEMUFile *f)
>          }
>      }
>  
> -    qemu_loadvm_state_cleanup();
>      cpu_synchronize_all_post_init();
>  
>      return ret;
> -- 
> 2.35.3
>
Fabiano Rosas Sept. 17, 2024, 7:29 p.m. UTC | #2
Peter Xu <peterx@redhat.com> writes:

[...]

> Would you mind if I add a comment squashed in here when I queue the patch?
>
>        /*
>         * RAM state cleanup needs to happen after multifd cleanup, because
>         * multifd threads can use some of its states (receivedmap).
>         */

Yeah, that's ok.

Patch

diff --git a/migration/migration.c b/migration/migration.c
index 3dea06d577..b190a574b1 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -378,6 +378,7 @@ void migration_incoming_state_destroy(void)
     struct MigrationIncomingState *mis = migration_incoming_get_current();
 
     multifd_recv_cleanup();
+    qemu_loadvm_state_cleanup();
 
     if (mis->to_src_file) {
         /* Tell source that we are done */
diff --git a/migration/savevm.c b/migration/savevm.c
index d0759694fd..7e1e27182a 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2979,7 +2979,10 @@ int qemu_loadvm_state(QEMUFile *f)
     trace_qemu_loadvm_state_post_main(ret);
 
     if (mis->have_listen_thread) {
-        /* Listen thread still going, can't clean up yet */
+        /*
+         * Postcopy listen thread still going, don't synchronize the
+         * cpus yet.
+         */
         return ret;
     }
 
@@ -3022,7 +3025,6 @@ int qemu_loadvm_state(QEMUFile *f)
         }
     }
 
-    qemu_loadvm_state_cleanup();
     cpu_synchronize_all_post_init();
 
     return ret;