[v2] migration/ram: leave RAMBlock->bmap blank on allocating

Message ID 20190604002810.12869-1-richardw.yang@linux.intel.com
State New
Series
  • [v2] migration/ram: leave RAMBlock->bmap blank on allocating

Commit Message

Wei Yang June 4, 2019, 12:28 a.m. UTC
During migration, we would sync bitmap from ram_list.dirty_memory to
RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap().

Since both RAMBlock.bmap and ram_list.dirty_memory are initialized to all
1s, the first-round sync is meaningless, duplicated work.

Leaving RAMBlock->bmap blank at allocation has a side effect on
migration_dirty_pages, since that count is derived from the result of
cpu_physical_memory_sync_dirty_bitmap(). To keep it correct, we need to
set migration_dirty_pages to 0 in ram_state_init().

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

---
v2: add a comment explaining why RAMBlock.bmap is left clear
---
 migration/ram.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

Comments

Peter Xu June 4, 2019, 3:51 a.m. UTC | #1
On Tue, Jun 04, 2019 at 08:28:10AM +0800, Wei Yang wrote:
> During migration, we would sync bitmap from ram_list.dirty_memory to
> RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap().
> 
> Since we set RAMBlock.bmap and ram_list.dirty_memory both to all 1, this
> means at the first round this sync is meaningless and is a duplicated
> work.
> 
> Leaving RAMBlock->bmap blank on allocating would have a side effect on
> migration_dirty_pages, since it is calculated from the result of
> cpu_physical_memory_sync_dirty_bitmap(). To keep it right, we need to
> set migration_dirty_pages to 0 in ram_state_init().
> 
> Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> 
> ---
> v2: add a comment explaining why leaving RAMBlock.bmap clear
> ---
>  migration/ram.c | 16 +++++++++-------
>  1 file changed, 9 insertions(+), 7 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 4c60869226..e9a27ea5e6 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -3181,12 +3181,7 @@ static int ram_state_init(RAMState **rsp)
>      qemu_mutex_init(&(*rsp)->src_page_req_mutex);
>      QSIMPLEQ_INIT(&(*rsp)->src_page_requests);
>  
> -    /*
> -     * Count the total number of pages used by ram blocks not including any
> -     * gaps due to alignment or unplugs.
> -     */
> -    (*rsp)->migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
> -

Since you've spent time discovering this relationship, we can also
comment on this too to hint future readers:

       /*
        * This must match with the initial values of dirty bitmap.
        * Currently we initialize the dirty bitmap to all zeros so
        * here the total dirty page count is zero.
        */

> +    (*rsp)->migration_dirty_pages = 0;
>      ram_state_reset(*rsp);
>  
>      return 0;
> @@ -3201,8 +3196,15 @@ static void ram_list_init_bitmaps(void)
>      if (ram_bytes_total()) {
>          RAMBLOCK_FOREACH_NOT_IGNORED(block) {
>              pages = block->max_length >> TARGET_PAGE_BITS;
> +            /*
> +             * We leave RAMBlock.bmap clear here and they will be sync from
> +             * ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] in
> +             * migration_bitmap_sync_precopy().

(This sentence is a bit confusing to me)

> +             *
> +             * ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] is set to all 1
> +             * in ram_block_add().

How about:

    The initial dirty bitmap for migration must be set with all ones
    to make sure we'll migrate every guest RAM page to destination.
    Here we didn't set RAMBlock.bmap simply because it is already set
    in ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] in ram_block_add,
    and that's where we'll sync the dirty bitmaps.  Here setting
    RAMBlock.bmap would be fine too but not necessary.

?

> +             */
>              block->bmap = bitmap_new(pages);
> -            bitmap_set(block->bmap, 0, pages);
>              if (migrate_postcopy_ram()) {
>                  block->unsentmap = bitmap_new(pages);
>                  bitmap_set(block->unsentmap, 0, pages);
> -- 
> 2.19.1
> 

Regards,
Wei Yang June 4, 2019, 5:21 a.m. UTC | #2
On Tue, Jun 04, 2019 at 11:51:24AM +0800, Peter Xu wrote:
>On Tue, Jun 04, 2019 at 08:28:10AM +0800, Wei Yang wrote:
>> During migration, we would sync bitmap from ram_list.dirty_memory to
>> RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap().
>> 
>> Since we set RAMBlock.bmap and ram_list.dirty_memory both to all 1, this
>> means at the first round this sync is meaningless and is a duplicated
>> work.
>> 
>> Leaving RAMBlock->bmap blank on allocating would have a side effect on
>> migration_dirty_pages, since it is calculated from the result of
>> cpu_physical_memory_sync_dirty_bitmap(). To keep it right, we need to
>> set migration_dirty_pages to 0 in ram_state_init().
>> 
>> Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
>> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>> 
>> ---
>> v2: add a comment explaining why leaving RAMBlock.bmap clear
>> ---
>>  migration/ram.c | 16 +++++++++-------
>>  1 file changed, 9 insertions(+), 7 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 4c60869226..e9a27ea5e6 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -3181,12 +3181,7 @@ static int ram_state_init(RAMState **rsp)
>>      qemu_mutex_init(&(*rsp)->src_page_req_mutex);
>>      QSIMPLEQ_INIT(&(*rsp)->src_page_requests);
>>  
>> -    /*
>> -     * Count the total number of pages used by ram blocks not including any
>> -     * gaps due to alignment or unplugs.
>> -     */
>> -    (*rsp)->migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
>> -
>
>Since you've spent time discovering this relationship, we can also
>comment on this too to hint future readers:
>
>       /*
>        * This must match with the initial values of dirty bitmap.
>        * Currently we initialize the dirty bitmap to all zeros so
>        * here the total dirty page count is zero.
>        */
>

You are right.

>> +    (*rsp)->migration_dirty_pages = 0;
>>      ram_state_reset(*rsp);
>>  
>>      return 0;
>> @@ -3201,8 +3196,15 @@ static void ram_list_init_bitmaps(void)
>>      if (ram_bytes_total()) {
>>          RAMBLOCK_FOREACH_NOT_IGNORED(block) {
>>              pages = block->max_length >> TARGET_PAGE_BITS;
>> +            /*
>> +             * We leave RAMBlock.bmap clear here and they will be sync from
>> +             * ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] in
>> +             * migration_bitmap_sync_precopy().
>
>(This sentence is a bit confusing to me)
>
>> +             *
>> +             * ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] is set to all 1
>> +             * in ram_block_add().
>
>How about:
>
>    The initial dirty bitmap for migration must be set with all ones
>    to make sure we'll migrate every guest RAM page to destination.
>    Here we didn't set RAMBlock.bmap simply because it is already set
>    in ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] in ram_block_add,
>    and that's where we'll sync the dirty bitmaps.  Here setting
>    RAMBlock.bmap would be fine too but not necessary.
>
>?
>

Sounds better :-)

>> +             */
>>              block->bmap = bitmap_new(pages);
>> -            bitmap_set(block->bmap, 0, pages);
>>              if (migrate_postcopy_ram()) {
>>                  block->unsentmap = bitmap_new(pages);
>>                  bitmap_set(block->unsentmap, 0, pages);
>> -- 
>> 2.19.1
>> 
>
>Regards,
>
>-- 
>Peter Xu

Patch

diff --git a/migration/ram.c b/migration/ram.c
index 4c60869226..e9a27ea5e6 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3181,12 +3181,7 @@  static int ram_state_init(RAMState **rsp)
     qemu_mutex_init(&(*rsp)->src_page_req_mutex);
     QSIMPLEQ_INIT(&(*rsp)->src_page_requests);
 
-    /*
-     * Count the total number of pages used by ram blocks not including any
-     * gaps due to alignment or unplugs.
-     */
-    (*rsp)->migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
-
+    (*rsp)->migration_dirty_pages = 0;
     ram_state_reset(*rsp);
 
     return 0;
@@ -3201,8 +3196,15 @@  static void ram_list_init_bitmaps(void)
     if (ram_bytes_total()) {
         RAMBLOCK_FOREACH_NOT_IGNORED(block) {
             pages = block->max_length >> TARGET_PAGE_BITS;
+            /*
+             * We leave RAMBlock.bmap clear here and they will be sync from
+             * ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] in
+             * migration_bitmap_sync_precopy().
+             *
+             * ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] is set to all 1
+             * in ram_block_add().
+             */
             block->bmap = bitmap_new(pages);
-            bitmap_set(block->bmap, 0, pages);
             if (migrate_postcopy_ram()) {
                 block->unsentmap = bitmap_new(pages);
                 bitmap_set(block->unsentmap, 0, pages);