migration/ram: leave RAMBlock->bmap blank on allocating

Message ID 20190507031703.856-1-richardw.yang@linux.intel.com
State New
Series
  • migration/ram: leave RAMBlock->bmap blank on allocating

Commit Message

Wei Yang May 7, 2019, 3:17 a.m. UTC
During migration, we sync the dirty bitmap from ram_list.dirty_memory to
RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap().

Since both RAMBlock.bmap and ram_list.dirty_memory are initialized to all
1s, the first-round sync is meaningless, duplicated work.

Leaving RAMBlock->bmap blank on allocating would have a side effect on
migration_dirty_pages, since it is calculated from the result of
cpu_physical_memory_sync_dirty_bitmap(). To keep it right, we need to
set migration_dirty_pages to 0 in ram_state_init().

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
---
 migration/ram.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

Comments

Dr. David Alan Gilbert May 31, 2019, 4:43 p.m. UTC | #1
* Wei Yang (richardw.yang@linux.intel.com) wrote:
> During migration, we would sync bitmap from ram_list.dirty_memory to
> RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap().
> 
> Since we set RAMBlock.bmap and ram_list.dirty_memory both to all 1, this
> means at the first round this sync is meaningless and is a duplicated
> work.
> 
> Leaving RAMBlock->bmap blank on allocating would have a side effect on
> migration_dirty_pages, since it is calculated from the result of
> cpu_physical_memory_sync_dirty_bitmap(). To keep it right, we need to
> set migration_dirty_pages to 0 in ram_state_init().
> 
> Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>

I've looked at this for a while, and I think it's OK, so

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Peter, Juan: Can you just see if there's any reason this would be bad?
I think it's actually more sensible than what we have.

Dave
> ---
>  migration/ram.c | 8 +-------
>  1 file changed, 1 insertion(+), 7 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 95c51109d2..417874707d 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -3151,12 +3151,7 @@ static int ram_state_init(RAMState **rsp)
>      qemu_mutex_init(&(*rsp)->src_page_req_mutex);
>      QSIMPLEQ_INIT(&(*rsp)->src_page_requests);
>  
> -    /*
> -     * Count the total number of pages used by ram blocks not including any
> -     * gaps due to alignment or unplugs.
> -     */
> -    (*rsp)->migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
> -
> +    (*rsp)->migration_dirty_pages = 0;
>      ram_state_reset(*rsp);
>  
>      return 0;
> @@ -3172,7 +3167,6 @@ static void ram_list_init_bitmaps(void)
>          RAMBLOCK_FOREACH_NOT_IGNORED(block) {
>              pages = block->max_length >> TARGET_PAGE_BITS;
>              block->bmap = bitmap_new(pages);
> -            bitmap_set(block->bmap, 0, pages);
>              if (migrate_postcopy_ram()) {
>                  block->unsentmap = bitmap_new(pages);
>                  bitmap_set(block->unsentmap, 0, pages);
> -- 
> 2.19.1
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
Peter Xu June 1, 2019, 3:34 a.m. UTC | #2
On Fri, May 31, 2019 at 05:43:37PM +0100, Dr. David Alan Gilbert wrote:
> * Wei Yang (richardw.yang@linux.intel.com) wrote:
> > During migration, we would sync bitmap from ram_list.dirty_memory to
> > RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap().
> > 
> > Since we set RAMBlock.bmap and ram_list.dirty_memory both to all 1, this
> > means at the first round this sync is meaningless and is a duplicated
> > work.
> > 
> > Leaving RAMBlock->bmap blank on allocating would have a side effect on
> > migration_dirty_pages, since it is calculated from the result of
> > cpu_physical_memory_sync_dirty_bitmap(). To keep it right, we need to
> > set migration_dirty_pages to 0 in ram_state_init().
> > 
> > Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
> 
> I've looked at this for a while, and I think it's OK, so
> 
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> 
> Peter, Juan: Can you just see if there's arny reason this would be bad,
> but I think it's actually more sensible than what we have.

I really suspect it will work in all cases...  Wei, have you done any
test (or better, thorough tests) with this change?  My reasoning of
why we should need the bitmap all set is here:

https://lists.gnu.org/archive/html/qemu-devel/2019-05/msg07361.html

Regards,
Wei Yang June 3, 2019, 1:33 a.m. UTC | #3
On Sat, Jun 01, 2019 at 11:34:41AM +0800, Peter Xu wrote:
>On Fri, May 31, 2019 at 05:43:37PM +0100, Dr. David Alan Gilbert wrote:
>> * Wei Yang (richardw.yang@linux.intel.com) wrote:
>> > During migration, we would sync bitmap from ram_list.dirty_memory to
>> > RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap().
>> > 
>> > Since we set RAMBlock.bmap and ram_list.dirty_memory both to all 1, this
>> > means at the first round this sync is meaningless and is a duplicated
>> > work.
>> > 
>> > Leaving RAMBlock->bmap blank on allocating would have a side effect on
>> > migration_dirty_pages, since it is calculated from the result of
>> > cpu_physical_memory_sync_dirty_bitmap(). To keep it right, we need to
>> > set migration_dirty_pages to 0 in ram_state_init().
>> > 
>> > Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
>> 
>> I've looked at this for a while, and I think it's OK, so
>> 
>> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>> 
>> Peter, Juan: Can you just see if there's arny reason this would be bad,
>> but I think it's actually more sensible than what we have.
>
>I really suspect it will work in all cases...  Wei, have you done any
>test (or better, thorough tests) with this change?  My reasoning of
>why we should need the bitmap all set is here:
>

I have run some migration cases, like migrating a Linux guest over TCP.

Are there other cases you would suggest?

>https://lists.gnu.org/archive/html/qemu-devel/2019-05/msg07361.html
>
>Regards,
>
>-- 
>Peter Xu
Peter Xu June 3, 2019, 2:35 a.m. UTC | #4
On Mon, Jun 03, 2019 at 09:33:05AM +0800, Wei Yang wrote:
> On Sat, Jun 01, 2019 at 11:34:41AM +0800, Peter Xu wrote:
> >On Fri, May 31, 2019 at 05:43:37PM +0100, Dr. David Alan Gilbert wrote:
> >> * Wei Yang (richardw.yang@linux.intel.com) wrote:
> >> > During migration, we would sync bitmap from ram_list.dirty_memory to
> >> > RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap().
> >> > 
> >> > Since we set RAMBlock.bmap and ram_list.dirty_memory both to all 1, this
> >> > means at the first round this sync is meaningless and is a duplicated
> >> > work.
> >> > 
> >> > Leaving RAMBlock->bmap blank on allocating would have a side effect on
> >> > migration_dirty_pages, since it is calculated from the result of
> >> > cpu_physical_memory_sync_dirty_bitmap(). To keep it right, we need to
> >> > set migration_dirty_pages to 0 in ram_state_init().
> >> > 
> >> > Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
> >> 
> >> I've looked at this for a while, and I think it's OK, so
> >> 
> >> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> >> 
> >> Peter, Juan: Can you just see if there's arny reason this would be bad,
> >> but I think it's actually more sensible than what we have.
> >
> >I really suspect it will work in all cases...  Wei, have you done any
> >test (or better, thorough tests) with this change?  My reasoning of
> >why we should need the bitmap all set is here:
> >
> 
> I have done some migration cases, like migrate a linux guest through tcp.

When did you start the migration?  Have you tried to migrate during
some workload?

> 
> Other cases suggested to do?

Could you also help answer the question I raised below in the link?

Thanks,

> >https://lists.gnu.org/archive/html/qemu-devel/2019-05/msg07361.html
Wei Yang June 3, 2019, 3:36 a.m. UTC | #5
On Mon, Jun 03, 2019 at 10:35:27AM +0800, Peter Xu wrote:
>On Mon, Jun 03, 2019 at 09:33:05AM +0800, Wei Yang wrote:
>> On Sat, Jun 01, 2019 at 11:34:41AM +0800, Peter Xu wrote:
>> >On Fri, May 31, 2019 at 05:43:37PM +0100, Dr. David Alan Gilbert wrote:
>> >> * Wei Yang (richardw.yang@linux.intel.com) wrote:
>> >> > During migration, we would sync bitmap from ram_list.dirty_memory to
>> >> > RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap().
>> >> > 
>> >> > Since we set RAMBlock.bmap and ram_list.dirty_memory both to all 1, this
>> >> > means at the first round this sync is meaningless and is a duplicated
>> >> > work.
>> >> > 
>> >> > Leaving RAMBlock->bmap blank on allocating would have a side effect on
>> >> > migration_dirty_pages, since it is calculated from the result of
>> >> > cpu_physical_memory_sync_dirty_bitmap(). To keep it right, we need to
>> >> > set migration_dirty_pages to 0 in ram_state_init().
>> >> > 
>> >> > Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
>> >> 
>> >> I've looked at this for a while, and I think it's OK, so
>> >> 
>> >> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>> >> 
>> >> Peter, Juan: Can you just see if there's arny reason this would be bad,
>> >> but I think it's actually more sensible than what we have.
>> >
>> >I really suspect it will work in all cases...  Wei, have you done any
>> >test (or better, thorough tests) with this change?  My reasoning of
>> >why we should need the bitmap all set is here:
>> >
>> 
>> I have done some migration cases, like migrate a linux guest through tcp.
>
>When did you start the migration?  Have you tried to migrate during
>some workload?
>

I tried kernel build in guest.

>> 
>> Other cases suggested to do?
>
>Could you also help answer the question I raised below in the link?
>
>Thanks,
>
>> >https://lists.gnu.org/archive/html/qemu-devel/2019-05/msg07361.html
>

I took a look at this link; I hope my understanding is correct.

Your concern is that this thread/patch rests on one prerequisite -- dirtying the
whole bitmap at the start.

My answer is that we already do this in ram_block_add() for each RAMBlock. The
bitmap is then synced by migration_bitmap_sync_precopy() from
ram_list.dirty_memory to RAMBlock.bmap.


>-- 
>Peter Xu
Peter Xu June 3, 2019, 5:40 a.m. UTC | #6
On Mon, Jun 03, 2019 at 11:36:00AM +0800, Wei Yang wrote:
> On Mon, Jun 03, 2019 at 10:35:27AM +0800, Peter Xu wrote:
> >On Mon, Jun 03, 2019 at 09:33:05AM +0800, Wei Yang wrote:
> >> On Sat, Jun 01, 2019 at 11:34:41AM +0800, Peter Xu wrote:
> >> >On Fri, May 31, 2019 at 05:43:37PM +0100, Dr. David Alan Gilbert wrote:
> >> >> * Wei Yang (richardw.yang@linux.intel.com) wrote:
> >> >> > During migration, we would sync bitmap from ram_list.dirty_memory to
> >> >> > RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap().
> >> >> > 
> >> >> > Since we set RAMBlock.bmap and ram_list.dirty_memory both to all 1, this
> >> >> > means at the first round this sync is meaningless and is a duplicated
> >> >> > work.
> >> >> > 
> >> >> > Leaving RAMBlock->bmap blank on allocating would have a side effect on
> >> >> > migration_dirty_pages, since it is calculated from the result of
> >> >> > cpu_physical_memory_sync_dirty_bitmap(). To keep it right, we need to
> >> >> > set migration_dirty_pages to 0 in ram_state_init().
> >> >> > 
> >> >> > Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
> >> >> 
> >> >> I've looked at this for a while, and I think it's OK, so
> >> >> 
> >> >> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> >> >> 
> >> >> Peter, Juan: Can you just see if there's arny reason this would be bad,
> >> >> but I think it's actually more sensible than what we have.
> >> >
> >> >I really suspect it will work in all cases...  Wei, have you done any
> >> >test (or better, thorough tests) with this change?  My reasoning of
> >> >why we should need the bitmap all set is here:
> >> >
> >> 
> >> I have done some migration cases, like migrate a linux guest through tcp.
> >
> >When did you start the migration?  Have you tried to migrate during
> >some workload?
> >
> 
> I tried kernel build in guest.
> 
> >> 
> >> Other cases suggested to do?
> >
> >Could you also help answer the question I raised below in the link?
> >
> >Thanks,
> >
> >> >https://lists.gnu.org/archive/html/qemu-devel/2019-05/msg07361.html
> >
> 
> I took a look into this link, hope my understanding is correct.
> 
> You concern is this thread/patch is based on one prerequisite -- dirty all the
> bitmap at start.
> 
> My answer is we already did it in ram_block_add() for each RAMBlock. And then
> the bitmap is synced by migration_bitmap_sync_precopy() from
> ram_list.dirty_memory to RAMBlock.bmap.

Ah I see, thanks for the pointer.  Then I would agree it's fine.

I'm not an expert on TCG - I'm curious why all three of those dirty
bitmaps need to be set at the very beginning.  IIUC at least the VGA
bitmap should not require that (so IMHO we should be fine with an
all-zeros VGA bitmap for ramblocks, setting bits only when the guest
touches the pages).  The migration bitmap may be special somehow, but I
don't know much about the TCG/TLB part, I confess, so I can't say.  In
other words, if migration is the only user that requires this "all-1"
initialization, then IMHO we should consider removing it in the other
parts rather than here in migration, because migration is the part we'd
better be sure about.

And even if you want to remove this, I still have two suggestions:

(1) add a proper comment above bmap on the fact that although bmap is
    not set here, it is effectively set somewhere else, because we will
    sooner or later copy all 1s into it from the ramblock bitmap

(2) IMHO you can move "migration_dirty_pages = 0" into
    ram_list_init_bitmaps() too, so that they stay together
Wei Yang June 3, 2019, 6:05 a.m. UTC | #7
On Mon, Jun 03, 2019 at 01:40:13PM +0800, Peter Xu wrote:
>
>Ah I see, thanks for the pointer.  Then I would agree it's fine.
>
>I'm not an expert of TCG - I'm curious on why all those three dirty
>bitmaps need to be set at the very beginning.  IIUC at least the VGA
>bitmap should not require that (so IMHO we should be fine to have all
>zeros with VGA bitmap for ramblocks, and we only set them when the
>guest touches them).  Migration bitmap should be special somehow but I
>don't know much on TCG/TLB part I'd confess so I can't say.  In other
>words, if migration is the only one that requires this "all-1"
>initialization then IMHO we may consider to remove the other part
>rather than here in migration because that's what we'd better to be
>sure with.

I am not sure about the background here, so I didn't make a change at this
place.

>
>And even if you want to remove this, I still have two suggestions:
>
>(1) proper comment here above bmap on the above fact that although
>    bmap is not set here but it's actually set somewhere else because
>    we'll sooner or later copy all 1s from the ramblock bitmap
>
>(2) imho you can move "migration_dirty_pages = 0" into
>    ram_list_init_bitmaps() too to let them be together
>

I will address these two comments and send v2.

Thanks.

>-- 
>Peter Xu
Wei Yang June 3, 2019, 6:10 a.m. UTC | #8
On Mon, Jun 03, 2019 at 02:05:47PM +0800, Wei Yang wrote:
>On Mon, Jun 03, 2019 at 01:40:13PM +0800, Peter Xu wrote:
>>
>>Ah I see, thanks for the pointer.  Then I would agree it's fine.
>>
>>I'm not an expert of TCG - I'm curious on why all those three dirty
>>bitmaps need to be set at the very beginning.  IIUC at least the VGA
>>bitmap should not require that (so IMHO we should be fine to have all
>>zeros with VGA bitmap for ramblocks, and we only set them when the
>>guest touches them).  Migration bitmap should be special somehow but I
>>don't know much on TCG/TLB part I'd confess so I can't say.  In other
>>words, if migration is the only one that requires this "all-1"
>>initialization then IMHO we may consider to remove the other part
>>rather than here in migration because that's what we'd better to be
>>sure with.
>
>I am not sure about the background here, so I didn't make a change at this
>place.
>
>>
>>And even if you want to remove this, I still have two suggestions:
>>
>>(1) proper comment here above bmap on the above fact that although
>>    bmap is not set here but it's actually set somewhere else because
>>    we'll sooner or later copy all 1s from the ramblock bitmap
>>
>>(2) imho you can move "migration_dirty_pages = 0" into
>>    ram_list_init_bitmaps() too to let them be together
>>

I took a look at this one.

ram_list_init_bitmaps() sets up the bitmap for each RAMBlock, while
ram_state_init() sets up RAMState. Since migration_dirty_pages belongs to
RAMState, it may be more proper to leave it in its original place.

Does that sound good to you?

>
>I will address these two comments and send v2.
>
>Thanks.
>
>>-- 
>>Peter Xu
>
>-- 
>Wei Yang
>Help you, Help me
Peter Xu June 3, 2019, 6:35 a.m. UTC | #9
On Mon, Jun 03, 2019 at 02:10:34PM +0800, Wei Yang wrote:
> On Mon, Jun 03, 2019 at 02:05:47PM +0800, Wei Yang wrote:
> >On Mon, Jun 03, 2019 at 01:40:13PM +0800, Peter Xu wrote:
> >>
> >>Ah I see, thanks for the pointer.  Then I would agree it's fine.
> >>
> >>I'm not an expert of TCG - I'm curious on why all those three dirty
> >>bitmaps need to be set at the very beginning.  IIUC at least the VGA
> >>bitmap should not require that (so IMHO we should be fine to have all
> >>zeros with VGA bitmap for ramblocks, and we only set them when the
> >>guest touches them).  Migration bitmap should be special somehow but I
> >>don't know much on TCG/TLB part I'd confess so I can't say.  In other
> >>words, if migration is the only one that requires this "all-1"
> >>initialization then IMHO we may consider to remove the other part
> >>rather than here in migration because that's what we'd better to be
> >>sure with.
> >
> >I am not sure about the background here, so I didn't make a change at this
> >place.
> >
> >>
> >>And even if you want to remove this, I still have two suggestions:
> >>
> >>(1) proper comment here above bmap on the above fact that although
> >>    bmap is not set here but it's actually set somewhere else because
> >>    we'll sooner or later copy all 1s from the ramblock bitmap
> >>
> >>(2) imho you can move "migration_dirty_pages = 0" into
> >>    ram_list_init_bitmaps() too to let them be together
> >>
> 
> I took a look into this one.
> 
> ram_list_init_bitmaps() setup bitmap for each RAMBlock, while ram_state_init()
> setup RAMState. Since migration_dirty_pages belongs to RAMState, it maybe more
> proper to leave it at the original place.
> 
> Do you feel good about this?

Yes it's ok to me.  Thanks,

Patch

diff --git a/migration/ram.c b/migration/ram.c
index 95c51109d2..417874707d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3151,12 +3151,7 @@  static int ram_state_init(RAMState **rsp)
     qemu_mutex_init(&(*rsp)->src_page_req_mutex);
     QSIMPLEQ_INIT(&(*rsp)->src_page_requests);
 
-    /*
-     * Count the total number of pages used by ram blocks not including any
-     * gaps due to alignment or unplugs.
-     */
-    (*rsp)->migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
-
+    (*rsp)->migration_dirty_pages = 0;
     ram_state_reset(*rsp);
 
     return 0;
@@ -3172,7 +3167,6 @@  static void ram_list_init_bitmaps(void)
         RAMBLOCK_FOREACH_NOT_IGNORED(block) {
             pages = block->max_length >> TARGET_PAGE_BITS;
             block->bmap = bitmap_new(pages);
-            bitmap_set(block->bmap, 0, pages);
             if (migrate_postcopy_ram()) {
                 block->unsentmap = bitmap_new(pages);
                 bitmap_set(block->unsentmap, 0, pages);