
[2/3] migration: Drop unnecessary check in ram's pending_exact()

Message ID 20240117075848.139045-3-peterx@redhat.com (mailing list archive)
State New, archived
Series migration: some small cleanups

Commit Message

Peter Xu Jan. 17, 2024, 7:58 a.m. UTC
From: Peter Xu <peterx@redhat.com>

When the migration framework fetches the exact pending sizes, it means
this check:

  remaining_size < s->threshold_size

must have been done already, namely at migration_iteration_run():

    if (must_precopy <= s->threshold_size) {
        qemu_savevm_state_pending_exact(&must_precopy, &can_postcopy);

That happens after one round of ram_state_pending_estimate().  It makes
the second check meaningless, so it can be dropped.

To put it another way: when we reach ->state_pending_exact(), we now
unconditionally sync dirty bits for precopy.

Then we can drop migrate_get_current() there too.
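
For context, the relevant part of the caller looks roughly like this
(trimmed sketch, not the verbatim source): the estimate hook runs first,
and the exact hook is only invoked once the estimated must_precopy is
already at or below the threshold:

    /* simplified sketch of migration_iteration_run() */
    qemu_savevm_state_pending_estimate(&must_precopy, &can_postcopy);
    pending_size = must_precopy + can_postcopy;

    if (must_precopy <= s->threshold_size) {
        /* refresh with the (more expensive) exact numbers */
        qemu_savevm_state_pending_exact(&must_precopy, &can_postcopy);
        pending_size = must_precopy + can_postcopy;
    }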

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/ram.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

Comments

Fabiano Rosas Jan. 19, 2024, 1:25 p.m. UTC | #1
peterx@redhat.com writes:

> From: Peter Xu <peterx@redhat.com>
>
> When the migration frameworks fetches the exact pending sizes, it means
> this check:
>
>   remaining_size < s->threshold_size
>
> Must have been done already, actually at migration_iteration_run():
>
>     if (must_precopy <= s->threshold_size) {
>         qemu_savevm_state_pending_exact(&must_precopy, &can_postcopy);
>
> That should be after one round of ram_state_pending_estimate().  It makes
> the 2nd check meaningless and can be dropped.
>
> To say it in another way, when reaching ->state_pending_exact(), we
> unconditionally sync dirty bits for precopy.
>
> Then we can drop migrate_get_current() there too.
>
> Signed-off-by: Peter Xu <peterx@redhat.com>

Reviewed-by: Fabiano Rosas <farosas@suse.de>
Nina Schoetterl-Glausch March 20, 2024, 5:51 p.m. UTC | #2
On Wed, 2024-01-17 at 15:58 +0800, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> When the migration frameworks fetches the exact pending sizes, it means
> this check:
> 
>   remaining_size < s->threshold_size
> 
> Must have been done already, actually at migration_iteration_run():
> 
>     if (must_precopy <= s->threshold_size) {
>         qemu_savevm_state_pending_exact(&must_precopy, &can_postcopy);
> 
> That should be after one round of ram_state_pending_estimate().  It makes
> the 2nd check meaningless and can be dropped.
> 
> To say it in another way, when reaching ->state_pending_exact(), we
> unconditionally sync dirty bits for precopy.
> 
> Then we can drop migrate_get_current() there too.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>

Hi Peter,

could you have a look at this issue:
https://gitlab.com/qemu-project/qemu/-/issues/1565

which I reopened. Previous thread here:

https://lore.kernel.org/qemu-devel/20230324184129.3119575-1-nsg@linux.ibm.com/

I'm seeing migration failures with s390x TCG again, which look the same to me
as those from a while back.

> ---
>  migration/ram.c | 9 ++++-----
>  1 file changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index c0cdcccb75..d5b7cd5ac2 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -3213,21 +3213,20 @@ static void ram_state_pending_estimate(void *opaque, uint64_t *must_precopy,
>  static void ram_state_pending_exact(void *opaque, uint64_t *must_precopy,
>                                      uint64_t *can_postcopy)
>  {
> -    MigrationState *s = migrate_get_current();
>      RAMState **temp = opaque;
>      RAMState *rs = *temp;
> +    uint64_t remaining_size;
>  
> -    uint64_t remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
> -
> -    if (!migration_in_postcopy() && remaining_size < s->threshold_size) {
> +    if (!migration_in_postcopy()) {
>          bql_lock();
>          WITH_RCU_READ_LOCK_GUARD() {
>              migration_bitmap_sync_precopy(rs, false);
>          }
>          bql_unlock();
> -        remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
>      }
>  
> +    remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
> +
>      if (migrate_postcopy_ram()) {
>          /* We can do postcopy, and all the data is postcopiable */
>          *can_postcopy += remaining_size;

This basically reverts 28ef5339c3 ("migration: fix ram_state_pending_exact()"), which originally
made the issue disappear.

Any thoughts on the matter appreciated.

Thanks,
Nina
Nina Schoetterl-Glausch March 20, 2024, 6:05 p.m. UTC | #3
I cc'ed Juan, but it looks like he is no longer with Red Hat.
Peter Xu March 20, 2024, 6:57 p.m. UTC | #4
On Wed, Mar 20, 2024 at 06:51:26PM +0100, Nina Schoetterl-Glausch wrote:
> On Wed, 2024-01-17 at 15:58 +0800, peterx@redhat.com wrote:
> > From: Peter Xu <peterx@redhat.com>
> > 
> > When the migration frameworks fetches the exact pending sizes, it means
> > this check:
> > 
> >   remaining_size < s->threshold_size
> > 
> > Must have been done already, actually at migration_iteration_run():
> > 
> >     if (must_precopy <= s->threshold_size) {
> >         qemu_savevm_state_pending_exact(&must_precopy, &can_postcopy);
> > 
> > That should be after one round of ram_state_pending_estimate().  It makes
> > the 2nd check meaningless and can be dropped.
> > 
> > To say it in another way, when reaching ->state_pending_exact(), we
> > unconditionally sync dirty bits for precopy.
> > 
> > Then we can drop migrate_get_current() there too.
> > 
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> 
> Hi Peter,

Hi, Nina,

> 
> could you have a look at this issue:
> https://gitlab.com/qemu-project/qemu/-/issues/1565
> 
> which I reopened. Previous thread here:
> 
> https://lore.kernel.org/qemu-devel/20230324184129.3119575-1-nsg@linux.ibm.com/
> 
> I'm seeing migration failures with s390x TCG again, which look the same to me
> as those a while back.

I'm still quite confused how that could be caused by this.

What you described in the previous bug report seems to imply some page was
left over during migration, so a page got corrupted after being migrated.

However, what this patch mostly does is make us sync more than before, even
if I overlooked a condition check there (I still think the check is
redundant; there's one outlier when remaining_size == threshold_size, but I
don't think it should matter here as of now).  It would make more sense if
this patch made us sync less, but the opposite is the case.
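
To make that concrete, the behavioral change in the hook is roughly this
(condensed from the patch; locking details omitted):

    /* before: only sync when already below the threshold */
    if (!migration_in_postcopy() && remaining_size < s->threshold_size) {
        migration_bitmap_sync_precopy(rs, false);
        remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
    }

    /* after: always sync for precopy; the remaining_size == threshold_size
     * outlier now syncs where it previously did not */
    if (!migration_in_postcopy()) {
        migration_bitmap_sync_precopy(rs, false);
    }
    remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;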

> 
> > ---
> >  migration/ram.c | 9 ++++-----
> >  1 file changed, 4 insertions(+), 5 deletions(-)
> > 
> > diff --git a/migration/ram.c b/migration/ram.c
> > index c0cdcccb75..d5b7cd5ac2 100644
> > --- a/migration/ram.c
> > +++ b/migration/ram.c
> > @@ -3213,21 +3213,20 @@ static void ram_state_pending_estimate(void *opaque, uint64_t *must_precopy,
> >  static void ram_state_pending_exact(void *opaque, uint64_t *must_precopy,
> >                                      uint64_t *can_postcopy)
> >  {
> > -    MigrationState *s = migrate_get_current();
> >      RAMState **temp = opaque;
> >      RAMState *rs = *temp;
> > +    uint64_t remaining_size;
> >  
> > -    uint64_t remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
> > -
> > -    if (!migration_in_postcopy() && remaining_size < s->threshold_size) {
> > +    if (!migration_in_postcopy()) {
> >          bql_lock();
> >          WITH_RCU_READ_LOCK_GUARD() {
> >              migration_bitmap_sync_precopy(rs, false);
> >          }
> >          bql_unlock();
> > -        remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
> >      }
> >  
> > +    remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
> > +
> >      if (migrate_postcopy_ram()) {
> >          /* We can do postcopy, and all the data is postcopiable */
> >          *can_postcopy += remaining_size;
> 
> This basically reverts 28ef5339c3 ("migration: fix ram_state_pending_exact()"), which originally
> made the issue disappear.
> 
> Any thoughts on the matter appreciated.

In the previous discussion, you mentioned that you bisected to the commit
and also verified the fix.  Now you also mentioned in the issue that you
can't reproduce this bug manually.

Is it still possible to reproduce it with some scripts?  Do you also mean
that it's harder to reproduce compared to before?  In any case, some way
to reproduce it would definitely be helpful.

Even if we want to revert this change, we'll need to know whether the revert
fixes your case, so we need some way to verify it first.  I'll consider a
revert the last resort though, as I have a feeling this is papering over
something else.

Thanks,
Nina Schoetterl-Glausch March 20, 2024, 7:21 p.m. UTC | #5
On Wed, 2024-03-20 at 14:57 -0400, Peter Xu wrote:
> On Wed, Mar 20, 2024 at 06:51:26PM +0100, Nina Schoetterl-Glausch wrote:
> > On Wed, 2024-01-17 at 15:58 +0800, peterx@redhat.com wrote:
> > > From: Peter Xu <peterx@redhat.com>
> > > 
> > > When the migration frameworks fetches the exact pending sizes, it means
> > > this check:
> > > 
> > >   remaining_size < s->threshold_size
> > > 
> > > Must have been done already, actually at migration_iteration_run():
> > > 
> > >     if (must_precopy <= s->threshold_size) {
> > >         qemu_savevm_state_pending_exact(&must_precopy, &can_postcopy);
> > > 
> > > That should be after one round of ram_state_pending_estimate().  It makes
> > > the 2nd check meaningless and can be dropped.
> > > 
> > > To say it in another way, when reaching ->state_pending_exact(), we
> > > unconditionally sync dirty bits for precopy.
> > > 
> > > Then we can drop migrate_get_current() there too.
> > > 
> > > Signed-off-by: Peter Xu <peterx@redhat.com>
> > 
> > Hi Peter,
> 
> Hi, Nina,
> 
> > 
> > could you have a look at this issue:
> > https://gitlab.com/qemu-project/qemu/-/issues/1565
> > 
> > which I reopened. Previous thread here:
> > 
> > https://lore.kernel.org/qemu-devel/20230324184129.3119575-1-nsg@linux.ibm.com/
> > 
> > I'm seeing migration failures with s390x TCG again, which look the same to me
> > as those a while back.
> 
> I'm still quite confused how that could be caused of this.
> 
> What you described in the previous bug report seems to imply some page was
> leftover in migration so some page got corrupted after migrated.
> 
> However what this patch mostly does is it can sync more than before even if
> I overlooked the condition check there (I still think the check is
> redundant, there's one outlier when remaining_size == threshold_size, but I
> don't think it should matter here as of now).  It'll make more sense if
> this patch made the sync less, but that's not the case but vice versa.

[...]

> In the previous discussion, you mentioned that you bisected to the commit
> and also verified the fix.  Now you also mentioned in the bz that you can't
> reporduce this bug manually.
> 
> Is it still possible to be reproduced with some scripts?  Do you also mean
> that it's harder to reproduce comparing to before?  In all cases, some way
> to reproduce it would definitely be helpful.

I tried running the kvm-unit-test a bunch of times in a loop and couldn't
trigger a failure. I just tried again on a different system and managed just
fine, yay. No idea why it wouldn't trigger on the first system, though.
> 
> Even if we want to revert this change, we'll need to know whether this will
> fix your case so we need something to verify it before a revert.  I'll
> consider that the last though as I had a feeling this is papering over
> something else.

I can check if I can reproduce the issue before & after b0504edd ("migration:
Drop unnecessary check in ram's pending_exact()").
I can also check if I can reproduce it on x86; that worked last time.
Anything else? Ideas on how to pinpoint where the corruption happens?

> 
> Thanks,
>
Peter Xu March 20, 2024, 7:46 p.m. UTC | #6
On Wed, Mar 20, 2024 at 08:21:30PM +0100, Nina Schoetterl-Glausch wrote:
> On Wed, 2024-03-20 at 14:57 -0400, Peter Xu wrote:
> > On Wed, Mar 20, 2024 at 06:51:26PM +0100, Nina Schoetterl-Glausch wrote:
> > > On Wed, 2024-01-17 at 15:58 +0800, peterx@redhat.com wrote:
> > > > From: Peter Xu <peterx@redhat.com>
> > > > 
> > > > When the migration frameworks fetches the exact pending sizes, it means
> > > > this check:
> > > > 
> > > >   remaining_size < s->threshold_size
> > > > 
> > > > Must have been done already, actually at migration_iteration_run():
> > > > 
> > > >     if (must_precopy <= s->threshold_size) {
> > > >         qemu_savevm_state_pending_exact(&must_precopy, &can_postcopy);
> > > > 
> > > > That should be after one round of ram_state_pending_estimate().  It makes
> > > > the 2nd check meaningless and can be dropped.
> > > > 
> > > > To say it in another way, when reaching ->state_pending_exact(), we
> > > > unconditionally sync dirty bits for precopy.
> > > > 
> > > > Then we can drop migrate_get_current() there too.
> > > > 
> > > > Signed-off-by: Peter Xu <peterx@redhat.com>
> > > 
> > > Hi Peter,
> > 
> > Hi, Nina,
> > 
> > > 
> > > could you have a look at this issue:
> > > https://gitlab.com/qemu-project/qemu/-/issues/1565
> > > 
> > > which I reopened. Previous thread here:
> > > 
> > > https://lore.kernel.org/qemu-devel/20230324184129.3119575-1-nsg@linux.ibm.com/
> > > 
> > > I'm seeing migration failures with s390x TCG again, which look the same to me
> > > as those a while back.
> > 
> > I'm still quite confused how that could be caused of this.
> > 
> > What you described in the previous bug report seems to imply some page was
> > leftover in migration so some page got corrupted after migrated.
> > 
> > However what this patch mostly does is it can sync more than before even if
> > I overlooked the condition check there (I still think the check is
> > redundant, there's one outlier when remaining_size == threshold_size, but I
> > don't think it should matter here as of now).  It'll make more sense if
> > this patch made the sync less, but that's not the case but vice versa.
> 
> [...]
> 
> > In the previous discussion, you mentioned that you bisected to the commit
> > and also verified the fix.  Now you also mentioned in the bz that you can't
> > reporduce this bug manually.
> > 
> > Is it still possible to be reproduced with some scripts?  Do you also mean
> > that it's harder to reproduce comparing to before?  In all cases, some way
> > to reproduce it would definitely be helpful.
> 
> I tried running the kvm-unit-test a bunch of times in a loop and couldn't
> trigger a failure. I just tried again on a different system and managed just
> fine, yay. No idea why it wouldn't on the first system tho.

There's probably still a bug somewhere.  If the reproduction rate changed,
that's also a sign that it might not be directly related to this change, as
otherwise it should reproduce the same as before.

> > 
> > Even if we want to revert this change, we'll need to know whether this will
> > fix your case so we need something to verify it before a revert.  I'll
> > consider that the last though as I had a feeling this is papering over
> > something else.
> 
> I can check if I can reproduce the issue before & after b0504edd ("migration:
> Drop unnecessary check in ram's pending_exact()").
> I can also check if I can reproduce it on x86, that worked last time.
> Anything else? Ideas on how to pinpoint where the corruption happens?

I don't have a solid clue yet, but more information about the single case
where it reproduced could help.

I saw from the bug link that the cmdline is pretty simple.  However, I'm
still not sure about a few things that could be relevant.  E.g., did you use
postcopy (including the case where postcopy-ram is enabled but precopy
completed)?  Is there any special device, like s390's CMMA (would that
simplest cmdline include such a device? apologies, I had zero knowledge
there before today)?

I _think_ that while reading the code I already found something quite
unusual, but only when postcopy is selected: I notice postcopy will
frequently sync the dirty bitmap when it doesn't really need to, because
ram_state_pending_estimate() reports all RAM as "can_postcopy"; that means
this check will be true 99.999% of the time, simply because must_precopy
will in most cases be zero:

    if (must_precopy <= s->threshold_size) { <---------------------------- here
        qemu_savevm_state_pending_exact(&must_precopy, &can_postcopy);
        pending_size = must_precopy + can_postcopy;
        trace_migrate_pending_exact(pending_size, must_precopy, can_postcopy);
    }

I need to think more about this, but it doesn't sound right at all.  There's
no such issue with precopy-only, and I'm surprised it has been like that for
years.
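
For reference, a trimmed sketch of the estimate hook (simplified from
migration/ram.c): when postcopy-ram is enabled it attributes all remaining
RAM to can_postcopy and leaves must_precopy untouched, which is what makes
the threshold check above pass almost every time:

    static void ram_state_pending_estimate(void *opaque, uint64_t *must_precopy,
                                           uint64_t *can_postcopy)
    {
        RAMState **temp = opaque;
        RAMState *rs = *temp;
        uint64_t remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;

        if (migrate_postcopy_ram()) {
            /* all of it is reported as postcopiable */
            *can_postcopy += remaining_size;
        } else {
            *must_precopy += remaining_size;
        }
    }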
Peter Xu March 20, 2024, 8:45 p.m. UTC | #7
On Wed, Mar 20, 2024 at 03:46:44PM -0400, Peter Xu wrote:
> On Wed, Mar 20, 2024 at 08:21:30PM +0100, Nina Schoetterl-Glausch wrote:
> > On Wed, 2024-03-20 at 14:57 -0400, Peter Xu wrote:
> > > On Wed, Mar 20, 2024 at 06:51:26PM +0100, Nina Schoetterl-Glausch wrote:
> > > > On Wed, 2024-01-17 at 15:58 +0800, peterx@redhat.com wrote:
> > > > > From: Peter Xu <peterx@redhat.com>
> > > > > 
> > > > > When the migration frameworks fetches the exact pending sizes, it means
> > > > > this check:
> > > > > 
> > > > >   remaining_size < s->threshold_size
> > > > > 
> > > > > Must have been done already, actually at migration_iteration_run():
> > > > > 
> > > > >     if (must_precopy <= s->threshold_size) {
> > > > >         qemu_savevm_state_pending_exact(&must_precopy, &can_postcopy);
> > > > > 
> > > > > That should be after one round of ram_state_pending_estimate().  It makes
> > > > > the 2nd check meaningless and can be dropped.
> > > > > 
> > > > > To say it in another way, when reaching ->state_pending_exact(), we
> > > > > unconditionally sync dirty bits for precopy.
> > > > > 
> > > > > Then we can drop migrate_get_current() there too.
> > > > > 
> > > > > Signed-off-by: Peter Xu <peterx@redhat.com>
> > > > 
> > > > Hi Peter,
> > > 
> > > Hi, Nina,
> > > 
> > > > 
> > > > could you have a look at this issue:
> > > > https://gitlab.com/qemu-project/qemu/-/issues/1565
> > > > 
> > > > which I reopened. Previous thread here:
> > > > 
> > > > https://lore.kernel.org/qemu-devel/20230324184129.3119575-1-nsg@linux.ibm.com/
> > > > 
> > > > I'm seeing migration failures with s390x TCG again, which look the same to me
> > > > as those a while back.
> > > 
> > > I'm still quite confused how that could be caused of this.
> > > 
> > > What you described in the previous bug report seems to imply some page was
> > > leftover in migration so some page got corrupted after migrated.
> > > 
> > > However what this patch mostly does is it can sync more than before even if
> > > I overlooked the condition check there (I still think the check is
> > > redundant, there's one outlier when remaining_size == threshold_size, but I
> > > don't think it should matter here as of now).  It'll make more sense if
> > > this patch made the sync less, but that's not the case but vice versa.
> > 
> > [...]
> > 
> > > In the previous discussion, you mentioned that you bisected to the commit
> > > and also verified the fix.  Now you also mentioned in the bz that you can't
> > > reporduce this bug manually.
> > > 
> > > Is it still possible to be reproduced with some scripts?  Do you also mean
> > > that it's harder to reproduce comparing to before?  In all cases, some way
> > > to reproduce it would definitely be helpful.
> > 
> > I tried running the kvm-unit-test a bunch of times in a loop and couldn't
> > trigger a failure. I just tried again on a different system and managed just
> > fine, yay. No idea why it wouldn't on the first system tho.
> 
> There's probably still a bug somewhere.  If reproduction rate changed, it's
> also a sign that it might not be directly relevant to this change, as
> otherwise it should reproduce the same as before.
> 
> > > 
> > > Even if we want to revert this change, we'll need to know whether this will
> > > fix your case so we need something to verify it before a revert.  I'll
> > > consider that the last though as I had a feeling this is papering over
> > > something else.
> > 
> > I can check if I can reproduce the issue before & after b0504edd ("migration:
> > Drop unnecessary check in ram's pending_exact()").
> > I can also check if I can reproduce it on x86, that worked last time.
> > Anything else? Ideas on how to pinpoint where the corruption happens?
> 
> I don't have a solid clue yet, but more information of the single case
> where it reproduced could help.
> 
> I saw from the bug link that the cmdline is pretty simple.  However still
> not sure of something that can be relevant.  E.g., did you use postcopy
> (including when postcopy-ram enabled but precopy completed)?  Is there any
> special device, like s390's CMMA (would that simplest cmdline include such
> a device; apologies, I have zero knowledge there before today)?
> 
> I _think_ when reading the code I already found something quite unusual,
> but only when postcopy is selected: I notice postcopy will frequently sync
> dirty bitmap while it doesn't really necessarily need to, because
> ram_state_pending_estimate() will report all ram as "can_postcopy"; it
> means it's highly likely that this check will 99.999% always be true simply
> because must_precopy can in most cases be zero:
> 
>     if (must_precopy <= s->threshold_size) { <---------------------------- here
>         qemu_savevm_state_pending_exact(&must_precopy, &can_postcopy);
>         pending_size = must_precopy + can_postcopy;
>         trace_migrate_pending_exact(pending_size, must_precopy, can_postcopy);
>     }
> 
> I need to think more of this, but this doesn't sound right at all.  There's
> no such issue with precopy-only, and I'm surprised it is like that for years.

It seems this can be a separate new bug, possibly introduced in the same
commit since 8.0.  I will post a patch for this soon.

One more thing to mention: I am aware Nicholas & Phil also hit some s390
TCG issues, and a fix landed just recently; I suspect that could also be
relevant. See:

https://lore.kernel.org/qemu-devel/20240312201458.79532-1-philmd@linaro.org/
03bfc2188f physmem: Fix migration dirty bitmap coherency with TCG memory access

I would suspect this issue reproduced more easily before that fix.  I think
Nicholas also mentioned there could be other bugs floating around:

https://lore.kernel.org/qemu-devel/CZSDDVZW4G3L.6CV89ZRMQK9G@wheely/

Let me add everyone into this loop.

Thanks,

Patch

diff --git a/migration/ram.c b/migration/ram.c
index c0cdcccb75..d5b7cd5ac2 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3213,21 +3213,20 @@  static void ram_state_pending_estimate(void *opaque, uint64_t *must_precopy,
 static void ram_state_pending_exact(void *opaque, uint64_t *must_precopy,
                                     uint64_t *can_postcopy)
 {
-    MigrationState *s = migrate_get_current();
     RAMState **temp = opaque;
     RAMState *rs = *temp;
+    uint64_t remaining_size;
 
-    uint64_t remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
-
-    if (!migration_in_postcopy() && remaining_size < s->threshold_size) {
+    if (!migration_in_postcopy()) {
         bql_lock();
         WITH_RCU_READ_LOCK_GUARD() {
             migration_bitmap_sync_precopy(rs, false);
         }
         bql_unlock();
-        remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
     }
 
+    remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
+
     if (migrate_postcopy_ram()) {
         /* We can do postcopy, and all the data is postcopiable */
         *can_postcopy += remaining_size;