
[05/14] migration: Yield bitmap_mutex properly when sending/sleeping

Message ID: 20220920225210.48732-1-peterx@redhat.com (mailing list archive)
State: New, archived
Series: migration: Postcopy Preempt-Full

Commit Message

Peter Xu Sept. 20, 2022, 10:52 p.m. UTC
Don't take the bitmap mutex when sending pages, or when being throttled by
migration_rate_limit() (which is a bit tricky to call here in the ram code,
but still seems helpful).

This prepares for the possibility of concurrently sending pages in more than
one thread using ram_save_host_page(): all threads may need the bitmap_mutex
to operate on the bitmaps, so a sendmsg() or any kind of qemu_sem_wait()
blocking one thread will not block the others from progressing.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/ram.c | 42 +++++++++++++++++++++++++++++++-----------
 1 file changed, 31 insertions(+), 11 deletions(-)

Comments

Dr. David Alan Gilbert Oct. 4, 2022, 1:55 p.m. UTC | #1
* Peter Xu (peterx@redhat.com) wrote:
> Don't take the bitmap mutex when sending pages, or when being throttled by
> migration_rate_limit() (which is a bit tricky to call it here in ram code,
> but seems still helpful).
> 
> It prepares for the possibility of concurrently sending pages in >1 threads
> using the function ram_save_host_page() because all threads may need the
> bitmap_mutex to operate on bitmaps, so that either sendmsg() or any kind of
> qemu_sem_wait() blocking for one thread will not block the other from
> progressing.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>

I generally don't like taking locks conditionally, but this kind of looks
OK; I think it needs a big comment at the start of the function saying
that it's called and left with the lock held, but that it might drop it
temporarily.
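
Something along these lines, perhaps (a hypothetical sketch of such a header
comment, not wording from the patch):

/*
 * ram_save_host_page: save the dirty pages of a whole host page
 *
 * Called with rs->bitmap_mutex held, and returns with it held.  When
 * postcopy preemption is active, the mutex may be dropped temporarily
 * around the actual page send so that the return-path thread can make
 * progress, but it is always re-taken before this function returns.
 */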

> ---
>  migration/ram.c | 42 +++++++++++++++++++++++++++++++-----------
>  1 file changed, 31 insertions(+), 11 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 8303252b6d..6e7de6087a 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -2463,6 +2463,7 @@ static void postcopy_preempt_reset_channel(RAMState *rs)
>   */
>  static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
>  {
> +    bool page_dirty, release_lock = postcopy_preempt_active();

Could you rename that to something like 'drop_lock'? You are taking the
lock at the end even when you have 'release_lock' set, which makes the
naming a bit strange.

>      int tmppages, pages = 0;
>      size_t pagesize_bits =
>          qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
> @@ -2486,22 +2487,41 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
>              break;
>          }
>  
> +        page_dirty = migration_bitmap_clear_dirty(rs, pss->block, pss->page);
> +        /*
> +         * Properly yield the lock only in postcopy preempt mode because
> +         * both migration thread and rp-return thread can operate on the
> +         * bitmaps.
> +         */
> +        if (release_lock) {
> +            qemu_mutex_unlock(&rs->bitmap_mutex);
> +        }

Shouldn't the unlock/lock move inside the 'if (page_dirty) {' ?
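
(For reference, the alternative being suggested would look roughly like the
sketch below; illustrative only, not code from the patch:)

        if (page_dirty) {
            /* only yield the mutex when there is actually a page to send */
            if (release_lock) {
                qemu_mutex_unlock(&rs->bitmap_mutex);
            }
            tmppages = ram_save_target_page(rs, pss);
            if (release_lock) {
                qemu_mutex_lock(&rs->bitmap_mutex);
            }
        } else {
            tmppages = 0;
        }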


>          /* Check the pages is dirty and if it is send it */
> -        if (migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
> +        if (page_dirty) {
>              tmppages = ram_save_target_page(rs, pss);
> -            if (tmppages < 0) {
> -                return tmppages;
> +            if (tmppages >= 0) {
> +                pages += tmppages;
> +                /*
> +                 * Allow rate limiting to happen in the middle of huge pages if
> +                 * something is sent in the current iteration.
> +                 */
> +                if (pagesize_bits > 1 && tmppages > 0) {
> +                    migration_rate_limit();

This feels interesting; I know it's no change from before, and it's
difficult to do here, but it seems odd to hold the lock around the
sleep in the rate limit.
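
One possible shape for that, purely as a sketch (in the patch as posted the
preempt case has already dropped the mutex by this point, so only the
non-preempt path would need it):

                if (pagesize_bits > 1 && tmppages > 0) {
                    if (!release_lock) {
                        /* avoid sleeping in migration_rate_limit() with the mutex held */
                        qemu_mutex_unlock(&rs->bitmap_mutex);
                    }
                    migration_rate_limit();
                    if (!release_lock) {
                        qemu_mutex_lock(&rs->bitmap_mutex);
                    }
                }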

Dave

> +                }
>              }
> +        } else {
> +            tmppages = 0;
> +        }
>  
> -            pages += tmppages;
> -            /*
> -             * Allow rate limiting to happen in the middle of huge pages if
> -             * something is sent in the current iteration.
> -             */
> -            if (pagesize_bits > 1 && tmppages > 0) {
> -                migration_rate_limit();
> -            }
> +        if (release_lock) {
> +            qemu_mutex_lock(&rs->bitmap_mutex);
>          }
> +
> +        if (tmppages < 0) {
> +            return tmppages;
> +        }
> +
>          pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
>      } while ((pss->page < hostpage_boundary) &&
>               offset_in_ramblock(pss->block,
> -- 
> 2.32.0
>
Peter Xu Oct. 4, 2022, 7:13 p.m. UTC | #2
On Tue, Oct 04, 2022 at 02:55:10PM +0100, Dr. David Alan Gilbert wrote:
> * Peter Xu (peterx@redhat.com) wrote:
> > Don't take the bitmap mutex when sending pages, or when being throttled by
> > migration_rate_limit() (which is a bit tricky to call it here in ram code,
> > but seems still helpful).
> > 
> > It prepares for the possibility of concurrently sending pages in >1 threads
> > using the function ram_save_host_page() because all threads may need the
> > bitmap_mutex to operate on bitmaps, so that either sendmsg() or any kind of
> > qemu_sem_wait() blocking for one thread will not block the other from
> > progressing.
> > 
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> 
> I generally dont like taking locks conditionally; but this kind of looks
> OK; I think it needs a big comment on the start of the function saying
> that it's called and left with the lock held but that it might drop it
> temporarily.

Right, the code is slightly hard to read; I just haven't seen a good and
easy solution for it yet.  It's just that we may still want to keep the
lock held as long as possible for precopy in one shot.

> 
> > ---
> >  migration/ram.c | 42 +++++++++++++++++++++++++++++++-----------
> >  1 file changed, 31 insertions(+), 11 deletions(-)
> > 
> > diff --git a/migration/ram.c b/migration/ram.c
> > index 8303252b6d..6e7de6087a 100644
> > --- a/migration/ram.c
> > +++ b/migration/ram.c
> > @@ -2463,6 +2463,7 @@ static void postcopy_preempt_reset_channel(RAMState *rs)
> >   */
> >  static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
> >  {
> > +    bool page_dirty, release_lock = postcopy_preempt_active();
> 
> Could you rename that to something like 'drop_lock' - you are taking the
> lock at the end even when you have 'release_lock' set - which is a bit
> strange naming.

Is there any difference between "drop" and "release"?  I'll change the name
anyway since I definitely trust you on any English comments, but please
still let me know - I love to learn more about those! :)

> 
> >      int tmppages, pages = 0;
> >      size_t pagesize_bits =
> >          qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
> > @@ -2486,22 +2487,41 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
> >              break;
> >          }
> >  
> > +        page_dirty = migration_bitmap_clear_dirty(rs, pss->block, pss->page);
> > +        /*
> > +         * Properly yield the lock only in postcopy preempt mode because
> > +         * both migration thread and rp-return thread can operate on the
> > +         * bitmaps.
> > +         */
> > +        if (release_lock) {
> > +            qemu_mutex_unlock(&rs->bitmap_mutex);
> > +        }
> 
> Shouldn't the unlock/lock move inside the 'if (page_dirty) {' ?

I think we can move it into there, but it may not be as optimal as keeping
it as-is.

Consider a case where the bitmap has a continuous run of zero bits.
During postcopy, the migration thread could be spinning here with the lock
held even though it doesn't send a thing.  It could still block the
return-path thread from sending urgent pages, which may lie outside the
zero zones.

> 
> 
> >          /* Check the pages is dirty and if it is send it */
> > -        if (migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
> > +        if (page_dirty) {
> >              tmppages = ram_save_target_page(rs, pss);
> > -            if (tmppages < 0) {
> > -                return tmppages;
> > +            if (tmppages >= 0) {
> > +                pages += tmppages;
> > +                /*
> > +                 * Allow rate limiting to happen in the middle of huge pages if
> > +                 * something is sent in the current iteration.
> > +                 */
> > +                if (pagesize_bits > 1 && tmppages > 0) {
> > +                    migration_rate_limit();
> 
> This feels interesting, I know it's no change from before, and it's
> difficult to do here, but it seems odd to hold the lock around the
> sleeping in the rate limit.

Good point.. I think I'll leave it as-is for this patch because it's
orthogonal to this change, but it seems proper in the future to do the
unlocking for normal precopy too.

Maybe I'll just attach a patch at the end of this series when I repost.
That'll be easier, before things get forgotten again.
Dr. David Alan Gilbert Oct. 5, 2022, 11:18 a.m. UTC | #3
* Peter Xu (peterx@redhat.com) wrote:
> On Tue, Oct 04, 2022 at 02:55:10PM +0100, Dr. David Alan Gilbert wrote:
> > * Peter Xu (peterx@redhat.com) wrote:
> > > Don't take the bitmap mutex when sending pages, or when being throttled by
> > > migration_rate_limit() (which is a bit tricky to call it here in ram code,
> > > but seems still helpful).
> > > 
> > > It prepares for the possibility of concurrently sending pages in >1 threads
> > > using the function ram_save_host_page() because all threads may need the
> > > bitmap_mutex to operate on bitmaps, so that either sendmsg() or any kind of
> > > qemu_sem_wait() blocking for one thread will not block the other from
> > > progressing.
> > > 
> > > Signed-off-by: Peter Xu <peterx@redhat.com>
> > 
> > I generally dont like taking locks conditionally; but this kind of looks
> > OK; I think it needs a big comment on the start of the function saying
> > that it's called and left with the lock held but that it might drop it
> > temporarily.
> 
> Right, the code is slightly hard to read, I just didn't yet see a good and
> easy solution for it yet.  It's just that we may still want to keep the
> lock as long as possible for precopy in one shot.
> 
> > 
> > > ---
> > >  migration/ram.c | 42 +++++++++++++++++++++++++++++++-----------
> > >  1 file changed, 31 insertions(+), 11 deletions(-)
> > > 
> > > diff --git a/migration/ram.c b/migration/ram.c
> > > index 8303252b6d..6e7de6087a 100644
> > > --- a/migration/ram.c
> > > +++ b/migration/ram.c
> > > @@ -2463,6 +2463,7 @@ static void postcopy_preempt_reset_channel(RAMState *rs)
> > >   */
> > >  static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
> > >  {
> > > +    bool page_dirty, release_lock = postcopy_preempt_active();
> > 
> > Could you rename that to something like 'drop_lock' - you are taking the
> > lock at the end even when you have 'release_lock' set - which is a bit
> > strange naming.
> 
> Is there any difference on "drop" or "release"?  I'll change the name
> anyway since I definitely trust you on any English comments, but please
> still let me know - I love to learn more on those! :)

I'm not sure 'drop' is much better either; I was struggling to find a
good name.

> > 
> > >      int tmppages, pages = 0;
> > >      size_t pagesize_bits =
> > >          qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
> > > @@ -2486,22 +2487,41 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
> > >              break;
> > >          }
> > >  
> > > +        page_dirty = migration_bitmap_clear_dirty(rs, pss->block, pss->page);
> > > +        /*
> > > +         * Properly yield the lock only in postcopy preempt mode because
> > > +         * both migration thread and rp-return thread can operate on the
> > > +         * bitmaps.
> > > +         */
> > > +        if (release_lock) {
> > > +            qemu_mutex_unlock(&rs->bitmap_mutex);
> > > +        }
> > 
> > Shouldn't the unlock/lock move inside the 'if (page_dirty) {' ?
> 
> I think we can move into it, but it may not be as optimal as keeping it
> as-is.
> 
> Consider a case where we've got the bitmap with continous zero bits.
> During postcopy, the migration thread could be spinning here with the lock
> held even if it doesn't send a thing.  It could still block the other
> return path thread on sending urgent pages which may be outside the zero
> zones.

OK, that reason needs commenting then - you're going to do a lot of
release/take pairs in that case, which is going to show up as very hot;
so hmm, if it was just for that type of 'yield' behaviour you wouldn't
normally do it for each bit.

> > 
> > 
> > >          /* Check the pages is dirty and if it is send it */
> > > -        if (migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
> > > +        if (page_dirty) {
> > >              tmppages = ram_save_target_page(rs, pss);
> > > -            if (tmppages < 0) {
> > > -                return tmppages;
> > > +            if (tmppages >= 0) {
> > > +                pages += tmppages;
> > > +                /*
> > > +                 * Allow rate limiting to happen in the middle of huge pages if
> > > +                 * something is sent in the current iteration.
> > > +                 */
> > > +                if (pagesize_bits > 1 && tmppages > 0) {
> > > +                    migration_rate_limit();
> > 
> > This feels interesting, I know it's no change from before, and it's
> > difficult to do here, but it seems odd to hold the lock around the
> > sleeping in the rate limit.
> 
> Good point.. I think I'll leave it there for this patch because it's
> totally irrelevant, but seems proper in the future to do unlocking too for
> normal precopy.
> 
> Maybe I'll just attach a patch at the end of this series when I repost.
> That'll be easier before things got forgotten again.

Dave

> -- 
> Peter Xu
>
Peter Xu Oct. 5, 2022, 1:40 p.m. UTC | #4
On Wed, Oct 05, 2022 at 12:18:00PM +0100, Dr. David Alan Gilbert wrote:
> * Peter Xu (peterx@redhat.com) wrote:
> > On Tue, Oct 04, 2022 at 02:55:10PM +0100, Dr. David Alan Gilbert wrote:
> > > * Peter Xu (peterx@redhat.com) wrote:
> > > > Don't take the bitmap mutex when sending pages, or when being throttled by
> > > > migration_rate_limit() (which is a bit tricky to call it here in ram code,
> > > > but seems still helpful).
> > > > 
> > > > It prepares for the possibility of concurrently sending pages in >1 threads
> > > > using the function ram_save_host_page() because all threads may need the
> > > > bitmap_mutex to operate on bitmaps, so that either sendmsg() or any kind of
> > > > qemu_sem_wait() blocking for one thread will not block the other from
> > > > progressing.
> > > > 
> > > > Signed-off-by: Peter Xu <peterx@redhat.com>
> > > 
> > > I generally dont like taking locks conditionally; but this kind of looks
> > > OK; I think it needs a big comment on the start of the function saying
> > > that it's called and left with the lock held but that it might drop it
> > > temporarily.
> > 
> > Right, the code is slightly hard to read, I just didn't yet see a good and
> > easy solution for it yet.  It's just that we may still want to keep the
> > lock as long as possible for precopy in one shot.
> > 
> > > 
> > > > ---
> > > >  migration/ram.c | 42 +++++++++++++++++++++++++++++++-----------
> > > >  1 file changed, 31 insertions(+), 11 deletions(-)
> > > > 
> > > > diff --git a/migration/ram.c b/migration/ram.c
> > > > index 8303252b6d..6e7de6087a 100644
> > > > --- a/migration/ram.c
> > > > +++ b/migration/ram.c
> > > > @@ -2463,6 +2463,7 @@ static void postcopy_preempt_reset_channel(RAMState *rs)
> > > >   */
> > > >  static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
> > > >  {
> > > > +    bool page_dirty, release_lock = postcopy_preempt_active();
> > > 
> > > Could you rename that to something like 'drop_lock' - you are taking the
> > > lock at the end even when you have 'release_lock' set - which is a bit
> > > strange naming.
> > 
> > Is there any difference on "drop" or "release"?  I'll change the name
> > anyway since I definitely trust you on any English comments, but please
> > still let me know - I love to learn more on those! :)
> 
> I'm not sure 'drop' is much better either; I was struggling to find a
> good nam.

I can also call it "preempt_enabled".

Actually I could directly replace it with calling postcopy_preempt_active()
everywhere, but I just want to make it crystal clear that the value is not
changing and that lock & unlock are always paired - in our case I think it
is not changing, but the variable helps to be 100% sure there'll be no
possible bug, e.g. a deadlock caused by the state changing.
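
As an illustration of that concern (a sketch, not code from the patch): if
the helper were called at each site and the preempt state flipped in
between, the unlock/lock could become unbalanced:

    /* Risky: the two checks may disagree if the state changes in between */
    if (postcopy_preempt_active()) {
        qemu_mutex_unlock(&rs->bitmap_mutex);
    }
    /* ... send the page ... */
    if (postcopy_preempt_active()) {
        qemu_mutex_lock(&rs->bitmap_mutex);
    }

    /* Latching the decision once keeps unlock and lock always paired */
    bool release_lock = postcopy_preempt_active();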

> 
> > > 
> > > >      int tmppages, pages = 0;
> > > >      size_t pagesize_bits =
> > > >          qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
> > > > @@ -2486,22 +2487,41 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
> > > >              break;
> > > >          }
> > > >  
> > > > +        page_dirty = migration_bitmap_clear_dirty(rs, pss->block, pss->page);
> > > > +        /*
> > > > +         * Properly yield the lock only in postcopy preempt mode because
> > > > +         * both migration thread and rp-return thread can operate on the
> > > > +         * bitmaps.
> > > > +         */
> > > > +        if (release_lock) {
> > > > +            qemu_mutex_unlock(&rs->bitmap_mutex);
> > > > +        }
> > > 
> > > Shouldn't the unlock/lock move inside the 'if (page_dirty) {' ?
> > 
> > I think we can move into it, but it may not be as optimal as keeping it
> > as-is.
> > 
> > Consider a case where we've got the bitmap with continous zero bits.
> > During postcopy, the migration thread could be spinning here with the lock
> > held even if it doesn't send a thing.  It could still block the other
> > return path thread on sending urgent pages which may be outside the zero
> > zones.
> 
> OK, that reason needs commenting then - you're going to do a lot of
> release/take pairs in that case which is going to show up as very hot;
> so hmm, if ti was just for that type of 'yield' behaviour you wouldn't
> normally do it for each bit.

Hold on.. I think my scenario won't easily trigger, because at the end of
the loop we look for the next "dirty" page.  So a run of clean pages is
unlikely - I'd even say impossible, because we're holding the mutex across
the scan and the clear-dirty, so no one can flip the bit in between.

So yeah, I think it's okay to move it into "page_dirty", but since we'll
almost always take the dirty path, it probably won't help a lot either,
because it'll be mostly the same as keeping it outside?
Peter Xu Oct. 5, 2022, 7:48 p.m. UTC | #5
On Wed, Oct 05, 2022 at 09:40:53AM -0400, Peter Xu wrote:
> On Wed, Oct 05, 2022 at 12:18:00PM +0100, Dr. David Alan Gilbert wrote:
> > * Peter Xu (peterx@redhat.com) wrote:
> > > On Tue, Oct 04, 2022 at 02:55:10PM +0100, Dr. David Alan Gilbert wrote:
> > > > * Peter Xu (peterx@redhat.com) wrote:
> > > > > Don't take the bitmap mutex when sending pages, or when being throttled by
> > > > > migration_rate_limit() (which is a bit tricky to call it here in ram code,
> > > > > but seems still helpful).
> > > > > 
> > > > > It prepares for the possibility of concurrently sending pages in >1 threads
> > > > > using the function ram_save_host_page() because all threads may need the
> > > > > bitmap_mutex to operate on bitmaps, so that either sendmsg() or any kind of
> > > > > qemu_sem_wait() blocking for one thread will not block the other from
> > > > > progressing.
> > > > > 
> > > > > Signed-off-by: Peter Xu <peterx@redhat.com>
> > > > 
> > > > I generally dont like taking locks conditionally; but this kind of looks
> > > > OK; I think it needs a big comment on the start of the function saying
> > > > that it's called and left with the lock held but that it might drop it
> > > > temporarily.
> > > 
> > > Right, the code is slightly hard to read, I just didn't yet see a good and
> > > easy solution for it yet.  It's just that we may still want to keep the
> > > lock as long as possible for precopy in one shot.
> > > 
> > > > 
> > > > > ---
> > > > >  migration/ram.c | 42 +++++++++++++++++++++++++++++++-----------
> > > > >  1 file changed, 31 insertions(+), 11 deletions(-)
> > > > > 
> > > > > diff --git a/migration/ram.c b/migration/ram.c
> > > > > index 8303252b6d..6e7de6087a 100644
> > > > > --- a/migration/ram.c
> > > > > +++ b/migration/ram.c
> > > > > @@ -2463,6 +2463,7 @@ static void postcopy_preempt_reset_channel(RAMState *rs)
> > > > >   */
> > > > >  static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
> > > > >  {
> > > > > +    bool page_dirty, release_lock = postcopy_preempt_active();
> > > > 
> > > > Could you rename that to something like 'drop_lock' - you are taking the
> > > > lock at the end even when you have 'release_lock' set - which is a bit
> > > > strange naming.
> > > 
> > > Is there any difference on "drop" or "release"?  I'll change the name
> > > anyway since I definitely trust you on any English comments, but please
> > > still let me know - I love to learn more on those! :)
> > 
> > I'm not sure 'drop' is much better either; I was struggling to find a
> > good nam.
> 
> I can also call it "preempt_enabled".
> 
> Actually I can directly replace it with calling postcopy_preempt_active()
> always but I just want to make it crystal clear that the value is not
> changing and lock & unlock are always paired - in our case I think it is
> not changing, but the var helps to be 100% sure there'll be no possible bug
> on e.g. deadlock caused by state changing.
> 
> > 
> > > > 
> > > > >      int tmppages, pages = 0;
> > > > >      size_t pagesize_bits =
> > > > >          qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
> > > > > @@ -2486,22 +2487,41 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
> > > > >              break;
> > > > >          }
> > > > >  
> > > > > +        page_dirty = migration_bitmap_clear_dirty(rs, pss->block, pss->page);
> > > > > +        /*
> > > > > +         * Properly yield the lock only in postcopy preempt mode because
> > > > > +         * both migration thread and rp-return thread can operate on the
> > > > > +         * bitmaps.
> > > > > +         */
> > > > > +        if (release_lock) {
> > > > > +            qemu_mutex_unlock(&rs->bitmap_mutex);
> > > > > +        }
> > > > 
> > > > Shouldn't the unlock/lock move inside the 'if (page_dirty) {' ?
> > > 
> > > I think we can move into it, but it may not be as optimal as keeping it
> > > as-is.
> > > 
> > > Consider a case where we've got the bitmap with continous zero bits.
> > > During postcopy, the migration thread could be spinning here with the lock
> > > held even if it doesn't send a thing.  It could still block the other
> > > return path thread on sending urgent pages which may be outside the zero
> > > zones.
> > 
> > OK, that reason needs commenting then - you're going to do a lot of
> > release/take pairs in that case which is going to show up as very hot;
> > so hmm, if ti was just for that type of 'yield' behaviour you wouldn't
> > normally do it for each bit.
> 
> Hold on.. I think my assumption won't easily trigger, because at the end of
> the loop we'll try to look for the next "dirty" page.  So continuously
> clean pages are unlikely, or I even think it's impossible because we're
> holding the mutex during scanning and clear-dirty, so no one will be able
> to flip the bit.
> 
> So yeah I think it's okay to move it into "page_dirty", but since we'll
> mostly always go into dirty maybe it's just that it won't help a lot
> either, because it'll be mostly the same as keeping it outside?

IOW, maybe I should drop page_dirty entirely and replace it with a check
that fails migration if migration_bitmap_clear_dirty() returns false?
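
That alternative might look something like the sketch below (illustrative
only; the error value is a placeholder):

        /*
         * The loop only lands on pages that migration_bitmap_find_dirty()
         * reported as dirty, and the mutex is held across the find and the
         * clear, so a clear that returns false would indicate a bug rather
         * than a benign race.
         */
        if (!migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
            return -EINVAL;
        }
        tmppages = ram_save_target_page(rs, pss);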

Patch

diff --git a/migration/ram.c b/migration/ram.c
index 8303252b6d..6e7de6087a 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2463,6 +2463,7 @@  static void postcopy_preempt_reset_channel(RAMState *rs)
  */
 static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
 {
+    bool page_dirty, release_lock = postcopy_preempt_active();
     int tmppages, pages = 0;
     size_t pagesize_bits =
         qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
@@ -2486,22 +2487,41 @@  static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
             break;
         }
 
+        page_dirty = migration_bitmap_clear_dirty(rs, pss->block, pss->page);
+        /*
+         * Properly yield the lock only in postcopy preempt mode because
+         * both migration thread and rp-return thread can operate on the
+         * bitmaps.
+         */
+        if (release_lock) {
+            qemu_mutex_unlock(&rs->bitmap_mutex);
+        }
+
         /* Check the pages is dirty and if it is send it */
-        if (migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
+        if (page_dirty) {
             tmppages = ram_save_target_page(rs, pss);
-            if (tmppages < 0) {
-                return tmppages;
+            if (tmppages >= 0) {
+                pages += tmppages;
+                /*
+                 * Allow rate limiting to happen in the middle of huge pages if
+                 * something is sent in the current iteration.
+                 */
+                if (pagesize_bits > 1 && tmppages > 0) {
+                    migration_rate_limit();
+                }
             }
+        } else {
+            tmppages = 0;
+        }
 
-            pages += tmppages;
-            /*
-             * Allow rate limiting to happen in the middle of huge pages if
-             * something is sent in the current iteration.
-             */
-            if (pagesize_bits > 1 && tmppages > 0) {
-                migration_rate_limit();
-            }
+        if (release_lock) {
+            qemu_mutex_lock(&rs->bitmap_mutex);
         }
+
+        if (tmppages < 0) {
+            return tmppages;
+        }
+
         pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
     } while ((pss->page < hostpage_boundary) &&
              offset_in_ramblock(pss->block,