Message ID | 1520692378-1835-1-git-send-email-lidongchen@tencent.com (mailing list archive) |
---|---|
State | New, archived |
Ping.

On Sat, Mar 10, 2018 at 10:32 PM, Lidong Chen <jemmy858585@gmail.com> wrote:
> The RDMA migration implements the save_page function for QEMUFile, but
> ram_control_save_page does not increase bytes_xfer. So when doing
> RDMA migration, it will use the whole bandwidth.
>
> Signed-off-by: Lidong Chen <lidongchen@tencent.com>
> ---
>  migration/qemu-file.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/migration/qemu-file.c b/migration/qemu-file.c
> index 2ab2bf3..217609d 100644
> --- a/migration/qemu-file.c
> +++ b/migration/qemu-file.c
> @@ -253,7 +253,7 @@ size_t ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
>      if (f->hooks && f->hooks->save_page) {
>          int ret = f->hooks->save_page(f, f->opaque, block_offset,
>                                        offset, size, bytes_sent);
> -
> +        f->bytes_xfer += size;
>          if (ret != RAM_SAVE_CONTROL_DELAYED) {
>              if (bytes_sent && *bytes_sent > 0) {
>                  qemu_update_position(f, *bytes_sent);
> --
> 1.8.3.1
>
* Lidong Chen (jemmy858585@gmail.com) wrote:
> The RDMA migration implements the save_page function for QEMUFile, but
> ram_control_save_page does not increase bytes_xfer. So when doing
> RDMA migration, it will use the whole bandwidth.

Hi,
  Thanks for this,

> Signed-off-by: Lidong Chen <lidongchen@tencent.com>
> ---
>  migration/qemu-file.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/migration/qemu-file.c b/migration/qemu-file.c
> index 2ab2bf3..217609d 100644
> --- a/migration/qemu-file.c
> +++ b/migration/qemu-file.c
> @@ -253,7 +253,7 @@ size_t ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
>      if (f->hooks && f->hooks->save_page) {
>          int ret = f->hooks->save_page(f, f->opaque, block_offset,
>                                        offset, size, bytes_sent);
> -
> +        f->bytes_xfer += size;

I'm a bit confused, because I know rdma.c calls acct_update_position(),
and I'd always thought that was enough.
That calls qemu_update_position(...), which increases f->pos but not
f->bytes_xfer.

f->pos is used to calculate the 'transferred' value in
migration_update_counters, and thus the current bandwidth and downtime -
but, as you say, not the rate limit.

So really, should this f->bytes_xfer += size go in
qemu_update_position?

Juan: I'm not sure I know why we have both bytes_xfer and pos.

Dave

>          if (ret != RAM_SAVE_CONTROL_DELAYED) {
>              if (bytes_sent && *bytes_sent > 0) {
>                  qemu_update_position(f, *bytes_sent);
> --
> 1.8.3.1
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
On Thu, Mar 15, 2018 at 4:19 AM, Dr. David Alan Gilbert <dgilbert@redhat.com> wrote:
> * Lidong Chen (jemmy858585@gmail.com) wrote:
>> The RDMA migration implements the save_page function for QEMUFile, but
>> ram_control_save_page does not increase bytes_xfer. So when doing
>> RDMA migration, it will use the whole bandwidth.
>
> Hi,
>   Thanks for this,
>
>> Signed-off-by: Lidong Chen <lidongchen@tencent.com>
>> ---
>>  migration/qemu-file.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/migration/qemu-file.c b/migration/qemu-file.c
>> index 2ab2bf3..217609d 100644
>> --- a/migration/qemu-file.c
>> +++ b/migration/qemu-file.c
>> @@ -253,7 +253,7 @@ size_t ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
>>      if (f->hooks && f->hooks->save_page) {
>>          int ret = f->hooks->save_page(f, f->opaque, block_offset,
>>                                        offset, size, bytes_sent);
>> -
>> +        f->bytes_xfer += size;
>
> I'm a bit confused, because I know rdma.c calls acct_update_position(),
> and I'd always thought that was enough.
> That calls qemu_update_position(...), which increases f->pos but not
> f->bytes_xfer.
>
> f->pos is used to calculate the 'transferred' value in
> migration_update_counters, and thus the current bandwidth and downtime -
> but, as you say, not the rate limit.
>
> So really, should this f->bytes_xfer += size go in
> qemu_update_position?

For TCP migration, bytes_xfer is updated before qemu_fflush(f), which
actually sends the data. But qemu_update_position is invoked by
qemu_rdma_write_one, which runs after the call to ibv_post_send. And
qemu_rdma_save_page is asynchronous; it may merge pages. I think it is
safer to apply the rate limit before sending the data.

>
> Juan: I'm not sure I know why we have both bytes_xfer and pos.

Maybe the reason is that bytes_xfer is updated before the data is sent,
and bytes_xfer is reset by migration_update_counters.

>
> Dave
>
>>          if (ret != RAM_SAVE_CONTROL_DELAYED) {
>>              if (bytes_sent && *bytes_sent > 0) {
>>                  qemu_update_position(f, *bytes_sent);
>> --
>> 1.8.3.1
>>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
Ping.

On Thu, Mar 15, 2018 at 1:33 PM, 858585 jemmy <jemmy858585@gmail.com> wrote:
> On Thu, Mar 15, 2018 at 4:19 AM, Dr. David Alan Gilbert
> <dgilbert@redhat.com> wrote:
>> * Lidong Chen (jemmy858585@gmail.com) wrote:
>>> The RDMA migration implements the save_page function for QEMUFile, but
>>> ram_control_save_page does not increase bytes_xfer. So when doing
>>> RDMA migration, it will use the whole bandwidth.
>>
>> Hi,
>>   Thanks for this,
>>
>>> Signed-off-by: Lidong Chen <lidongchen@tencent.com>
>>> ---
>>>  migration/qemu-file.c | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/migration/qemu-file.c b/migration/qemu-file.c
>>> index 2ab2bf3..217609d 100644
>>> --- a/migration/qemu-file.c
>>> +++ b/migration/qemu-file.c
>>> @@ -253,7 +253,7 @@ size_t ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
>>>      if (f->hooks && f->hooks->save_page) {
>>>          int ret = f->hooks->save_page(f, f->opaque, block_offset,
>>>                                        offset, size, bytes_sent);
>>> -
>>> +        f->bytes_xfer += size;
>>
>> I'm a bit confused, because I know rdma.c calls acct_update_position(),
>> and I'd always thought that was enough.
>> That calls qemu_update_position(...), which increases f->pos but not
>> f->bytes_xfer.
>>
>> f->pos is used to calculate the 'transferred' value in
>> migration_update_counters, and thus the current bandwidth and downtime -
>> but, as you say, not the rate limit.
>>
>> So really, should this f->bytes_xfer += size go in
>> qemu_update_position?
>
> For TCP migration, bytes_xfer is updated before qemu_fflush(f), which
> actually sends the data. But qemu_update_position is invoked by
> qemu_rdma_write_one, which runs after the call to ibv_post_send. And
> qemu_rdma_save_page is asynchronous; it may merge pages. I think it is
> safer to apply the rate limit before sending the data.
>
>>
>> Juan: I'm not sure I know why we have both bytes_xfer and pos.
>
> Maybe the reason is that bytes_xfer is updated before the data is sent,
> and bytes_xfer is reset by migration_update_counters.
>
>>
>> Dave
>>
>>>          if (ret != RAM_SAVE_CONTROL_DELAYED) {
>>>              if (bytes_sent && *bytes_sent > 0) {
>>>                  qemu_update_position(f, *bytes_sent);
>>> --
>>> 1.8.3.1
>>>
>> --
>> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
Lidong Chen <jemmy858585@gmail.com> wrote:
> The RDMA migration implements the save_page function for QEMUFile, but
> ram_control_save_page does not increase bytes_xfer. So when doing
> RDMA migration, it will use the whole bandwidth.
>
> Signed-off-by: Lidong Chen <lidongchen@tencent.com>

Reviewed-by: Juan Quintela <quintela@redhat.com>

This part of the code is a mess.

To answer David:
- pos: where we need to write that bit of stuff
- bytes_xfer: how much we have written

When we are doing snapshots on qcow2, we store memory in a contiguous
piece of memory, so we can "overwrite" that "page" if a new version
comes. Nothing else (except the block layer) uses the "pos" parameter,
so we cannot trust it.

And that was from a fast look at the code, which got me really confused
(again).

> ---
>  migration/qemu-file.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/migration/qemu-file.c b/migration/qemu-file.c
> index 2ab2bf3..217609d 100644
> --- a/migration/qemu-file.c
> +++ b/migration/qemu-file.c
> @@ -253,7 +253,7 @@ size_t ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
>      if (f->hooks && f->hooks->save_page) {
>          int ret = f->hooks->save_page(f, f->opaque, block_offset,
>                                        offset, size, bytes_sent);
> -
> +        f->bytes_xfer += size;
>          if (ret != RAM_SAVE_CONTROL_DELAYED) {
>              if (bytes_sent && *bytes_sent > 0) {
>                  qemu_update_position(f, *bytes_sent);
On Wed, Mar 21, 2018 at 2:19 AM, Juan Quintela <quintela@redhat.com> wrote:
> Lidong Chen <jemmy858585@gmail.com> wrote:
>> The RDMA migration implements the save_page function for QEMUFile, but
>> ram_control_save_page does not increase bytes_xfer. So when doing
>> RDMA migration, it will use the whole bandwidth.
>>
>> Signed-off-by: Lidong Chen <lidongchen@tencent.com>
>
> Reviewed-by: Juan Quintela <quintela@redhat.com>
>
> This part of the code is a mess.
>
> To answer David:
> - pos: where we need to write that bit of stuff
> - bytes_xfer: how much we have written
>
> When we are doing snapshots on qcow2, we store memory in a contiguous
> piece of memory, so we can "overwrite" that "page" if a new version
> comes. Nothing else (except the block layer) uses the "pos" parameter,
> so we cannot trust it.
>
> And that was from a fast look at the code, which got me really confused
> (again).

Hi Juan:
What is the problem?
Thanks.

>
>> ---
>>  migration/qemu-file.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/migration/qemu-file.c b/migration/qemu-file.c
>> index 2ab2bf3..217609d 100644
>> --- a/migration/qemu-file.c
>> +++ b/migration/qemu-file.c
>> @@ -253,7 +253,7 @@ size_t ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
>>      if (f->hooks && f->hooks->save_page) {
>>          int ret = f->hooks->save_page(f, f->opaque, block_offset,
>>                                        offset, size, bytes_sent);
>> -
>> +        f->bytes_xfer += size;
>>          if (ret != RAM_SAVE_CONTROL_DELAYED) {
>>              if (bytes_sent && *bytes_sent > 0) {
>>                  qemu_update_position(f, *bytes_sent);
* Lidong Chen (jemmy858585@gmail.com) wrote:
> The RDMA migration implements the save_page function for QEMUFile, but
> ram_control_save_page does not increase bytes_xfer. So when doing
> RDMA migration, it will use the whole bandwidth.
>
> Signed-off-by: Lidong Chen <lidongchen@tencent.com>

Queued

> ---
>  migration/qemu-file.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/migration/qemu-file.c b/migration/qemu-file.c
> index 2ab2bf3..217609d 100644
> --- a/migration/qemu-file.c
> +++ b/migration/qemu-file.c
> @@ -253,7 +253,7 @@ size_t ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
>      if (f->hooks && f->hooks->save_page) {
>          int ret = f->hooks->save_page(f, f->opaque, block_offset,
>                                        offset, size, bytes_sent);
> -
> +        f->bytes_xfer += size;
>          if (ret != RAM_SAVE_CONTROL_DELAYED) {
>              if (bytes_sent && *bytes_sent > 0) {
>                  qemu_update_position(f, *bytes_sent);
> --
> 1.8.3.1
>

--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index 2ab2bf3..217609d 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -253,7 +253,7 @@ size_t ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
     if (f->hooks && f->hooks->save_page) {
         int ret = f->hooks->save_page(f, f->opaque, block_offset,
                                       offset, size, bytes_sent);
-
+        f->bytes_xfer += size;
         if (ret != RAM_SAVE_CONTROL_DELAYED) {
             if (bytes_sent && *bytes_sent > 0) {
                 qemu_update_position(f, *bytes_sent);
The RDMA migration implements the save_page function for QEMUFile, but
ram_control_save_page does not increase bytes_xfer. So when doing RDMA
migration, it will use the whole bandwidth.

Signed-off-by: Lidong Chen <lidongchen@tencent.com>
---
 migration/qemu-file.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)