[PULL,2/5] migration: Fix rate limiting issue on RDMA migration

Message ID 20180323202344.70640-3-dgilbert@redhat.com (mailing list archive)
State New, archived

Commit Message

Dr. David Alan Gilbert March 23, 2018, 8:23 p.m. UTC
From: Lidong Chen <jemmy858585@gmail.com>

RDMA migration implements the save_page function for QEMUFile, but
ram_control_save_page does not increase bytes_xfer.  So when doing
RDMA migration, the rate limit is never reached and the whole
bandwidth is used.
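
For context, the throttling involved here is a per-period byte budget:
ordinary writes bump a counter, and the sender stops once the counter
passes the limit.  The toy model below is purely illustrative; the names
RateLimitedFile, account_write and rate_limit_exceeded are made up for
this sketch and only mirror the bytes_xfer/xfer_limit fields used in
migration/qemu-file.c:

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative model of QEMUFile rate limiting, not the real code. */
    typedef struct {
        int64_t bytes_xfer;   /* bytes accounted in the current period */
        int64_t xfer_limit;   /* allowed bytes per period; 0 = unlimited */
    } RateLimitedFile;

    /* Ordinary writes charge the budget so the check below can throttle. */
    static void account_write(RateLimitedFile *f, size_t size)
    {
        f->bytes_xfer += size;
    }

    /* True once the budget for the current period is exhausted; the
     * migration thread then stops sending until the limit is reset. */
    static bool rate_limit_exceeded(const RateLimitedFile *f)
    {
        return f->xfer_limit > 0 && f->bytes_xfer > f->xfer_limit;
    }

Before this patch, pages sent through the RDMA save_page hook never went
through anything equivalent to account_write, so the check never tripped.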

Signed-off-by: Lidong Chen <lidongchen@tencent.com>
Message-Id: <1520692378-1835-1-git-send-email-lidongchen@tencent.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/qemu-file.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Patch

diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index e85f501f86..bb63c779cc 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -253,7 +253,7 @@  size_t ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
     if (f->hooks && f->hooks->save_page) {
         int ret = f->hooks->save_page(f, f->opaque, block_offset,
                                       offset, size, bytes_sent);
-
+        f->bytes_xfer += size;
         if (ret != RAM_SAVE_CONTROL_DELAYED) {
             if (bytes_sent && *bytes_sent > 0) {
                 qemu_update_position(f, *bytes_sent);
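
With the hunk above applied, every page pushed through the RDMA hook is
charged against the same budget as a normal write.  A short driver for
the toy model sketched in the commit message shows the effect; the page
size and limit values are arbitrary and only serve the illustration:

    #include <stdio.h>

    /* Reuses the illustrative RateLimitedFile model from the sketch above. */
    int main(void)
    {
        RateLimitedFile f = { .bytes_xfer = 0, .xfer_limit = 4 * 4096 };
        int page;

        for (page = 0; page < 8; page++) {
            if (rate_limit_exceeded(&f)) {
                break;              /* sender yields until the next period */
            }
            /* Mirrors the added line: f->bytes_xfer += size; */
            account_write(&f, 4096);
        }
        printf("sent %d pages before throttling\n", page);
        return 0;
    }

Without the accounting call, the loop would never break and all eight
pages would be sent in one period, which is the unthrottled behaviour the
patch fixes for RDMA.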