From patchwork Fri Jun 3 07:52:40 2016
X-Patchwork-Submitter: Zhanghailiang
X-Patchwork-Id: 9152085
From: zhanghailiang
Date: Fri, 3 Jun 2016 15:52:40 +0800
Message-ID: <1464940366-9880-29-git-send-email-zhang.zhanghailiang@huawei.com>
In-Reply-To: <1464940366-9880-1-git-send-email-zhang.zhanghailiang@huawei.com>
References: <1464940366-9880-1-git-send-email-zhang.zhanghailiang@huawei.com>
Subject: [Qemu-devel] [PATCH COLO-Frame v17 28/34] COLO: Separate the
 process of saving/loading ram and device state
Cc: xiecl.fnst@cn.fujitsu.com, lizhijian@cn.fujitsu.com,
 yunhong.jiang@intel.com, eddie.dong@intel.com, peter.huangpeng@huawei.com,
 zhanghailiang,
 arei.gonglei@huawei.com, stefanha@redhat.com, zhangchen.fnst@cn.fujitsu.com,
 hongyang.yang@easystack.cn

We separate the process of saving/loading RAM and device state when doing
a checkpoint, and add new helpers to save/load RAM and device state
separately. With this change, we can transfer RAM directly from the
primary side to the secondary side without using the channel buffer as an
intermediary, which also reduces the amount of extra memory used during a
checkpoint.

Additionally, we move colo_flush_ram_cache() to the proper position to
match the above change.

Signed-off-by: zhanghailiang
Signed-off-by: Li Zhijian
Reviewed-by: Dr. David Alan Gilbert
---
v16:
- Add Reviewed-by tag
v14:
- Split two new patches from this patch
- Some minor fixes from Dave
v13:
- Re-use some existing helper functions to implement saving/loading ram
  and device state
v11:
- Remove the load-configuration section in qemu_loadvm_state_begin()
---
 migration/colo.c   | 48 ++++++++++++++++++++++++++++++++++++++----------
 migration/ram.c    |  5 -----
 migration/savevm.c |  4 ++++
 3 files changed, 42 insertions(+), 15 deletions(-)

diff --git a/migration/colo.c b/migration/colo.c
index 16f402f..5641031 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -284,21 +284,37 @@ static int colo_do_checkpoint_transaction(MigrationState *s,
         goto out;
     }
 
+    colo_send_message(s->to_dst_file, COLO_MESSAGE_VMSTATE_SEND, &local_err);
+    if (local_err) {
+        goto out;
+    }
+
     /* Disable block migration */
     s->params.blk = 0;
     s->params.shared = 0;
-    qemu_savevm_state_header(fb);
-    qemu_savevm_state_begin(fb, &s->params);
+    qemu_savevm_state_begin(s->to_dst_file, &s->params);
+    ret = qemu_file_get_error(s->to_dst_file);
+    if (ret < 0) {
+        error_report("Save vm state begin error");
+        goto out;
+    }
+
     qemu_mutex_lock_iothread();
-    qemu_savevm_state_complete_precopy(fb, false);
+    /*
+     * Only save VM's live state, which does not include device state.
+     * TODO: We may need a timeout mechanism to prevent the COLO process
+     * from being blocked here.
+     */
+    qemu_savevm_live_state(s->to_dst_file);
+    /* Note: device state is saved into buffer */
+    ret = qemu_save_device_state(fb);
     qemu_mutex_unlock_iothread();
-
-    qemu_fflush(fb);
-
-    colo_send_message(s->to_dst_file, COLO_MESSAGE_VMSTATE_SEND, &local_err);
-    if (local_err) {
+    if (ret < 0) {
+        error_report("Save device state error");
         goto out;
     }
+    qemu_fflush(fb);
+
     /*
      * We need the size of the VMstate data in Secondary side,
      * With which we can decide how much data should be read.
@@ -565,6 +581,16 @@ void *colo_process_incoming_thread(void *opaque)
             goto out;
         }
 
+        ret = qemu_loadvm_state_begin(mis->from_src_file);
+        if (ret < 0) {
+            error_report("Load vm state begin error, ret=%d", ret);
+            goto out;
+        }
+        ret = qemu_loadvm_state_main(mis->from_src_file, mis);
+        if (ret < 0) {
+            error_report("Load VM's live state (ram) error");
+            goto out;
+        }
         /* read the VM state total size first */
         value = colo_receive_message_value(mis->from_src_file,
                                  COLO_MESSAGE_VMSTATE_SIZE, &local_err);
@@ -600,8 +626,10 @@ void *colo_process_incoming_thread(void *opaque)
         qemu_mutex_lock_iothread();
         qemu_system_reset(VMRESET_SILENT);
         vmstate_loading = true;
-        if (qemu_loadvm_state(fb) < 0) {
-            error_report("COLO: loadvm failed");
+        colo_flush_ram_cache();
+        ret = qemu_load_device_state(fb);
+        if (ret < 0) {
+            error_report("COLO: load device state failed");
             qemu_mutex_unlock_iothread();
             goto out;
         }
diff --git a/migration/ram.c b/migration/ram.c
index 91d1287..34aa87e 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2466,7 +2466,6 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
      * be atomic
      */
     bool postcopy_running = postcopy_state_get() >= POSTCOPY_INCOMING_LISTENING;
-    bool need_flush = false;
 
     seq_iter++;
 
@@ -2501,7 +2500,6 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
             /* After going into COLO, we should load the Page into colo_cache */
             if (ram_cache_enable) {
                 host = colo_cache_from_block_offset(block, addr);
-                need_flush = true;
             } else {
                 host = host_from_ram_block_offset(block, addr);
             }
@@ -2595,9 +2593,6 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
 
     rcu_read_unlock();
 
-    if (!ret && ram_cache_enable && need_flush) {
-        colo_flush_ram_cache();
-    }
     DPRINTF("Completed load of VM with exit code %d seq iteration "
             "%" PRIu64 "\n", ret, seq_iter);
     return ret;
diff --git a/migration/savevm.c b/migration/savevm.c
index 55a2eab..41ea2bd 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -911,6 +911,10 @@ void qemu_savevm_state_begin(QEMUFile *f,
             break;
         }
     }
+    if (migration_in_colo_state()) {
+        qemu_put_byte(f, QEMU_VM_EOF);
+        qemu_fflush(f);
+    }
 }
 
 /*
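
[Editor's note] For readers following the control flow, below is a minimal
sketch of the primary-side checkpoint ordering that this patch establishes.
It only uses calls that appear in the hunks above; the wrapper function name
colo_checkpoint_flow_sketch is hypothetical, and iothread locking plus most
error paths are elided for brevity. It is an illustration, not code from the
patch.

/*
 * Hypothetical sketch (not part of this patch) of the new primary-side
 * checkpoint flow. 's->to_dst_file' goes directly to the secondary;
 * 'fb' is the remaining channel buffer, now used only for device state.
 */
static void colo_checkpoint_flow_sketch(MigrationState *s, QEMUFile *fb,
                                        Error **errp)
{
    /* Tell the secondary side that VM state is about to be sent. */
    colo_send_message(s->to_dst_file, COLO_MESSAGE_VMSTATE_SEND, errp);

    /* RAM (live state) now streams straight to the secondary side,
     * with no channel buffer acting as an intermediary. */
    qemu_savevm_state_begin(s->to_dst_file, &s->params);
    qemu_savevm_live_state(s->to_dst_file);

    /* Device state is still staged in 'fb', because the secondary
     * must learn its total size before reading it. */
    qemu_save_device_state(fb);
    qemu_fflush(fb);
}

The secondary side mirrors this ordering, as the colo_process_incoming_thread
hunks show: qemu_loadvm_state_begin() and qemu_loadvm_state_main() load the
RAM into the colo cache, then colo_flush_ram_cache() and
qemu_load_device_state() apply the cached RAM and the buffered device state.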