From patchwork Fri Jun 3 07:52:22 2016
X-Patchwork-Submitter: Zhanghailiang
X-Patchwork-Id: 9152065
From: zhanghailiang
Date: Fri, 3 Jun 2016 15:52:22 +0800
Message-ID: <1464940366-9880-11-git-send-email-zhang.zhanghailiang@huawei.com>
In-Reply-To: <1464940366-9880-1-git-send-email-zhang.zhanghailiang@huawei.com>
References: <1464940366-9880-1-git-send-email-zhang.zhanghailiang@huawei.com>
Subject: [Qemu-devel] [PATCH COLO-Frame v17 10/34] COLO: Load PVM's dirty pages into SVM's RAM cache temporarily
Cc: xiecl.fnst@cn.fujitsu.com, lizhijian@cn.fujitsu.com,
 yunhong.jiang@intel.com, eddie.dong@intel.com, peter.huangpeng@huawei.com,
 zhanghailiang, arei.gonglei@huawei.com, stefanha@redhat.com,
 zhangchen.fnst@cn.fujitsu.com, hongyang.yang@easystack.cn

We should not load PVM's state directly into SVM, because errors may
happen while SVM is receiving the data, and that would break SVM. We
need to ensure that all of the data has been received before loading
the state into SVM, so we use extra memory to cache the data (PVM's RAM).

The RAM cache on the secondary side is initially the same as SVM/PVM's
memory. During each checkpoint we first cache PVM's dirty pages in this
RAM cache, so the RAM cache is always identical to PVM's memory at every
checkpoint; we then flush the cached RAM into SVM after all of PVM's
state has been received.

Signed-off-by: zhanghailiang
Signed-off-by: Li Zhijian
Signed-off-by: Gonglei
Reviewed-by: Dr. David Alan Gilbert
---
v12:
- Fix minor error in error_report (Dave's comment)
- Add Reviewed-by tag
v11:
- Rename 'host_cache' to 'colo_cache' (Dave's suggestion)
v10:
- Split the process of dirty pages recording into a new patch
---
 include/exec/ram_addr.h       |  1 +
 include/migration/migration.h |  4 +++
 migration/colo.c              | 11 +++++++
 migration/ram.c               | 73 ++++++++++++++++++++++++++++++++++++++++++-
 4 files changed, 88 insertions(+), 1 deletion(-)
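As a quick orientation before the diff, here is a minimal, self-contained
sketch of the idea described above. It is not part of the patch and does not
use QEMU's APIs; the names (ToyBlock, toy_init_cache, toy_load_page,
toy_flush_cache) are invented for illustration only. The point it shows:
incoming checkpoint pages are written into a cache instead of into the
running SVM's memory, and are only copied into guest RAM once the whole
checkpoint has arrived (the flush itself is not added by this patch).

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* One guest RAM block: 'host' is the SVM's live memory, 'cache' is the
 * extra copy that incoming checkpoint pages are written into. */
typedef struct ToyBlock {
    uint8_t *host;
    uint8_t *cache;
    size_t len;
} ToyBlock;

static bool cache_enabled;

/* After the initial full migration the cache starts out identical to the
 * SVM's (and therefore the PVM's) memory. */
static int toy_init_cache(ToyBlock *b)
{
    b->cache = malloc(b->len);
    if (!b->cache) {
        return -1;
    }
    memcpy(b->cache, b->host, b->len);
    cache_enabled = true;
    return 0;
}

/* While a checkpoint is being received, incoming pages go into the cache,
 * never directly into 'host', so a half-received checkpoint cannot
 * corrupt the running SVM. */
static void toy_load_page(ToyBlock *b, size_t offset,
                          const uint8_t *data, size_t size)
{
    uint8_t *dst = cache_enabled ? b->cache : b->host;
    memcpy(dst + offset, data, size);
}

/* Only once the whole checkpoint has arrived is the cache copied into the
 * SVM's memory (this flush step is not part of this patch). */
static void toy_flush_cache(ToyBlock *b)
{
    memcpy(b->host, b->cache, b->len);
}

int main(void)
{
    static uint8_t guest[4096];
    ToyBlock b = { .host = guest, .cache = NULL, .len = sizeof(guest) };
    uint8_t page[16] = { 0xab };

    if (toy_init_cache(&b) < 0) {
        return 1;
    }
    toy_load_page(&b, 0, page, sizeof(page)); /* checkpoint in progress */
    toy_flush_cache(&b);                      /* checkpoint complete */
    free(b.cache);
    return 0;
}

In the patch below, colo_init_ram_cache() plays the role of toy_init_cache(),
the ram_cache_enable check in ram_load() makes colo_cache_from_block_offset()
direct incoming pages into the cache, and colo_release_ram_cache() frees the
cache again.
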
diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 2a9465d..b4c04fb 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -26,6 +26,7 @@ struct RAMBlock {
     struct rcu_head rcu;
     struct MemoryRegion *mr;
     uint8_t *host;
+    uint8_t *colo_cache; /* For colo, VM's ram cache */
     ram_addr_t offset;
     ram_addr_t used_length;
     ram_addr_t max_length;
diff --git a/include/migration/migration.h b/include/migration/migration.h
index 55a2df6..5cd1ff1 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -360,4 +360,8 @@ int ram_save_queue_pages(MigrationState *ms, const char *rbname,
 PostcopyState postcopy_state_get(void);
 /* Set the state and return the old state */
 PostcopyState postcopy_state_set(PostcopyState new_state);
+
+/* ram cache */
+int colo_init_ram_cache(void);
+void colo_release_ram_cache(void);
 #endif
diff --git a/migration/colo.c b/migration/colo.c
index 8ef1a22..aa8c7e1 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -287,6 +287,7 @@ void *colo_process_incoming_thread(void *opaque)
 {
     MigrationIncomingState *mis = opaque;
     Error *local_err = NULL;
+    int ret;
 
     migrate_set_state(&mis->state, MIGRATION_STATUS_ACTIVE,
                       MIGRATION_STATUS_COLO);
@@ -303,6 +304,12 @@ void *colo_process_incoming_thread(void *opaque)
      */
     qemu_file_set_blocking(mis->from_src_file, true);
 
+    ret = colo_init_ram_cache();
+    if (ret < 0) {
+        error_report("Failed to initialize ram cache");
+        goto out;
+    }
+
     colo_send_message(mis->to_src_file, COLO_MESSAGE_CHECKPOINT_READY,
                       &local_err);
     if (local_err) {
@@ -353,6 +360,10 @@ out:
         error_report_err(local_err);
     }
 
+    qemu_mutex_lock_iothread();
+    colo_release_ram_cache();
+    qemu_mutex_unlock_iothread();
+
     if (mis->to_src_file) {
         qemu_fclose(mis->to_src_file);
     }
diff --git a/migration/ram.c b/migration/ram.c
index ae9a656..327e872 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -227,6 +227,7 @@ static RAMBlock *last_sent_block;
 static ram_addr_t last_offset;
 static QemuMutex migration_bitmap_mutex;
 static uint64_t migration_dirty_pages;
+static bool ram_cache_enable;
 static uint32_t last_version;
 static bool ram_bulk_stage;
 
@@ -2192,6 +2193,20 @@ static inline void *host_from_ram_block_offset(RAMBlock *block,
     return block->host + offset;
 }
 
+static inline void *colo_cache_from_block_offset(RAMBlock *block,
+                                                 ram_addr_t offset)
+{
+    if (!offset_in_ramblock(block, offset)) {
+        return NULL;
+    }
+    if (!block->colo_cache) {
+        error_report("%s: colo_cache is NULL in block :%s",
+                     __func__, block->idstr);
+        return NULL;
+    }
+    return block->colo_cache + offset;
+}
+
 /*
  * If a page (or a whole RDMA chunk) has been
  * determined to be zero, then zap it.
@@ -2468,7 +2483,12 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
                      RAM_SAVE_FLAG_COMPRESS_PAGE | RAM_SAVE_FLAG_XBZRLE)) {
             RAMBlock *block = ram_block_from_stream(f, flags);
 
-            host = host_from_ram_block_offset(block, addr);
+            /* After going into COLO, we should load the Page into colo_cache */
+            if (ram_cache_enable) {
+                host = colo_cache_from_block_offset(block, addr);
+            } else {
+                host = host_from_ram_block_offset(block, addr);
+            }
             if (!host) {
                 error_report("Illegal RAM offset " RAM_ADDR_FMT, addr);
                 ret = -EINVAL;
@@ -2563,6 +2583,57 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
     return ret;
 }
 
+/*
+ * colo cache: this is for secondary VM, we cache the whole
+ * memory of the secondary VM, it will be called after first migration.
+ */
+int colo_init_ram_cache(void)
+{
+    RAMBlock *block;
+
+    rcu_read_lock();
+    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+        block->colo_cache = qemu_anon_ram_alloc(block->used_length, NULL);
+        if (!block->colo_cache) {
+            error_report("%s: Can't alloc memory for COLO cache of block %s,"
+                         "size 0x" RAM_ADDR_FMT, __func__, block->idstr,
+                         block->used_length);
+            goto out_locked;
+        }
+        memcpy(block->colo_cache, block->host, block->used_length);
+    }
+    rcu_read_unlock();
+    ram_cache_enable = true;
+    return 0;
+
+out_locked:
+    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+        if (block->colo_cache) {
+            qemu_anon_ram_free(block->colo_cache, block->used_length);
+            block->colo_cache = NULL;
+        }
+    }
+
+    rcu_read_unlock();
+    return -errno;
+}
+
+void colo_release_ram_cache(void)
+{
+    RAMBlock *block;
+
+    ram_cache_enable = false;
+
+    rcu_read_lock();
+    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+        if (block->colo_cache) {
+            qemu_anon_ram_free(block->colo_cache, block->used_length);
+            block->colo_cache = NULL;
+        }
+    }
+    rcu_read_unlock();
+}
+
 static SaveVMHandlers savevm_ram_handlers = {
     .save_live_setup = ram_save_setup,
     .save_live_iterate = ram_save_iterate,