From patchwork Thu Jan 7 12:36:53 2016
X-Patchwork-Submitter: Juergen Gross
X-Patchwork-Id: 7976711
From: Juergen Gross
To: xen-devel@lists.xen.org, Ian.Campbell@citrix.com, ian.jackson@eu.citrix.com,
    stefano.stabellini@eu.citrix.com, wei.liu2@citrix.com, andrew.cooper3@citrix.com
Cc: Juergen Gross
Date: Thu, 7 Jan 2016 13:36:53 +0100
Message-Id: <1452170214-17821-4-git-send-email-jgross@suse.com>
X-Mailer: git-send-email 2.6.2
In-Reply-To: <1452170214-17821-1-git-send-email-jgross@suse.com>
References: <1452170214-17821-1-git-send-email-jgross@suse.com>
Subject: [Xen-devel] [PATCH v4 3/4] libxc: stop migration in case of p2m list structural changes

With support for the virtually mapped linear p2m list during migration it is
now possible to detect structural changes of the p2m list which previously
would have led to a crashing or otherwise misbehaving domU.
A guest supporting the linear p2m list will increment the p2m_generation
counter located in the shared info page before and after each modification of
a mapping related to the p2m list. A change of that counter can be detected by
the tools and reacted upon. As such a change should occur only very rarely
once the domU is up, the simplest reaction is to cancel the migration in that
event.

Signed-off-by: Juergen Gross
Reviewed-by: Andrew Cooper
Reviewed-by: Wei Liu
---
 tools/libxc/xc_sr_common.h       | 12 +++++++++++
 tools/libxc/xc_sr_save.c         |  7 ++++++-
 tools/libxc/xc_sr_save_x86_hvm.c |  7 +++++++
 tools/libxc/xc_sr_save_x86_pv.c  | 45 ++++++++++++++++++++++++++++++++++++++++
 4 files changed, 70 insertions(+), 1 deletion(-)

diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xc_sr_common.h
index 9aecde2..60b43e8 100644
--- a/tools/libxc/xc_sr_common.h
+++ b/tools/libxc/xc_sr_common.h
@@ -83,6 +83,15 @@ struct xc_sr_save_ops
     int (*end_of_checkpoint)(struct xc_sr_context *ctx);
 
     /**
+     * Check state of guest to decide whether it makes sense to continue
+     * migration. This is called in each iteration or checkpoint to check
+     * whether all criteria for the migration are still met. If that's not
+     * the case either migration is cancelled via a bad rc or the situation
+     * is handled, e.g. by sending appropriate records.
+     */
+    int (*check_vm_state)(struct xc_sr_context *ctx);
+
+    /**
      * Clean up the local environment. Will be called exactly once, either
      * after a successful save, or upon encountering an error.
      */
@@ -280,6 +289,9 @@ struct xc_sr_context
             /* Read-only mapping of guests shared info page */
             shared_info_any_t *shinfo;
 
+            /* p2m generation count for verifying validity of local p2m. */
+            uint64_t p2m_generation;
+
             union
             {
                 struct
diff --git a/tools/libxc/xc_sr_save.c b/tools/libxc/xc_sr_save.c
index cefcef5..88d85ef 100644
--- a/tools/libxc/xc_sr_save.c
+++ b/tools/libxc/xc_sr_save.c
@@ -394,7 +394,8 @@ static int send_dirty_pages(struct xc_sr_context *ctx,
         DPRINTF("Bitmap contained more entries than expected...");
 
     xc_report_progress_step(xch, entries, entries);
-    return 0;
+
+    return ctx->save.ops.check_vm_state(ctx);
 }
 
 /*
@@ -751,6 +752,10 @@ static int save(struct xc_sr_context *ctx, uint16_t guest_type)
     if ( rc )
         goto err;
 
+    rc = ctx->save.ops.check_vm_state(ctx);
+    if ( rc )
+        goto err;
+
     if ( ctx->save.live )
         rc = send_domain_memory_live(ctx);
     else if ( ctx->save.checkpointed )
diff --git a/tools/libxc/xc_sr_save_x86_hvm.c b/tools/libxc/xc_sr_save_x86_hvm.c
index f3d6cee..e347b3b 100644
--- a/tools/libxc/xc_sr_save_x86_hvm.c
+++ b/tools/libxc/xc_sr_save_x86_hvm.c
@@ -175,6 +175,12 @@ static int x86_hvm_start_of_checkpoint(struct xc_sr_context *ctx)
     return 0;
 }
 
+static int x86_hvm_check_vm_state(struct xc_sr_context *ctx)
+{
+    /* no-op */
+    return 0;
+}
+
 static int x86_hvm_end_of_checkpoint(struct xc_sr_context *ctx)
 {
     int rc;
@@ -221,6 +227,7 @@ struct xc_sr_save_ops save_ops_x86_hvm =
     .start_of_stream     = x86_hvm_start_of_stream,
     .start_of_checkpoint = x86_hvm_start_of_checkpoint,
     .end_of_checkpoint   = x86_hvm_end_of_checkpoint,
+    .check_vm_state      = x86_hvm_check_vm_state,
     .cleanup             = x86_hvm_cleanup,
 };
 
diff --git a/tools/libxc/xc_sr_save_x86_pv.c b/tools/libxc/xc_sr_save_x86_pv.c
index 5448f32..4deb58f 100644
--- a/tools/libxc/xc_sr_save_x86_pv.c
+++ b/tools/libxc/xc_sr_save_x86_pv.c
@@ -274,6 +274,39 @@ err:
 }
 
 /*
+ * Get p2m_generation count.
+ * Returns an error if the generation count has changed since the last call.
+ */
+static int get_p2m_generation(struct xc_sr_context *ctx)
+{
+    uint64_t p2m_generation;
+    int rc;
+
+    p2m_generation = GET_FIELD(ctx->x86_pv.shinfo, arch.p2m_generation,
+                               ctx->x86_pv.width);
+
+    rc = (p2m_generation == ctx->x86_pv.p2m_generation) ? 0 : -1;
+    ctx->x86_pv.p2m_generation = p2m_generation;
+
+    return rc;
+}
+
+static int x86_pv_check_vm_state_p2m_list(struct xc_sr_context *ctx)
+{
+    xc_interface *xch = ctx->xch;
+    int rc;
+
+    if ( !ctx->save.live )
+        return 0;
+
+    rc = get_p2m_generation(ctx);
+    if ( rc )
+        ERROR("p2m generation count changed. Migration aborted.");
+
+    return rc;
+}
+
+/*
  * Map the guest p2m frames specified via a cr3 value, a virtual address, and
  * the maximum pfn. PTE entries are 64 bits for both, 32 and 64 bit guests as
  * in 32 bit case we support PAE guests only.
@@ -297,6 +330,8 @@ static int map_p2m_list(struct xc_sr_context *ctx, uint64_t p2m_cr3)
         return -1;
     }
 
+    get_p2m_generation(ctx);
+
     p2m_vaddr = GET_FIELD(ctx->x86_pv.shinfo, arch.p2m_vaddr,
                           ctx->x86_pv.width);
     fpp = PAGE_SIZE / ctx->x86_pv.width;
@@ -430,6 +465,7 @@ static int map_p2m(struct xc_sr_context *ctx)
 {
     uint64_t p2m_cr3;
 
+    ctx->x86_pv.p2m_generation = ~0ULL;
     ctx->x86_pv.max_pfn = GET_FIELD(ctx->x86_pv.shinfo, arch.max_pfn,
                                     ctx->x86_pv.width) - 1;
     p2m_cr3 = GET_FIELD(ctx->x86_pv.shinfo, arch.p2m_cr3, ctx->x86_pv.width);
@@ -1069,6 +1105,14 @@ static int x86_pv_end_of_checkpoint(struct xc_sr_context *ctx)
     return 0;
 }
 
+static int x86_pv_check_vm_state(struct xc_sr_context *ctx)
+{
+    if ( ctx->x86_pv.p2m_generation == ~0ULL )
+        return 0;
+
+    return x86_pv_check_vm_state_p2m_list(ctx);
+}
+
 /*
  * save_ops function. Cleanup.
  */
@@ -1096,6 +1140,7 @@ struct xc_sr_save_ops save_ops_x86_pv =
     .start_of_stream     = x86_pv_start_of_stream,
     .start_of_checkpoint = x86_pv_start_of_checkpoint,
     .end_of_checkpoint   = x86_pv_end_of_checkpoint,
+    .check_vm_state      = x86_pv_check_vm_state,
    .cleanup             = x86_pv_cleanup,
 };
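
As an illustration of the protocol the commit message relies on, here is a
small standalone sketch of the two sides of the check: the guest bumps the
generation counter before and after every structural change to the p2m list,
and the saver compares snapshots of the counter, just as get_p2m_generation()
does in the patch above. The struct and function names below are invented for
the example and are not part of libxc or any guest kernel; only the
p2m_generation field and the increment-before-and-after convention come from
this series.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for the arch part of the shared info page (illustrative only). */
struct fake_shared_info {
    _Atomic uint64_t p2m_generation;
};

/* Guest side: bracket every structural p2m list change with two increments. */
static void guest_modify_p2m_list(struct fake_shared_info *si)
{
    atomic_fetch_add(&si->p2m_generation, 1); /* change in progress */
    /* ... remap or extend the linear p2m list here ... */
    atomic_fetch_add(&si->p2m_generation, 1); /* change complete */
}

/* Saver side: same idea as get_p2m_generation() in the patch above. */
static int check_generation(const struct fake_shared_info *si, uint64_t *last)
{
    uint64_t cur = atomic_load(&si->p2m_generation);
    int rc = (cur == *last) ? 0 : -1;

    *last = cur;
    return rc;
}

int main(void)
{
    struct fake_shared_info si = { 0 };
    uint64_t snap = atomic_load(&si.p2m_generation);

    guest_modify_p2m_list(&si);         /* guest changes its p2m list */
    if ( check_generation(&si, &snap) ) /* saver notices and would abort */
        printf("p2m generation count changed. Migration aborted.\n");

    return 0;
}

A real guest would additionally have to order the counter updates against the
page table writes themselves (e.g. with write barriers) so the tools can never
see an unchanged counter across a half-done update; the atomics above merely
stand in for that ordering in this single-threaded sketch.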