From patchwork Tue Mar 14 16:35:10 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Igor Druzhinin
X-Patchwork-Id: 9623853
From: Igor Druzhinin
To: ,
Date: Tue, 14 Mar 2017 16:35:10 +0000
Message-ID: <1489509310-24166-1-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: git-send-email 2.7.4
Cc: Igor Druzhinin , crosthwaite.peter@gmail.com, qemu-devel@nongnu.org,
 paul.durrant@citrix.com, pbonzini@redhat.com,
 xen-devel@lists.xenproject.org, rth@twiddle.net
Subject: [Xen-devel] [PATCH v4] xen: don't save/restore the physmap on VM save/restore

Saving/restoring the physmap to/from xenstore was introduced to QEMU
mainly to work around the VRAM region restore issue. The sequence of
restore operations implies that we should know the effective guest VRAM
address *before* the VRAM region itself is restored (which happens
later). Unfortunately, in a Xen environment the VRAM actually belongs to
the guest - not to QEMU itself - which means the position of this region
is unknown beforehand and it cannot be mapped into the QEMU address
space immediately. Previously, having the toolstack recreate the
xenstore keys that hold the physmap made this information available at
the right moment, ready to be consumed by QEMU to map the region
properly.

The extraneous complexity of having those keys transferred by the
toolstack, and the redundancy this implies, prompted us to propose a
solution which doesn't require any extra data in xenstore. The idea is
to defer the VRAM region mapping until the point where we actually know
the effective address and are able to map it. To that end, we initially
just skip the mapping request for the framebuffer if we are unable to
map it right away. Then, after the memory region restore phase, we
perform the mapping again, this time successfully, and update the VRAM
region metadata accordingly.

Signed-off-by: Igor Druzhinin
---
v4:
* Use VGA post_load handler for vram_ptr update

v3:
* Modify qemu_ram_ptr_length similarly to qemu_map_ram_ptr
* Add a comment explaining the qemu_map_ram_ptr and qemu_ram_ptr_length
  semantic change for Xen
* Dropped some redundant changes

v2:
* Fix some building and coding style issues
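[Reviewer note, not part of the commit message: below is a minimal
sketch of the device-side pattern this change relies on, using
hypothetical names (FooState, foo_post_load, s->ram_ptr, s->ram); the
real instance is the vga_common_post_load() hunk in hw/display/vga.c
further down. The point is that under Xen the RAM pointer fetched at
device init time may now be NULL, and it is simply re-fetched in
post_load once the RAM block has been restored and can be mapped.]

/* Illustrative sketch only -- FooState and foo_post_load() are made up
 * for this note and do not exist in the tree.
 */
static int foo_post_load(void *opaque, int version_id)
{
    FooState *s = opaque;

    if (xen_enabled() && !s->ram_ptr) {
        /* The pointer obtained at init time may be NULL under Xen
         * because the guest RAM could not be mapped yet; by post_load
         * time the RAM block contents have been restored, so the lookup
         * can be retried.
         */
        s->ram_ptr = memory_region_get_ram_ptr(&s->ram);
    }
    return 0;
}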
---
 exec.c           |  16 +++++++++
 hw/display/vga.c |   5 +++
 xen-hvm.c        | 104 ++++++++++---------------------------------------
 3 files changed, 40 insertions(+), 85 deletions(-)

diff --git a/exec.c b/exec.c
index aabb035..a1ac8cd 100644
--- a/exec.c
+++ b/exec.c
@@ -2008,6 +2008,14 @@ void *qemu_map_ram_ptr(RAMBlock *ram_block, ram_addr_t addr)
         }
 
         block->host = xen_map_cache(block->offset, block->max_length, 1);
+        if (block->host == NULL) {
+            /* In case we cannot establish the mapping right away we might
+             * still be able to do it later e.g. on a later stage of restore.
+             * We don't touch the block and return NULL here to indicate
+             * that intention.
+             */
+            return NULL;
+        }
     }
     return ramblock_ptr(block, addr);
 }
@@ -2041,6 +2049,14 @@ static void *qemu_ram_ptr_length(RAMBlock *ram_block, ram_addr_t addr,
         }
 
         block->host = xen_map_cache(block->offset, block->max_length, 1);
+        if (block->host == NULL) {
+            /* In case we cannot establish the mapping right away we might
+             * still be able to do it later e.g. on a later stage of restore.
+             * We don't touch the block and return NULL here to indicate
+             * that intention.
+             */
+            return NULL;
+        }
     }
 
     return ramblock_ptr(block, addr);
diff --git a/hw/display/vga.c b/hw/display/vga.c
index 69c3e1d..f8aebe3 100644
--- a/hw/display/vga.c
+++ b/hw/display/vga.c
@@ -2035,6 +2035,11 @@ static int vga_common_post_load(void *opaque, int version_id)
 {
     VGACommonState *s = opaque;
 
+    if (xen_enabled() && !s->vram_ptr) {
+        /* update VRAM region pointer in case we've failed
+         * the last time during init phase */
+        s->vram_ptr = memory_region_get_ram_ptr(&s->vram);
+    }
     /* force refresh */
     s->graphic_mode = -1;
     vbe_update_vgaregs(s);
diff --git a/xen-hvm.c b/xen-hvm.c
index 5043beb..8bedd9b 100644
--- a/xen-hvm.c
+++ b/xen-hvm.c
@@ -317,7 +317,6 @@ static int xen_add_to_physmap(XenIOState *state,
     XenPhysmap *physmap = NULL;
     hwaddr pfn, start_gpfn;
     hwaddr phys_offset = memory_region_get_ram_addr(mr);
-    char path[80], value[17];
     const char *mr_name;
 
     if (get_physmapping(state, start_addr, size)) {
@@ -340,6 +339,22 @@ go_physmap:
     DPRINTF("mapping vram to %"HWADDR_PRIx" - %"HWADDR_PRIx"\n",
             start_addr, start_addr + size);
 
+    mr_name = memory_region_name(mr);
+
+    physmap = g_malloc(sizeof(XenPhysmap));
+
+    physmap->start_addr = start_addr;
+    physmap->size = size;
+    physmap->name = mr_name;
+    physmap->phys_offset = phys_offset;
+
+    QLIST_INSERT_HEAD(&state->physmap, physmap, list);
+
+    if (runstate_check(RUN_STATE_INMIGRATE)) {
+        /* If we are migrating the region has been already mapped */
+        return 0;
+    }
+
     pfn = phys_offset >> TARGET_PAGE_BITS;
     start_gpfn = start_addr >> TARGET_PAGE_BITS;
     for (i = 0; i < size >> TARGET_PAGE_BITS; i++) {
@@ -350,49 +365,17 @@ go_physmap:
         if (rc) {
             DPRINTF("add_to_physmap MFN %"PRI_xen_pfn" to PFN %"
                     PRI_xen_pfn" failed: %d (errno: %d)\n",
                     idx, gpfn, rc, errno);
+
+            QLIST_REMOVE(physmap, list);
+            g_free(physmap);
             return -rc;
         }
     }
 
-    mr_name = memory_region_name(mr);
-
-    physmap = g_malloc(sizeof (XenPhysmap));
-
-    physmap->start_addr = start_addr;
-    physmap->size = size;
-    physmap->name = mr_name;
-    physmap->phys_offset = phys_offset;
-
-    QLIST_INSERT_HEAD(&state->physmap, physmap, list);
-
     xc_domain_pin_memory_cacheattr(xen_xc, xen_domid,
                                    start_addr >> TARGET_PAGE_BITS,
                                    (start_addr + size - 1) >> TARGET_PAGE_BITS,
                                    XEN_DOMCTL_MEM_CACHEATTR_WB);
-
-    snprintf(path, sizeof(path),
-             "/local/domain/0/device-model/%d/physmap/%"PRIx64"/start_addr",
-             xen_domid, (uint64_t)phys_offset);
-    snprintf(value, sizeof(value), "%"PRIx64, (uint64_t)start_addr);
-    if (!xs_write(state->xenstore, 0, path, value, strlen(value))) {
-        return -1;
-    }
-    snprintf(path, sizeof(path),
-             "/local/domain/0/device-model/%d/physmap/%"PRIx64"/size",
-             xen_domid, (uint64_t)phys_offset);
-    snprintf(value, sizeof(value), "%"PRIx64, (uint64_t)size);
-    if (!xs_write(state->xenstore, 0, path, value, strlen(value))) {
-        return -1;
-    }
-    if (mr_name) {
-        snprintf(path, sizeof(path),
-                 "/local/domain/0/device-model/%d/physmap/%"PRIx64"/name",
-                 xen_domid, (uint64_t)phys_offset);
-        if (!xs_write(state->xenstore, 0, path, mr_name, strlen(mr_name))) {
-            return -1;
-        }
-    }
-
     return 0;
 }
@@ -1152,54 +1135,6 @@ static void xen_exit_notifier(Notifier *n, void *data)
     xs_daemon_close(state->xenstore);
 }
 
-static void xen_read_physmap(XenIOState *state)
-{
-    XenPhysmap *physmap = NULL;
-    unsigned int len, num, i;
-    char path[80], *value = NULL;
-    char **entries = NULL;
-
-    snprintf(path, sizeof(path),
-             "/local/domain/0/device-model/%d/physmap", xen_domid);
-    entries = xs_directory(state->xenstore, 0, path, &num);
-    if (entries == NULL)
-        return;
-
-    for (i = 0; i < num; i++) {
-        physmap = g_malloc(sizeof (XenPhysmap));
-        physmap->phys_offset = strtoull(entries[i], NULL, 16);
-        snprintf(path, sizeof(path),
-                 "/local/domain/0/device-model/%d/physmap/%s/start_addr",
-                 xen_domid, entries[i]);
-        value = xs_read(state->xenstore, 0, path, &len);
-        if (value == NULL) {
-            g_free(physmap);
-            continue;
-        }
-        physmap->start_addr = strtoull(value, NULL, 16);
-        free(value);
-
-        snprintf(path, sizeof(path),
-                 "/local/domain/0/device-model/%d/physmap/%s/size",
-                 xen_domid, entries[i]);
-        value = xs_read(state->xenstore, 0, path, &len);
-        if (value == NULL) {
-            g_free(physmap);
-            continue;
-        }
-        physmap->size = strtoull(value, NULL, 16);
-        free(value);
-
-        snprintf(path, sizeof(path),
-                 "/local/domain/0/device-model/%d/physmap/%s/name",
-                 xen_domid, entries[i]);
-        physmap->name = xs_read(state->xenstore, 0, path, &len);
-
-        QLIST_INSERT_HEAD(&state->physmap, physmap, list);
-    }
-    free(entries);
-}
-
 static void xen_wakeup_notifier(Notifier *notifier, void *data)
 {
     xc_set_hvm_param(xen_xc, xen_domid, HVM_PARAM_ACPI_S_STATE, 0);
@@ -1339,7 +1274,6 @@ void xen_hvm_init(PCMachineState *pcms, MemoryRegion **ram_memory)
         goto err;
     }
     xen_be_register_common();
-    xen_read_physmap(state);
 
     /* Disable ACPI build because Xen handles it */
     pcms->acpi_build_enabled = false;