From patchwork Wed Feb 22 14:24:57 2017
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Wed, 22 Feb 2017 14:24:57 +0000
Message-ID: <20170222142459.28199-6-roger.pau@citrix.com>
In-Reply-To: <20170222142459.28199-1-roger.pau@citrix.com>
References: <20170222142459.28199-1-roger.pau@citrix.com>
Cc: Andrew Cooper, boris.ostrovsky@oracle.com, Roger Pau Monne, Jan Beulich
Subject: [Xen-devel] [PATCH v7 5/7] xen/x86: parse Dom0 kernel for PVHv2
Introduce a helper to parse the Dom0 kernel. A new helper is also
introduced to libelf that is used to store the destination vcpu of the
domain. This parameter is needed when loading the kernel on an HVM
domain (PVHv2), since hvm_copy_to_guest_phys requires passing the
destination vcpu.

While there, also fix image_base and image_start to be of type
"void *", and do the necessary fixup of related functions.

Signed-off-by: Roger Pau Monné
Reviewed-by: Jan Beulich
---
Cc: Jan Beulich
Cc: Andrew Cooper
---
Changes since v5:
 - s/hvm_copy_to_guest_phys_vcpu/hvm_copy_to_guest_phys/.
 - Use void * for image_base and image_start, and make the necessary
   changes.
 - Introduce elf_set_vcpu in order to store the destination vcpu in
   elf_binary, and use it in elf_load_image. This avoids having to
   override current.
 - Style fixes.
 - Round up the position of the modlist/start_info to an aligned
   address depending on the kernel bitness.

Changes since v4:
 - s/hvm/pvh/.
 - Use hvm_copy_to_guest_phys_vcpu.

Changes since v3:
 - Change one error message.
 - Indent "out" label by one space.
 - Introduce hvm_copy_to_phys and slightly simplify the code in
   hvm_load_kernel.

Changes since v2:
 - Remove debug messages.
 - Don't hardcode the number of modules to 1.
---
 xen/arch/x86/domain_build.c | 133 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 133 insertions(+)

diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
index d2a1105..a47b8d2 100644
--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -39,6 +39,7 @@
 #include
 #include
+#include

 static long __initdata dom0_nrpages;
 static long __initdata dom0_min_nrpages;
@@ -2022,12 +2023,136 @@ static int __init pvh_setup_p2m(struct domain *d)
 #undef MB1_PAGES
 }

+static int __init pvh_load_kernel(struct domain *d, const module_t *image,
+                                  unsigned long image_headroom,
+                                  module_t *initrd, void *image_base,
+                                  char *cmdline, paddr_t *entry,
+                                  paddr_t *start_info_addr)
+{
+    void *image_start = image_base + image_headroom;
+    unsigned long image_len = image->mod_end;
+    struct elf_binary elf;
+    struct elf_dom_parms parms;
+    paddr_t last_addr;
+    struct hvm_start_info start_info = { 0 };
+    struct hvm_modlist_entry mod = { 0 };
+    struct vcpu *v = d->vcpu[0];
+    int rc;
+
+    if ( (rc = bzimage_parse(image_base, &image_start, &image_len)) != 0 )
+    {
+        printk("Error trying to detect bz compressed kernel\n");
+        return rc;
+    }
+
+    if ( (rc = elf_init(&elf, image_start, image_len)) != 0 )
+    {
+        printk("Unable to init ELF\n");
+        return rc;
+    }
+#ifdef VERBOSE
+    elf_set_verbose(&elf);
+#endif
+    elf_parse_binary(&elf);
+    if ( (rc = elf_xen_parse(&elf, &parms)) != 0 )
+    {
+        printk("Unable to parse kernel for ELFNOTES\n");
+        return rc;
+    }
+
+    if ( parms.phys_entry == UNSET_ADDR32 )
+    {
+        printk("Unable to find XEN_ELFNOTE_PHYS32_ENTRY address\n");
+        return -EINVAL;
+    }
+
+    printk("OS: %s version: %s loader: %s bitness: %s\n", parms.guest_os,
+           parms.guest_ver, parms.loader,
+           elf_64bit(&elf) ? "64-bit" : "32-bit");
+
+    /* Copy the OS image and free temporary buffer. */
+    elf.dest_base = (void *)(parms.virt_kstart - parms.virt_base);
+    elf.dest_size = parms.virt_kend - parms.virt_kstart;
+
+    elf_set_vcpu(&elf, v);
+    rc = elf_load_binary(&elf);
+    if ( rc < 0 )
+    {
+        printk("Failed to load kernel: %d\n", rc);
+        printk("Xen dom0 kernel broken ELF: %s\n", elf_check_broken(&elf));
+        return rc;
+    }
+
+    last_addr = ROUNDUP(parms.virt_kend - parms.virt_base, PAGE_SIZE);
+
+    if ( initrd != NULL )
+    {
+        rc = hvm_copy_to_guest_phys(last_addr, mfn_to_virt(initrd->mod_start),
+                                    initrd->mod_end, v);
+        if ( rc )
+        {
+            printk("Unable to copy initrd to guest\n");
+            return rc;
+        }
+
+        mod.paddr = last_addr;
+        mod.size = initrd->mod_end;
+        last_addr += ROUNDUP(initrd->mod_end, PAGE_SIZE);
+    }
+
+    /* Free temporary buffers. */
+    discard_initial_images();
+
+    if ( cmdline != NULL )
+    {
+        rc = hvm_copy_to_guest_phys(last_addr, cmdline, strlen(cmdline) + 1, v);
+        if ( rc )
+        {
+            printk("Unable to copy guest command line\n");
+            return rc;
+        }
+        start_info.cmdline_paddr = last_addr;
+        /*
+         * Round up to 32/64 bits (depending on the guest kernel bitness) so
+         * the modlist/start_info is aligned.
+         */
+        last_addr += ROUNDUP(strlen(cmdline) + 1, elf_64bit(&elf) ? 8 : 4);
+    }
+    if ( initrd != NULL )
+    {
+        rc = hvm_copy_to_guest_phys(last_addr, &mod, sizeof(mod), v);
+        if ( rc )
+        {
+            printk("Unable to copy guest modules\n");
+            return rc;
+        }
+        start_info.modlist_paddr = last_addr;
+        start_info.nr_modules = 1;
+        last_addr += sizeof(mod);
+    }
+
+    start_info.magic = XEN_HVM_START_MAGIC_VALUE;
+    start_info.flags = SIF_PRIVILEGED | SIF_INITDOMAIN;
+    rc = hvm_copy_to_guest_phys(last_addr, &start_info, sizeof(start_info), v);
+    if ( rc )
+    {
+        printk("Unable to copy start info to guest\n");
+        return rc;
+    }
+
+    *entry = parms.phys_entry;
+    *start_info_addr = last_addr;
+
+    return 0;
+}
+
 static int __init construct_dom0_pvh(struct domain *d, const module_t *image,
                                      unsigned long image_headroom,
                                      module_t *initrd,
                                      void *(*bootstrap_map)(const module_t *),
                                      char *cmdline)
 {
+    paddr_t entry, start_info;
     int rc;

     printk("** Building a PVH Dom0 **\n");
@@ -2041,6 +2166,14 @@ static int __init construct_dom0_pvh(struct domain *d, const module_t *image,
         return rc;
     }

+    rc = pvh_load_kernel(d, image, image_headroom, initrd, bootstrap_map(image),
+                         cmdline, &entry, &start_info);
+    if ( rc )
+    {
+        printk("Failed to load Dom0 kernel\n");
+        return rc;
+    }
+
     panic("Building a PVHv2 Dom0 is not yet supported.");
     return 0;
 }