From patchwork Sat May 13 01:17:19 2023
X-Patchwork-Submitter: Stefano Stabellini
X-Patchwork-Id: 13240062
List-Id: Xen developer discussion
From: Stefano Stabellini
To: roger.pau@citrix.com, andrew.cooper3@citrix.com, jbeulich@suse.com
Cc: sstabellini@kernel.org, xen-devel@lists.xenproject.org,
    Xenia.Ragiadakou@amd.com, Stefano Stabellini
Subject: [PATCH 1/2] xen/x86/pvh: use preset XSDT header for XSDT generation
Date: Fri, 12 May 2023 18:17:19 -0700
Message-Id: <20230513011720.3978354-1-sstabellini@kernel.org>

Xen always generates an XSDT table, even if the firmware provided an
RSDT table. Instead of copying the XSDT header from the firmware table
(which might be missing), generate the XSDT header from a preset.
Signed-off-by: Stefano Stabellini
---
 xen/arch/x86/hvm/dom0_build.c | 32 +++++++++----------------------
 1 file changed, 9 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 307edc6a8c..5fde769863 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -963,13 +963,18 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
                                       paddr_t *addr)
 {
     struct acpi_table_xsdt *xsdt;
-    struct acpi_table_header *table;
-    struct acpi_table_rsdp *rsdp;
     const struct acpi_table_desc *tables = acpi_gbl_root_table_list.tables;
     unsigned long size = sizeof(*xsdt);
     unsigned int i, j, num_tables = 0;
-    paddr_t xsdt_paddr;
     int rc;
+    struct acpi_table_header header = {
+        .signature = "XSDT",
+        .length = sizeof(struct acpi_table_header),
+        .revision = 0x1,
+        .oem_id = "Xen",
+        .oem_table_id = "HVM",
+        .oem_revision = 0,
+    };
 
     /*
      * Restore original DMAR table signature, we are going to filter it from
@@ -1001,26 +1006,7 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
         goto out;
     }
 
-    /* Copy the native XSDT table header. */
-    rsdp = acpi_os_map_memory(acpi_os_get_root_pointer(), sizeof(*rsdp));
-    if ( !rsdp )
-    {
-        printk("Unable to map RSDP\n");
-        rc = -EINVAL;
-        goto out;
-    }
-    xsdt_paddr = rsdp->xsdt_physical_address;
-    acpi_os_unmap_memory(rsdp, sizeof(*rsdp));
-    table = acpi_os_map_memory(xsdt_paddr, sizeof(*table));
-    if ( !table )
-    {
-        printk("Unable to map XSDT\n");
-        rc = -EINVAL;
-        goto out;
-    }
-    xsdt->header = *table;
-    acpi_os_unmap_memory(table, sizeof(*table));
-
+    xsdt->header = header;
     /* Add the custom MADT. */
     xsdt->table_offset_entry[0] = madt_addr;

From patchwork Sat May 13 01:17:20 2023
X-Patchwork-Submitter: Stefano Stabellini
X-Patchwork-Id: 13240061
From: Stefano Stabellini
To: roger.pau@citrix.com, andrew.cooper3@citrix.com, jbeulich@suse.com
Cc: sstabellini@kernel.org, xen-devel@lists.xenproject.org,
    Xenia.Ragiadakou@amd.com, Stefano Stabellini
Subject: [PATCH 2/2] xen/x86/pvh: copy ACPI tables to Dom0 instead of mapping
Date: Fri, 12 May 2023 18:17:20 -0700
Message-Id: <20230513011720.3978354-2-sstabellini@kernel.org>

Mapping the ACPI tables 1:1 into Dom0 PVH leads to memory corruption of
the tables in the guest. Instead, copy the tables to Dom0. This is a
workaround.

Signed-off-by: Stefano Stabellini
---
As mentioned in the cover letter, this is an RFC workaround, as I don't
know the cause of the underlying problem. I do know that this patch
resolves what would otherwise be a hang at boot when Dom0 PVH attempts
to parse the ACPI tables.
---
 xen/arch/x86/hvm/dom0_build.c | 107 +++++++++-------------------------
 1 file changed, 27 insertions(+), 80 deletions(-)

diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 5fde769863..a6037fc6ed 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -73,32 +73,6 @@ static void __init print_order_stats(const struct domain *d)
         printk("order %2u allocations: %u\n", i, order_stats[i]);
 }
 
-static int __init modify_identity_mmio(struct domain *d, unsigned long pfn,
-                                       unsigned long nr_pages, const bool map)
-{
-    int rc;
-
-    for ( ; ; )
-    {
-        rc = map ? map_mmio_regions(d, _gfn(pfn), nr_pages, _mfn(pfn))
-                 : unmap_mmio_regions(d, _gfn(pfn), nr_pages, _mfn(pfn));
-        if ( rc == 0 )
-            break;
-        if ( rc < 0 )
-        {
-            printk(XENLOG_WARNING
-                   "Failed to identity %smap [%#lx,%#lx) for d%d: %d\n",
-                   map ? "" : "un", pfn, pfn + nr_pages, d->domain_id, rc);
-            break;
-        }
-        nr_pages -= rc;
-        pfn += rc;
-        process_pending_softirqs();
-    }
-
-    return rc;
-}
-
 /* Populate a HVM memory range using the biggest possible order. */
 static int __init pvh_populate_memory_range(struct domain *d,
                                             unsigned long start,
@@ -967,6 +941,8 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
     unsigned long size = sizeof(*xsdt);
     unsigned int i, j, num_tables = 0;
     int rc;
+    struct acpi_table_fadt fadt;
+    unsigned long fadt_addr = 0, dsdt_addr = 0, facs_addr = 0, fadt_size = 0;
     struct acpi_table_header header = {
         .signature = "XSDT",
         .length = sizeof(struct acpi_table_header),
@@ -1013,10 +989,33 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
     /* Copy the addresses of the rest of the allowed tables. */
     for( i = 0, j = 1; i < acpi_gbl_root_table_list.count; i++ )
     {
+        void *table;
+
+        pvh_steal_ram(d, tables[i].length, 0, GB(4), addr);
+        table = acpi_os_map_memory(tables[i].address, tables[i].length);
+        hvm_copy_to_guest_phys(*addr, table, tables[i].length, d->vcpu[0]);
+        pvh_add_mem_range(d, *addr, *addr + tables[i].length, E820_ACPI);
+
+        if ( !strncmp(tables[i].signature.ascii, ACPI_SIG_FADT, ACPI_NAME_SIZE) )
+        {
+            memcpy(&fadt, table, tables[i].length);
+            fadt_addr = *addr;
+            fadt_size = tables[i].length;
+        }
+        else if ( !strncmp(tables[i].signature.ascii, ACPI_SIG_DSDT, ACPI_NAME_SIZE) )
+            dsdt_addr = *addr;
+        else if ( !strncmp(tables[i].signature.ascii, ACPI_SIG_FACS, ACPI_NAME_SIZE) )
+            facs_addr = *addr;
+
         if ( pvh_acpi_xsdt_table_allowed(tables[i].signature.ascii,
-                                         tables[i].address, tables[i].length) )
-            xsdt->table_offset_entry[j++] = tables[i].address;
+                                         tables[i].address, tables[i].length) )
+            xsdt->table_offset_entry[j++] = *addr;
+
+        acpi_os_unmap_memory(table, tables[i].length);
     }
+    fadt.dsdt = dsdt_addr;
+    fadt.facs = facs_addr;
+    hvm_copy_to_guest_phys(fadt_addr, &fadt, fadt_size, d->vcpu[0]);
 
     xsdt->header.revision = 1;
     xsdt->header.length = size;
@@ -1055,9 +1054,7 @@ static int __init pvh_setup_acpi(struct domain *d, paddr_t start_info)
 {
-    unsigned long pfn, nr_pages;
     paddr_t madt_paddr, xsdt_paddr, rsdp_paddr;
-    unsigned int i;
     int rc;
     struct acpi_table_rsdp *native_rsdp, rsdp = {
         .signature = ACPI_SIG_RSDP,
@@ -1065,56 +1062,6 @@ static int __init pvh_setup_acpi(struct domain *d, paddr_t start_info)
         .length = sizeof(rsdp),
     };
-
-    /* Scan top-level tables and add their regions to the guest memory map. */
-    for( i = 0; i < acpi_gbl_root_table_list.count; i++ )
-    {
-        const char *sig = acpi_gbl_root_table_list.tables[i].signature.ascii;
-        unsigned long addr = acpi_gbl_root_table_list.tables[i].address;
-        unsigned long size = acpi_gbl_root_table_list.tables[i].length;
-
-        /*
-         * Make sure the original MADT is also mapped, so that Dom0 can
-         * properly access the data returned by _MAT methods in case it's
-         * re-using MADT memory.
-         */
-        if ( strncmp(sig, ACPI_SIG_MADT, ACPI_NAME_SIZE)
-             ? pvh_acpi_table_allowed(sig, addr, size)
-             : !acpi_memory_banned(addr, size) )
-            pvh_add_mem_range(d, addr, addr + size, E820_ACPI);
-    }
-
-    /* Identity map ACPI e820 regions. */
-    for ( i = 0; i < d->arch.nr_e820; i++ )
-    {
-        if ( d->arch.e820[i].type != E820_ACPI &&
-             d->arch.e820[i].type != E820_NVS )
-            continue;
-
-        pfn = PFN_DOWN(d->arch.e820[i].addr);
-        nr_pages = PFN_UP((d->arch.e820[i].addr & ~PAGE_MASK) +
-                          d->arch.e820[i].size);
-
-        /* Memory below 1MB has been dealt with by pvh_populate_p2m(). */
-        if ( pfn < PFN_DOWN(MB(1)) )
-        {
-            if ( pfn + nr_pages <= PFN_DOWN(MB(1)) )
-                continue;
-
-            /* This shouldn't happen, but is easy to deal with. */
-            nr_pages -= PFN_DOWN(MB(1)) - pfn;
-            pfn = PFN_DOWN(MB(1));
-        }
-
-        rc = modify_identity_mmio(d, pfn, nr_pages, true);
-        if ( rc )
-        {
-            printk("Failed to map ACPI region [%#lx, %#lx) into Dom0 memory map\n",
-                   pfn, pfn + nr_pages);
-            return rc;
-        }
-    }
-
     rc = pvh_setup_acpi_madt(d, &madt_paddr);
     if ( rc )
         return rc;