From patchwork Thu Sep 26 09:46:37 2019
X-Patchwork-Submitter: "Xia, Hongyan"
X-Patchwork-Id: 11162209
Date: Thu, 26 Sep 2019 10:46:37 +0100
X-Mailer: git-send-email 2.17.1
Subject: [Xen-devel] [RFC PATCH 74/84] x86/pv: refactor how building dom0 in PV handles domheap mappings.
Cc: Andrew Cooper, Roger Pau Monné, Wei Liu, Jan Beulich, Hongyan Xia
From: Hongyan Xia

Building a PV dom0 allocates pages from the domheap but then uses them as
if they came from the xenheap, i.e. assuming they are always mapped. This
is clearly wrong. Fix this by explicitly mapping and unmapping the
page-table pages.

Signed-off-by: Hongyan Xia
---
 xen/arch/x86/pv/dom0_build.c | 40 ++++++++++++++++++++++++++----------
 xen/include/asm-x86/mm.h     |  1 +
 2 files changed, 30 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index 15b3ca2191..0ec30988b8 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -623,7 +623,10 @@ int __init dom0_construct_pv(struct domain *d,
     if ( !is_pv_32bit_domain(d) )
     {
         maddr_to_page(mpt_alloc)->u.inuse.type_info = PGT_l4_page_table;
-        l4start = l4tab = __va(mpt_alloc); mpt_alloc += PAGE_SIZE;
+        l4start = l4tab = __va(mpt_alloc);
+        map_pages_to_xen((unsigned long)l4start, maddr_to_mfn(mpt_alloc), 1,
+                         PAGE_HYPERVISOR);
+        mpt_alloc += PAGE_SIZE;
         clear_page(l4tab);
         init_xen_l4_slots(l4tab, _mfn(virt_to_mfn(l4start)),
                           d, INVALID_MFN, true);
@@ -633,9 +636,12 @@ int __init dom0_construct_pv(struct domain *d,
     {
         /* Monitor table already created by switch_compat(). */
         l4start = l4tab = __va(pagetable_get_paddr(v->arch.guest_table));
+        map_pages_to_xen((unsigned long)l4start,
+            pagetable_get_mfn(v->arch.guest_table), 1, PAGE_HYPERVISOR);
        /* See public/xen.h on why the following is needed. */
         maddr_to_page(mpt_alloc)->u.inuse.type_info = PGT_l3_page_table;
-        l3start = __va(mpt_alloc); mpt_alloc += PAGE_SIZE;
+        l3start = map_xen_pagetable(maddr_to_mfn(mpt_alloc));
+        mpt_alloc += PAGE_SIZE;
     }

     l4tab += l4_table_offset(v_start);
@@ -645,14 +651,18 @@ int __init dom0_construct_pv(struct domain *d,
         if ( !((unsigned long)l1tab & (PAGE_SIZE-1)) )
         {
             maddr_to_page(mpt_alloc)->u.inuse.type_info = PGT_l1_page_table;
-            l1start = l1tab = __va(mpt_alloc); mpt_alloc += PAGE_SIZE;
+            UNMAP_XEN_PAGETABLE(l1start);
+            l1start = l1tab = map_xen_pagetable(maddr_to_mfn(mpt_alloc));
+            mpt_alloc += PAGE_SIZE;
             clear_page(l1tab);
             if ( count == 0 )
                 l1tab += l1_table_offset(v_start);
             if ( !((unsigned long)l2tab & (PAGE_SIZE-1)) )
             {
                 maddr_to_page(mpt_alloc)->u.inuse.type_info = PGT_l2_page_table;
-                l2start = l2tab = __va(mpt_alloc); mpt_alloc += PAGE_SIZE;
+                UNMAP_XEN_PAGETABLE(l2start);
+                l2start = l2tab = map_xen_pagetable(maddr_to_mfn(mpt_alloc));
+                mpt_alloc += PAGE_SIZE;
                 clear_page(l2tab);
                 if ( count == 0 )
                     l2tab += l2_table_offset(v_start);
@@ -662,19 +672,21 @@ int __init dom0_construct_pv(struct domain *d,
                 {
                     maddr_to_page(mpt_alloc)->u.inuse.type_info =
                         PGT_l3_page_table;
-                    l3start = __va(mpt_alloc); mpt_alloc += PAGE_SIZE;
+                    UNMAP_XEN_PAGETABLE(l3start);
+                    l3start = map_xen_pagetable(maddr_to_mfn(mpt_alloc));
+                    mpt_alloc += PAGE_SIZE;
                 }
                 l3tab = l3start;
                 clear_page(l3tab);
                 if ( count == 0 )
                     l3tab += l3_table_offset(v_start);
-                *l4tab = l4e_from_paddr(__pa(l3start), L4_PROT);
+                *l4tab = l4e_from_paddr(virt_to_maddr_walk(l3start), L4_PROT);
                 l4tab++;
             }
-            *l3tab = l3e_from_paddr(__pa(l2start), L3_PROT);
+            *l3tab = l3e_from_paddr(virt_to_maddr_walk(l2start), L3_PROT);
             l3tab++;
         }
-        *l2tab = l2e_from_paddr(__pa(l1start), L2_PROT);
+        *l2tab = l2e_from_paddr(virt_to_maddr_walk(l1start), L2_PROT);
         l2tab++;
     }
     if ( count < initrd_pfn || count >= initrd_pfn + PFN_UP(initrd_len) )
@@ -701,9 +713,11 @@ int __init dom0_construct_pv(struct domain *d,
         if ( !l3e_get_intpte(*l3tab) )
         {
             maddr_to_page(mpt_alloc)->u.inuse.type_info = PGT_l2_page_table;
-            l2tab = __va(mpt_alloc); mpt_alloc += PAGE_SIZE;
-            clear_page(l2tab);
-            *l3tab = l3e_from_paddr(__pa(l2tab), L3_PROT);
+            UNMAP_XEN_PAGETABLE(l2start);
+            l2start = map_xen_pagetable(maddr_to_mfn(mpt_alloc));
+            mpt_alloc += PAGE_SIZE;
+            clear_page(l2start);
+            *l3tab = l3e_from_paddr(virt_to_maddr_walk(l2start), L3_PROT);
         }
         if ( i == 3 )
             l3e_get_page(*l3tab)->u.inuse.type_info |= PGT_pae_xen_l2;
@@ -714,6 +728,10 @@ int __init dom0_construct_pv(struct domain *d,
         UNMAP_XEN_PAGETABLE(l2t);
     }

+    UNMAP_XEN_PAGETABLE(l1start);
+    UNMAP_XEN_PAGETABLE(l2start);
+    UNMAP_XEN_PAGETABLE(l3start);
+
     /* Pages that are part of page tables must be read only. */
     mark_pv_pt_pages_rdonly(d, l4start, vpt_start, nr_pt_pages);

diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 76ba56bdc3..e5819cbfdf 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -645,6 +645,7 @@
 void free_xen_pagetable(mfn_t mfn);
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
 unsigned long virt_to_mfn_walk(void *va);
 struct page_info *virt_to_page_walk(void *va);
+#define virt_to_maddr_walk(va) mfn_to_maddr(_mfn(virt_to_mfn_walk(va)))

 DECLARE_PER_CPU(mfn_t, root_pgt_mfn);