From patchwork Mon Sep 30 10:33:31 2019
X-Patchwork-Submitter: "Xia, Hongyan"
X-Patchwork-Id: 11166469
From: Hongyan Xia
Date: Mon, 30 Sep 2019 11:33:31 +0100
X-Mailer: git-send-email 2.17.1
Subject: [Xen-devel] [PATCH v2 39/55] x86: switch root_pgt to mfn_t and use new APIs
Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu

Switch the per-CPU root_pgt to an mfn_t and access it through the new
page table APIs.

This then requires moving the declaration of the root page table mfn
into mm.h and modifying setup_cpu_root_pgt to have a single exit path.

We also need to force map_domain_page to use the direct map when
switching per-domain mappings. This is contrary to our end goal of
removing the direct map, but the override will go away once
map_domain_page is made context-switch safe in another (large) patch
series.

Signed-off-by: Wei Liu
---
 xen/arch/x86/domain.c           | 15 ++++++++++---
 xen/arch/x86/domain_page.c      |  2 +-
 xen/arch/x86/mm.c               |  2 +-
 xen/arch/x86/pv/domain.c        |  2 +-
 xen/arch/x86/smpboot.c          | 40 ++++++++++++++++++++++-----------
 xen/include/asm-x86/mm.h        |  2 ++
 xen/include/asm-x86/processor.h |  2 +-
 7 files changed, 45 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index dbdf6b1bc2..e9bf47efce 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -69,6 +69,7 @@
 #include
 #include
 #include
+#include
 
 DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
@@ -1580,12 +1581,20 @@ void paravirt_ctxt_switch_from(struct vcpu *v)
 
 void paravirt_ctxt_switch_to(struct vcpu *v)
 {
-    root_pgentry_t *root_pgt = this_cpu(root_pgt);
+    mfn_t rpt_mfn = this_cpu(root_pgt_mfn);
 
-    if ( root_pgt )
-        root_pgt[root_table_offset(PERDOMAIN_VIRT_START)] =
+    if ( !mfn_eq(rpt_mfn, INVALID_MFN) )
+    {
+        root_pgentry_t *rpt;
+
+        mapcache_override_current(INVALID_VCPU);
+        rpt = map_xen_pagetable_new(rpt_mfn);
+        rpt[root_table_offset(PERDOMAIN_VIRT_START)] =
             l4e_from_page(v->domain->arch.perdomain_l3_pg,
                           __PAGE_HYPERVISOR_RW);
+        UNMAP_XEN_PAGETABLE_NEW(rpt);
+        mapcache_override_current(NULL);
+    }
 
     if ( unlikely(v->arch.dr7 & DR7_ACTIVE_MASK) )
         activate_debugregs(v);
diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
index 24083e9a86..cfcffd35f3 100644
--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -57,7 +57,7 @@ static inline struct vcpu *mapcache_current_vcpu(void)
     return v;
 }
 
-void __init mapcache_override_current(struct vcpu *v)
+void mapcache_override_current(struct vcpu *v)
 {
     this_cpu(override) = v;
 }
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 8706dc0174..5c1d65d267 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -530,7 +530,7 @@ void write_ptbase(struct vcpu *v)
     if ( is_pv_vcpu(v) && v->domain->arch.pv.xpti )
     {
         cpu_info->root_pgt_changed = true;
-        cpu_info->pv_cr3 = __pa(this_cpu(root_pgt));
+        cpu_info->pv_cr3 = mfn_to_maddr(this_cpu(root_pgt_mfn));
         if ( new_cr4 & X86_CR4_PCIDE )
             cpu_info->pv_cr3 |= get_pcid_bits(v, true);
         switch_cr3_cr4(v->arch.cr3, new_cr4);
diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
index 4b6f48dea2..7e70690f03 100644
--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -360,7 +360,7 @@ static void _toggle_guest_pt(struct vcpu *v)
     if ( d->arch.pv.xpti )
     {
         cpu_info->root_pgt_changed = true;
-        cpu_info->pv_cr3 = __pa(this_cpu(root_pgt)) |
+        cpu_info->pv_cr3 = mfn_to_maddr(this_cpu(root_pgt_mfn)) |
                           (d->arch.pv.pcid ? get_pcid_bits(v, true) : 0);
     }
 
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index c55aaa65a2..ca8fc6d485 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -811,7 +811,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     return rc;
 }
 
-DEFINE_PER_CPU(root_pgentry_t *, root_pgt);
+DEFINE_PER_CPU(mfn_t, root_pgt_mfn);
 
 static root_pgentry_t common_pgt;
 
@@ -819,19 +819,27 @@ extern const char _stextentry[], _etextentry[];
 
 static int setup_cpu_root_pgt(unsigned int cpu)
 {
-    root_pgentry_t *rpt;
+    root_pgentry_t *rpt = NULL;
+    mfn_t rpt_mfn;
     unsigned int off;
     int rc;
 
     if ( !opt_xpti_hwdom && !opt_xpti_domu )
-        return 0;
+    {
+        rc = 0;
+        goto out;
+    }
 
-    rpt = alloc_xen_pagetable();
-    if ( !rpt )
-        return -ENOMEM;
+    rpt_mfn = alloc_xen_pagetable_new();
+    if ( mfn_eq(rpt_mfn, INVALID_MFN) )
+    {
+        rc = -ENOMEM;
+        goto out;
+    }
 
+    rpt = map_xen_pagetable_new(rpt_mfn);
     clear_page(rpt);
-    per_cpu(root_pgt, cpu) = rpt;
+    per_cpu(root_pgt_mfn, cpu) = rpt_mfn;
 
     rpt[root_table_offset(RO_MPT_VIRT_START)] =
         idle_pg_table[root_table_offset(RO_MPT_VIRT_START)];
@@ -848,7 +856,7 @@ static int setup_cpu_root_pgt(unsigned int cpu)
 
         rc = clone_mapping(ptr, rpt);
         if ( rc )
-            return rc;
+            goto out;
 
         common_pgt = rpt[root_table_offset(XEN_VIRT_START)];
     }
@@ -873,19 +881,24 @@ static int setup_cpu_root_pgt(unsigned int cpu)
     if ( !rc )
         rc = clone_mapping((void *)per_cpu(stubs.addr, cpu), rpt);
 
+ out:
+    UNMAP_XEN_PAGETABLE_NEW(rpt);
     return rc;
 }
 
 static void cleanup_cpu_root_pgt(unsigned int cpu)
 {
-    root_pgentry_t *rpt = per_cpu(root_pgt, cpu);
+    mfn_t rpt_mfn = per_cpu(root_pgt_mfn, cpu);
+    root_pgentry_t *rpt;
     unsigned int r;
     unsigned long stub_linear = per_cpu(stubs.addr, cpu);
 
-    if ( !rpt )
+    if ( mfn_eq(rpt_mfn, INVALID_MFN) )
         return;
 
-    per_cpu(root_pgt, cpu) = NULL;
+    per_cpu(root_pgt_mfn, cpu) = INVALID_MFN;
+
+    rpt = map_xen_pagetable_new(rpt_mfn);
 
     for ( r = root_table_offset(DIRECTMAP_VIRT_START);
           r < root_table_offset(HYPERVISOR_VIRT_END); ++r )
@@ -930,7 +943,8 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
         free_xen_pagetable_new(l3t_mfn);
     }
 
-    free_xen_pagetable(rpt);
+    UNMAP_XEN_PAGETABLE_NEW(rpt);
+    free_xen_pagetable_new(rpt_mfn);
 
     /* Also zap the stub mapping for this CPU. */
     if ( stub_linear )
@@ -1134,7 +1148,7 @@ void __init smp_prepare_cpus(void)
 
     rc = setup_cpu_root_pgt(0);
     if ( rc )
        panic("Error %d setting up PV root page table\n", rc);
-    if ( per_cpu(root_pgt, 0) )
+    if ( !mfn_eq(per_cpu(root_pgt_mfn, 0), INVALID_MFN) )
     {
         get_cpu_info()->pv_cr3 = 0;
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 80173eb4c3..12a10b270d 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -646,4 +646,6 @@ void free_xen_pagetable_new(mfn_t mfn);
 
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
 
+DECLARE_PER_CPU(mfn_t, root_pgt_mfn);
+
 #endif /* __ASM_X86_MM_H__ */
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index c6fc1987a1..68d1d82071 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -469,7 +469,7 @@ static inline void disable_each_ist(idt_entry_t *idt)
 
 extern idt_entry_t idt_table[];
 extern idt_entry_t *idt_tables[];
 
-DECLARE_PER_CPU(root_pgentry_t *, root_pgt);
+DECLARE_PER_CPU(struct tss_struct, init_tss);
 
 extern void write_ptbase(struct vcpu *v);
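
For reviewers who have not followed the earlier patches in this series: the
conversion above uses the mfn_t-based page table interface introduced there
(alloc_xen_pagetable_new, map_xen_pagetable_new, UNMAP_XEN_PAGETABLE_NEW,
free_xen_pagetable_new). The sketch below is purely illustrative and not part
of the patch -- the function name example_build_root_pgt is made up -- but it
shows the allocate/map/use/unmap lifecycle that setup_cpu_root_pgt and
cleanup_cpu_root_pgt now follow, where only the MFN is kept long-term and a
mapping is created only around actual accesses:

    /* Illustrative only; error handling and most table setup trimmed. */
    static int example_build_root_pgt(mfn_t *out_mfn)
    {
        mfn_t mfn = alloc_xen_pagetable_new();  /* INVALID_MFN on failure */
        root_pgentry_t *rpt;

        if ( mfn_eq(mfn, INVALID_MFN) )
            return -ENOMEM;

        /*
         * The page is no longer referenced via a permanent pointer, so it
         * has to be mapped explicitly before it can be written.
         */
        rpt = map_xen_pagetable_new(mfn);
        clear_page(rpt);
        rpt[root_table_offset(RO_MPT_VIRT_START)] =
            idle_pg_table[root_table_offset(RO_MPT_VIRT_START)];
        UNMAP_XEN_PAGETABLE_NEW(rpt);           /* drop the transient mapping */

        *out_mfn = mfn;                         /* only the MFN is stored */
        return 0;
    }

The mapcache_override_current(INVALID_VCPU)/mapcache_override_current(NULL)
pair in paravirt_ctxt_switch_to is the "force map_domain_page to use the
direct map" measure from the commit message: presumably map_xen_pagetable_new
is backed by map_domain_page in this series, and the per-domain mapcache must
not be used at the very moment its L4 slot is being switched.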