From patchwork Thu Jan 23 12:21:40 2020
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 11347579
From: Paul Durrant
Date: Thu, 23 Jan 2020 12:21:40 +0000
Message-ID: <20200123122141.1419-4-pdurrant@amazon.com>
In-Reply-To: <20200123122141.1419-1-pdurrant@amazon.com>
References: <20200123122141.1419-1-pdurrant@amazon.com>
Subject: [Xen-devel] [PATCH v2 3/3] x86 / vmx: use a 'normal' domheap page for APIC_DEFAULT_PHYS_BASE
List-Id: Xen developer discussion
Cc: Kevin Tian, Stefano Stabellini, Julien Grall, Jun Nakajima, Wei Liu,
 Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Paul Durrant,
 Ian Jackson, Roger Pau Monné

vmx_alloc_vlapic_mapping() currently contains some very odd looking code
that allocates a MEMF_no_owner domheap page and then shares it with the
guest as if it were a xenheap page. This then requires
vmx_free_vlapic_mapping() to call a special function in the mm code:
free_shared_domheap_page().

By using a 'normal' domheap page (i.e. by not passing MEMF_no_owner to
alloc_domheap_page()), the odd looking code in vmx_alloc_vlapic_mapping()
can simply use get_page_and_type() to set up a writable mapping before
insertion in the P2M, and vmx_free_vlapic_mapping() can simply release the
page using put_page_alloc_ref() followed by put_page_and_type(). This then
allows free_shared_domheap_page() to be purged.

There is, however, some fall-out from this simplification:

- alloc_domheap_page() will now call assign_pages() and run into the fact
  that 'max_pages' is not set until some time after domain_create(). To
  avoid an allocation failure, domain_create() is modified to set
  max_pages to an initial value, sufficient to cover any domheap
  allocations required to complete domain creation. The value will be set
  to the 'real' max_pages when the tool-stack later performs the
  XEN_DOMCTL_max_mem operation, thus allowing the rest of the domain's
  memory to be allocated.

- Because the domheap page is no longer a pseudo-xenheap page, the
  reference counting will prevent the domain from being destroyed. Thus
  the call to vmx_free_vlapic_mapping() is moved from the domain_destroy()
  method into the domain_relinquish_resources() method. Whilst in the
  area, make the domain_destroy() method an optional alternative_vcall()
  (since it will no longer perform any function in VMX and is stubbed in
  SVM anyway).
Signed-off-by: Paul Durrant
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Wei Liu
Cc: "Roger Pau Monné"
Cc: George Dunlap
Cc: Ian Jackson
Cc: Julien Grall
Cc: Konrad Rzeszutek Wilk
Cc: Stefano Stabellini
Cc: Jun Nakajima
Cc: Kevin Tian

v2:
 - Set an initial value for max_pages rather than avoiding the check in
   assign_pages()
 - Make domain_destroy() optional
---
 xen/arch/x86/hvm/hvm.c     |  4 +++-
 xen/arch/x86/hvm/svm/svm.c |  5 -----
 xen/arch/x86/hvm/vmx/vmx.c | 25 ++++++++++++++++++++-----
 xen/arch/x86/mm.c          | 10 ----------
 xen/common/domain.c        |  8 ++++++++
 xen/include/asm-x86/mm.h   |  2 --
 6 files changed, 31 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index e51c077269..d2610f5f01 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -746,7 +746,9 @@ void hvm_domain_destroy(struct domain *d)
 
     hvm_destroy_cacheattr_region_list(d);
 
-    hvm_funcs.domain_destroy(d);
+    if ( hvm_funcs.domain_destroy )
+        alternative_vcall(hvm_funcs.domain_destroy, d);
+
     rtc_deinit(d);
     stdvga_deinit(d);
     vioapic_deinit(d);
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index b1c376d455..b7f67f9f03 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1155,10 +1155,6 @@ static int svm_domain_initialise(struct domain *d)
     return 0;
 }
 
-static void svm_domain_destroy(struct domain *d)
-{
-}
-
 static int svm_vcpu_initialise(struct vcpu *v)
 {
     int rc;
@@ -2425,7 +2421,6 @@ static struct hvm_function_table __initdata svm_function_table = {
     .cpu_up = svm_cpu_up,
     .cpu_down = svm_cpu_down,
     .domain_initialise = svm_domain_initialise,
-    .domain_destroy = svm_domain_destroy,
     .vcpu_initialise = svm_vcpu_initialise,
     .vcpu_destroy = svm_vcpu_destroy,
     .save_cpu_ctxt = svm_save_vmcb_ctxt,
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 8706954d73..f76fdd4f96 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -420,7 +420,7 @@ static int vmx_domain_initialise(struct domain *d)
     return 0;
 }
 
-static void vmx_domain_destroy(struct domain *d)
+static void vmx_domain_relinquish_resources(struct domain *d)
 {
     if ( !has_vlapic(d) )
         return;
@@ -2241,7 +2241,7 @@ static struct hvm_function_table __initdata vmx_function_table = {
     .cpu_up_prepare = vmx_cpu_up_prepare,
     .cpu_dead = vmx_cpu_dead,
     .domain_initialise = vmx_domain_initialise,
-    .domain_destroy = vmx_domain_destroy,
+    .domain_relinquish_resources = vmx_domain_relinquish_resources,
     .vcpu_initialise = vmx_vcpu_initialise,
     .vcpu_destroy = vmx_vcpu_destroy,
     .save_cpu_ctxt = vmx_save_vmcs_ctxt,
@@ -3029,12 +3029,22 @@ static int vmx_alloc_vlapic_mapping(struct domain *d)
     if ( !cpu_has_vmx_virtualize_apic_accesses )
         return 0;
 
-    pg = alloc_domheap_page(d, MEMF_no_owner);
+    pg = alloc_domheap_page(d, 0);
     if ( !pg )
         return -ENOMEM;
+
+    if ( !get_page_and_type(pg, d, PGT_writable_page) )
+    {
+        /*
+         * The domain can't possibly know about this page yet, so failure
+         * here is a clear indication of something fishy going on.
+         */
+        domain_crash(d);
+        return -ENODATA;
+    }
+
     mfn = page_to_mfn(pg);
     clear_domain_page(mfn);
-    share_xen_page_with_guest(pg, d, SHARE_rw);
     d->arch.hvm.vmx.apic_access_mfn = mfn;
 
     return set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE), mfn,
@@ -3048,7 +3058,12 @@ static void vmx_free_vlapic_mapping(struct domain *d)
     d->arch.hvm.vmx.apic_access_mfn = INVALID_MFN;
 
     if ( !mfn_eq(mfn, INVALID_MFN) )
-        free_shared_domheap_page(mfn_to_page(mfn));
+    {
+        struct page_info *pg = mfn_to_page(mfn);
+
+        put_page_alloc_ref(pg);
+        put_page_and_type(pg);
+    }
 }
 
 static void vmx_install_vlapic_mapping(struct vcpu *v)
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 654190e9e9..2a6d2e8af9 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -496,16 +496,6 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
     spin_unlock(&d->page_alloc_lock);
 }
 
-void free_shared_domheap_page(struct page_info *page)
-{
-    put_page_alloc_ref(page);
-    if ( !test_and_clear_bit(_PGC_xen_heap, &page->count_info) )
-        ASSERT_UNREACHABLE();
-    page->u.inuse.type_info = 0;
-    page_set_owner(page, NULL);
-    free_domheap_page(page);
-}
-
 void make_cr3(struct vcpu *v, mfn_t mfn)
 {
     struct domain *d = v->domain;
diff --git a/xen/common/domain.c b/xen/common/domain.c
index ee3f9ffd3e..30c777acb8 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -339,6 +339,8 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
     return arch_sanitise_domain_config(config);
 }
 
+#define DOMAIN_INIT_PAGES 1
+
 struct domain *domain_create(domid_t domid,
                              struct xen_domctl_createdomain *config,
                              bool is_priv)
@@ -441,6 +443,12 @@ struct domain *domain_create(domid_t domid,
         radix_tree_init(&d->pirq_tree);
     }
 
+    /*
+     * Allow a limited number of special pages to be allocated for the
+     * domain
+     */
+    d->max_pages = DOMAIN_INIT_PAGES;
+
     if ( (err = arch_domain_create(d, config)) != 0 )
         goto fail;
     init_status |= INIT_arch;
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 2ca8882ad0..e429f38228 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -317,8 +317,6 @@ struct page_info
 
 #define maddr_get_owner(ma) (page_get_owner(maddr_to_page((ma))))
 
-extern void free_shared_domheap_page(struct page_info *page);
-
 #define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
 extern unsigned long max_page;
 extern unsigned long total_pages;