From patchwork Tue Jan 21 12:00:09 2020
X-Patchwork-Submitter: Paul Durrant <pdurrant@amazon.com>
X-Patchwork-Id: 11343639
From: Paul Durrant <pdurrant@amazon.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 21 Jan 2020 12:00:09 +0000
Message-ID: <20200121120009.1767-4-pdurrant@amazon.com>
In-Reply-To: <20200121120009.1767-1-pdurrant@amazon.com>
References: <20200121120009.1767-1-pdurrant@amazon.com>
X-Mailer: git-send-email 2.20.1
Subject: [Xen-devel] [PATCH 3/3] x86 / vmx: use a 'normal' domheap page for
 APIC_DEFAULT_PHYS_BASE
Cc: Kevin Tian, Stefano Stabellini, Julien Grall, Jun Nakajima, Wei Liu,
 Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Paul Durrant,
 Ian Jackson, Roger Pau Monné

vmx_alloc_vlapic_mapping() currently contains some very odd-looking code
that allocates a MEMF_no_owner domheap page and then shares it with the
guest as if it were a xenheap page. This then requires
vmx_free_vlapic_mapping() to call a special function in the mm code:
free_shared_domheap_page().

By using a 'normal' domheap page (i.e. by not passing MEMF_no_owner to
alloc_domheap_page()), the odd-looking code in vmx_alloc_vlapic_mapping()
can simply use get_page_and_type() to set up a writable mapping before
insertion in the P2M, and vmx_free_vlapic_mapping() can simply release the
page using put_page_alloc_ref() followed by put_page_and_type(). This
allows free_shared_domheap_page() to be purged.

There is, however, some fall-out from this simplification:

- alloc_domheap_page() will now call assign_pages() and run into the fact
  that 'max_pages' is not set until some time after domain_create(). To
  avoid an allocation failure, assign_pages() is modified to ignore the
  max_pages limit while 'creation_finished' is false. That flag is not set
  to true until domain_unpause_by_systemcontroller() is called, so the
  guest cannot run (and hence cannot itself cause memory allocation)
  before the limit is enforced.

- Because the domheap page is no longer a pseudo-xenheap page, its
  reference counting will prevent the domain from being destroyed. The
  call to vmx_free_vlapic_mapping() is therefore moved from the
  domain_destroy() method into the domain_relinquish_resources() method.
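In short, the new code pairs the page references up as follows (a minimal
sketch distilled from the hunks below, with the error handling elided):

    /* Creation: vmx_alloc_vlapic_mapping() */
    pg = alloc_domheap_page(d, 0);               /* takes the allocation ref */
    get_page_and_type(pg, d, PGT_writable_page); /* takes a writable type ref */

    /*
     * Teardown: vmx_free_vlapic_mapping(), now called from the
     * domain_relinquish_resources() method.
     */
    put_page_alloc_ref(pg);   /* drops the allocation ref */
    put_page_and_type(pg);    /* drops the type ref; last ref frees the page */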
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jun Nakajima
Cc: Kevin Tian
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Wei Liu
Cc: "Roger Pau Monné"
Cc: George Dunlap
Cc: Ian Jackson
Cc: Julien Grall
Cc: Konrad Rzeszutek Wilk
Cc: Stefano Stabellini
---
 xen/arch/x86/hvm/vmx/vmx.c | 29 ++++++++++++++++++++++-------
 xen/arch/x86/mm.c          | 10 ----------
 xen/common/page_alloc.c    |  3 ++-
 xen/include/asm-x86/mm.h   |  2 --
 4 files changed, 24 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 3fd3ac61e1..a2e6081485 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -421,10 +421,6 @@ static int vmx_domain_initialise(struct domain *d)
 }
 
 static void vmx_domain_relinquish_resources(struct domain *d)
-{
-}
-
-static void vmx_domain_destroy(struct domain *d)
 {
     if ( !has_vlapic(d) )
         return;
@@ -432,6 +428,10 @@ static void vmx_domain_destroy(struct domain *d)
     vmx_free_vlapic_mapping(d);
 }
 
+static void vmx_domain_destroy(struct domain *d)
+{
+}
+
 static int vmx_vcpu_initialise(struct vcpu *v)
 {
     int rc;
@@ -3034,12 +3034,22 @@ static int vmx_alloc_vlapic_mapping(struct domain *d)
     if ( !cpu_has_vmx_virtualize_apic_accesses )
         return 0;
 
-    pg = alloc_domheap_page(d, MEMF_no_owner);
+    pg = alloc_domheap_page(d, 0);
     if ( !pg )
         return -ENOMEM;
+
+    if ( !get_page_and_type(pg, d, PGT_writable_page) )
+    {
+        /*
+         * The domain can't possibly know about this page yet, so failure
+         * here is a clear indication of something fishy going on.
+         */
+        domain_crash(d);
+        return -ENODATA;
+    }
+
     mfn = page_to_mfn(pg);
     clear_domain_page(mfn);
-    share_xen_page_with_guest(pg, d, SHARE_rw);
     d->arch.hvm.vmx.apic_access_mfn = mfn;
 
     return set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE), mfn,
@@ -3052,7 +3062,12 @@ static void vmx_free_vlapic_mapping(struct domain *d)
     mfn_t mfn = d->arch.hvm.vmx.apic_access_mfn;
 
     if ( !mfn_eq(mfn, INVALID_MFN) )
-        free_shared_domheap_page(mfn_to_page(mfn));
+    {
+        struct page_info *pg = mfn_to_page(mfn);
+
+        put_page_alloc_ref(pg);
+        put_page_and_type(pg);
+    }
 }
 
 static void vmx_install_vlapic_mapping(struct vcpu *v)
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 654190e9e9..2a6d2e8af9 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -496,16 +496,6 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
     spin_unlock(&d->page_alloc_lock);
 }
 
-void free_shared_domheap_page(struct page_info *page)
-{
-    put_page_alloc_ref(page);
-    if ( !test_and_clear_bit(_PGC_xen_heap, &page->count_info) )
-        ASSERT_UNREACHABLE();
-    page->u.inuse.type_info = 0;
-    page_set_owner(page, NULL);
-    free_domheap_page(page);
-}
-
 void make_cr3(struct vcpu *v, mfn_t mfn)
 {
     struct domain *d = v->domain;
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 919a270587..ef327072ed 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2269,7 +2269,8 @@ int assign_pages(
 
     if ( !(memflags & MEMF_no_refcount) )
     {
-        if ( unlikely((d->tot_pages + (1 << order)) > d->max_pages) )
+        if ( unlikely((d->tot_pages + (1 << order)) > d->max_pages) &&
+             d->creation_finished )
         {
             gprintk(XENLOG_INFO, "Over-allocation for domain %u: "
                     "%u > %u\n", d->domain_id,
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 2ca8882ad0..e429f38228 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -317,8 +317,6 @@ struct page_info
 
 #define maddr_get_owner(ma)     (page_get_owner(maddr_to_page((ma))))
 
-extern void free_shared_domheap_page(struct page_info *page);
-
 #define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
 extern unsigned long max_page;
 extern unsigned long total_pages;
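
For completeness, the effect of the assign_pages() change can be modelled
stand-alone (a hypothetical illustration only, not part of the patch: the
struct below mirrors just the three struct domain fields involved, and
over_allocation() is a made-up helper rather than a Xen function):

    #include <stdbool.h>
    #include <stdio.h>

    struct domain {
        unsigned int tot_pages;
        unsigned int max_pages;   /* not yet set during early domain creation */
        bool creation_finished;   /* set on first toolstack unpause */
    };

    /* Mirrors the new check: refuse only once creation has finished. */
    static bool over_allocation(const struct domain *d, unsigned int order)
    {
        return (d->tot_pages + (1u << order)) > d->max_pages &&
               d->creation_finished;
    }

    int main(void)
    {
        struct domain d = { .tot_pages = 1, .max_pages = 0 };

        /* During creation: allocation permitted despite max_pages == 0. */
        printf("%d\n", over_allocation(&d, 0));  /* prints 0 */

        /* After domain_unpause_by_systemcontroller(): limit enforced. */
        d.creation_finished = true;
        printf("%d\n", over_allocation(&d, 0));  /* prints 1 */

        return 0;
    }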