From patchwork Mon Feb 3 10:56:51 2020
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 11362421
From: Paul Durrant
Date: Mon, 3 Feb 2020 10:56:51 +0000
Message-ID: <20200203105654.22998-2-pdurrant@amazon.com>
In-Reply-To: <20200203105654.22998-1-pdurrant@amazon.com>
References: <20200203105654.22998-1-pdurrant@amazon.com>
Subject: [Xen-devel] [PATCH v9 1/4] x86 / vmx: move teardown from domain_destroy()...
... to domain_relinquish_resources().

The teardown code frees the APICv page. This does not need to be done
late, so do it in domain_relinquish_resources() rather than
domain_destroy().

Signed-off-by: Paul Durrant
Reviewed-by: Jan Beulich
Reviewed-by: Kevin Tian
---
Cc: Jun Nakajima
Cc: Andrew Cooper
Cc: Wei Liu
Cc: "Roger Pau Monné"
Cc: George Dunlap

v4:
 - New in v4 (disaggregated from v3 patch #3)
---
 xen/arch/x86/hvm/vmx/vmx.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index b262d38a7c..606f3dc2eb 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -419,7 +419,7 @@ static int vmx_domain_initialise(struct domain *d)
     return 0;
 }
 
-static void vmx_domain_destroy(struct domain *d)
+static void vmx_domain_relinquish_resources(struct domain *d)
 {
     if ( !has_vlapic(d) )
         return;
@@ -2240,7 +2240,7 @@ static struct hvm_function_table __initdata vmx_function_table = {
     .cpu_up_prepare = vmx_cpu_up_prepare,
     .cpu_dead = vmx_cpu_dead,
     .domain_initialise = vmx_domain_initialise,
-    .domain_destroy = vmx_domain_destroy,
+    .domain_relinquish_resources = vmx_domain_relinquish_resources,
     .vcpu_initialise = vmx_vcpu_initialise,
     .vcpu_destroy = vmx_vcpu_destroy,
     .save_cpu_ctxt = vmx_save_vmcs_ctxt,

From patchwork Mon Feb 3 10:56:52 2020
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 11362423
From: Paul Durrant
Date: Mon, 3 Feb 2020 10:56:52 +0000
Message-ID: <20200203105654.22998-3-pdurrant@amazon.com>
In-Reply-To: <20200203105654.22998-1-pdurrant@amazon.com>
References: <20200203105654.22998-1-pdurrant@amazon.com>
Subject: [Xen-devel] [PATCH v9 2/4] add a domain_tot_pages() helper function

This patch adds a new domain_tot_pages() inline helper function into
sched.h, which will be needed by a subsequent
patch. No functional change.

NOTE: While modifying the comment for 'tot_pages' in sched.h this patch
      makes some cosmetic fixes to surrounding comments.

Suggested-by: Jan Beulich
Signed-off-by: Paul Durrant
Reviewed-by: Jan Beulich
Acked-by: Julien Grall
---
Cc: Andrew Cooper
Cc: Wei Liu
Cc: "Roger Pau Monné"
Cc: George Dunlap
Cc: Ian Jackson
Cc: Julien Grall
Cc: Konrad Rzeszutek Wilk
Cc: Stefano Stabellini
Cc: George Dunlap
Cc: Tim Deegan

v9:
 - Fix missing changes in PV shim
 - Dropped some comment changes

v8:
 - New in v8
---
 xen/arch/arm/arm64/domctl.c     |  2 +-
 xen/arch/x86/domain.c           |  2 +-
 xen/arch/x86/mm.c               |  2 +-
 xen/arch/x86/mm/p2m-pod.c       | 10 +++++-----
 xen/arch/x86/mm/shadow/common.c |  2 +-
 xen/arch/x86/msi.c              |  2 +-
 xen/arch/x86/numa.c             |  2 +-
 xen/arch/x86/pv/dom0_build.c    | 25 +++++++++++++------------
 xen/arch/x86/pv/domain.c        |  2 +-
 xen/arch/x86/pv/shim.c          |  4 ++--
 xen/common/domctl.c             |  2 +-
 xen/common/grant_table.c        |  4 ++--
 xen/common/keyhandler.c         |  2 +-
 xen/common/memory.c             |  2 +-
 xen/common/page_alloc.c         | 15 ++++++++-------
 xen/include/public/memory.h     |  4 ++--
 xen/include/xen/sched.h         | 24 ++++++++++++++++++------
 17 files changed, 60 insertions(+), 46 deletions(-)

diff --git a/xen/arch/arm/arm64/domctl.c b/xen/arch/arm/arm64/domctl.c
index ab8781fb91..0de89b42c4 100644
--- a/xen/arch/arm/arm64/domctl.c
+++ b/xen/arch/arm/arm64/domctl.c
@@ -18,7 +18,7 @@ static long switch_mode(struct domain *d, enum domain_type type)
     if ( d == NULL )
         return -EINVAL;
-    if ( d->tot_pages != 0 )
+    if ( domain_tot_pages(d) != 0 )
         return -EBUSY;
     if ( d->arch.type == type )
         return 0;

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 28fefa1f81..643c23ffb0 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -218,7 +218,7 @@ void dump_pageframe_info(struct domain *d)
 
     printk("Memory pages belonging to domain %u:\n", d->domain_id);
 
-    if ( d->tot_pages >= 10 && d->is_dying < DOMDYING_dead )
+    if ( domain_tot_pages(d) >= 10 && d->is_dying < DOMDYING_dead )
    {
        printk("    DomPage list too long to display\n");
    }

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index f50c065af3..e1b041e2df 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4870,7 +4870,7 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         else if ( rc >= 0 )
         {
             p2m = p2m_get_hostp2m(d);
-            target.tot_pages = d->tot_pages;
+            target.tot_pages = domain_tot_pages(d);
             target.pod_cache_pages = p2m->pod.count;
             target.pod_entries = p2m->pod.entry_count;

diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 096e2773fb..f2c9409568 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -302,7 +302,7 @@ out:
      * The following equations should hold:
      *  0 <= P <= T <= B <= M
      *  d->arch.p2m->pod.entry_count == B - P
-     *  d->tot_pages == P + d->arch.p2m->pod.count
+     *  domain_tot_pages(d) == P + d->arch.p2m->pod.count
      *
      * Now we have the following potential cases to cover:
      *  B < ...
@@ ... @@
-    populated = d->tot_pages - p2m->pod.count;
+    populated = domain_tot_pages(d) - p2m->pod.count;
 
     if ( populated > 0 && p2m->pod.entry_count == 0 )
         goto out;
@@ -348,7 +348,7 @@ p2m_pod_set_mem_target(struct domain *d, unsigned long target)
      * T' < B: Don't reduce the cache size; let the balloon driver
      *         take care of it.
      */
-    if ( target < d->tot_pages )
+    if ( target < domain_tot_pages(d) )
         goto out;
 
     pod_target = target - populated;
@@ -1231,8 +1231,8 @@ out_of_memory:
     pod_unlock(p2m);
     printk("%s: Dom%d out of PoD memory! (tot=%"PRIu32" ents=%ld dom%d)\n",
-           __func__, d->domain_id, d->tot_pages, p2m->pod.entry_count,
-           current->domain->domain_id);
+           __func__, d->domain_id, domain_tot_pages(d),
+           p2m->pod.entry_count, current->domain->domain_id);
     domain_crash(d);
     return false;
 out_fail:

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 6212ec2c4a..cba3ab1eba 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1256,7 +1256,7 @@ static unsigned int sh_min_allocation(const struct domain *d)
      * up of slot zero and an LAPIC page), plus one for HVM's 1-to-1 pagetable.
      */
     return shadow_min_acceptable_pages(d) +
-           max(max(d->tot_pages / 256,
+           max(max(domain_tot_pages(d) / 256,
                    is_hvm_domain(d) ? CONFIG_PAGING_LEVELS + 2 : 0U) +
               is_hvm_domain(d),
               d->arch.paging.shadow.p2m_pages);

diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
index df97ce0c72..2fabaaa155 100644
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -991,7 +991,7 @@ static int msix_capability_init(struct pci_dev *dev,
                  seg, bus, slot, func, d->domain_id);
         if ( !is_hardware_domain(d) &&
              /* Assume a domain without memory has no mappings yet. */
-             (!is_hardware_domain(currd) || d->tot_pages) )
+             (!is_hardware_domain(currd) || domain_tot_pages(d)) )
             domain_crash(d);
         /* XXX How to deal with existing mappings? */
 }

diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index 7e1f563012..7f0d27c153 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -419,7 +419,7 @@ static void dump_numa(unsigned char key)
     {
         process_pending_softirqs();
 
-        printk("Domain %u (total: %u):\n", d->domain_id, d->tot_pages);
+        printk("Domain %u (total: %u):\n", d->domain_id, domain_tot_pages(d));
 
         for_each_online_node ( i )
             page_num_node[i] = 0;

diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index 9a97cf4abf..5678da782d 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -110,8 +110,9 @@ static __init void setup_pv_physmap(struct domain *d, unsigned long pgtbl_pfn,
 
     while ( vphysmap_start < vphysmap_end )
     {
-        if ( d->tot_pages + ((round_pgup(vphysmap_end) - vphysmap_start)
-             >> PAGE_SHIFT) + 3 > nr_pages )
+        if ( domain_tot_pages(d) +
+             ((round_pgup(vphysmap_end) - vphysmap_start) >> PAGE_SHIFT) +
+             3 > nr_pages )
             panic("Dom0 allocation too small for initial P->M table\n");
 
         if ( pl1e )
@@ -264,7 +265,7 @@ static struct page_info * __init alloc_chunk(struct domain *d,
         {
             struct page_info *pg2;
 
-            if ( d->tot_pages + (1 << order) > d->max_pages )
+            if ( domain_tot_pages(d) + (1 << order) > d->max_pages )
                 continue;
 
             pg2 = alloc_domheap_pages(d, order, MEMF_exact_node | MEMF_no_scrub);
 
             if ( pg2 > page )
@@ -500,13 +501,13 @@ int __init dom0_construct_pv(struct domain *d,
     if ( page == NULL )
         panic("Not enough RAM for domain 0 allocation\n");
     alloc_spfn = mfn_x(page_to_mfn(page));
-    alloc_epfn = alloc_spfn + d->tot_pages;
+    alloc_epfn = alloc_spfn + domain_tot_pages(d);
 
     if ( initrd_len )
     {
         initrd_pfn = vinitrd_start ?
                     (vinitrd_start - v_start) >> PAGE_SHIFT :
-                    d->tot_pages;
+                    domain_tot_pages(d);
         initrd_mfn = mfn = initrd->mod_start;
         count = PFN_UP(initrd_len);
         if ( d->arch.physaddr_bitsize &&
@@ -541,9 +542,9 @@ int __init dom0_construct_pv(struct domain *d,
     printk("PHYSICAL MEMORY ARRANGEMENT:\n"
            " Dom0 alloc.: %"PRIpaddr"->%"PRIpaddr,
           pfn_to_paddr(alloc_spfn), pfn_to_paddr(alloc_epfn));
-    if ( d->tot_pages < nr_pages )
+    if ( domain_tot_pages(d) < nr_pages )
         printk(" (%lu pages to be allocated)",
-               nr_pages - d->tot_pages);
+               nr_pages - domain_tot_pages(d));
     if ( initrd )
     {
         mpt_alloc = (paddr_t)initrd->mod_start << PAGE_SHIFT;
@@ -755,7 +756,7 @@ int __init dom0_construct_pv(struct domain *d,
     snprintf(si->magic, sizeof(si->magic), "xen-3.0-x86_%d%s",
              elf_64bit(&elf) ? 64 : 32, parms.pae ? "p" : "");
 
-    count = d->tot_pages;
+    count = domain_tot_pages(d);
 
     /* Set up the phys->machine table if not part of the initial mapping. */
     if ( parms.p2m_base != UNSET_ADDR )
@@ -786,7 +787,7 @@ int __init dom0_construct_pv(struct domain *d,
             process_pending_softirqs();
     }
     si->first_p2m_pfn = pfn;
-    si->nr_p2m_frames = d->tot_pages - count;
+    si->nr_p2m_frames = domain_tot_pages(d) - count;
     page_list_for_each ( page, &d->page_list )
     {
         mfn = mfn_x(page_to_mfn(page));
@@ -804,15 +805,15 @@ int __init dom0_construct_pv(struct domain *d,
                 process_pending_softirqs();
         }
     }
-    BUG_ON(pfn != d->tot_pages);
+    BUG_ON(pfn != domain_tot_pages(d));
 #ifndef NDEBUG
     alloc_epfn += PFN_UP(initrd_len) + si->nr_p2m_frames;
 #endif
     while ( pfn < nr_pages )
     {
-        if ( (page = alloc_chunk(d, nr_pages - d->tot_pages)) == NULL )
+        if ( (page = alloc_chunk(d, nr_pages - domain_tot_pages(d))) == NULL )
             panic("Not enough RAM for DOM0 reservation\n");
-        while ( pfn < d->tot_pages )
+        while ( pfn < domain_tot_pages(d) )
         {
             mfn = mfn_x(page_to_mfn(page));
 #ifndef NDEBUG

diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
index 4da0b2afff..c95652d1b8 100644
--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -173,7 +173,7 @@ int switch_compat(struct domain *d)
 
     BUILD_BUG_ON(offsetof(struct shared_info, vcpu_info) != 0);
 
-    if ( is_hvm_domain(d) || d->tot_pages != 0 )
+    if ( is_hvm_domain(d) || domain_tot_pages(d) != 0 )
         return -EACCES;
     if ( is_pv_32bit_domain(d) )
         return 0;

diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index 7a898fdbe5..f6d8794c62 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -268,7 +268,7 @@ void __init pv_shim_setup_dom(struct domain *d, l4_pgentry_t *l4start,
      * Set the max pages to the current number of pages to prevent the
      * guest from depleting the shim memory pool.
      */
-    d->max_pages = d->tot_pages;
+    d->max_pages = domain_tot_pages(d);
 }
 
 static void write_start_info(struct domain *d)
@@ -280,7 +280,7 @@ static void write_start_info(struct domain *d)
 
     snprintf(si->magic, sizeof(si->magic), "xen-3.0-x86_%s",
              is_pv_32bit_domain(d) ? "32p" : "64");
-    si->nr_pages = d->tot_pages;
+    si->nr_pages = domain_tot_pages(d);
     si->shared_info = virt_to_maddr(d->shared_info);
     si->flags = 0;
     BUG_ON(xen_hypercall_hvm_get_param(HVM_PARAM_STORE_PFN, &si->store_mfn));

diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 8b819f56e5..bdc24bbd7c 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -191,7 +191,7 @@ void getdomaininfo(struct domain *d, struct xen_domctl_getdomaininfo *info)
 
     xsm_security_domaininfo(d, info);
 
-    info->tot_pages = d->tot_pages;
+    info->tot_pages = domain_tot_pages(d);
     info->max_pages = d->max_pages;
     info->outstanding_pages = d->outstanding_pages;
     info->shr_pages = atomic_read(&d->shr_pages);

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 5536d282b9..8bee6b3b66 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -2261,7 +2261,7 @@ gnttab_transfer(
          * pages when it is dying.
          */
         if ( unlikely(e->is_dying) ||
-             unlikely(e->tot_pages >= e->max_pages) )
+             unlikely(domain_tot_pages(e) >= e->max_pages) )
         {
             spin_unlock(&e->page_alloc_lock);
@@ -2271,7 +2271,7 @@ gnttab_transfer(
             else
                 gdprintk(XENLOG_INFO,
                          "Transferee d%d has no headroom (tot %u, max %u)\n",
-                         e->domain_id, e->tot_pages, e->max_pages);
+                         e->domain_id, domain_tot_pages(e), e->max_pages);
 
             gop.status = GNTST_general_error;
             goto unlock_and_copyback;

diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index f50490d0f3..87bd145374 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -271,7 +271,7 @@ static void dump_domains(unsigned char key)
                atomic_read(&d->pause_count));
         printk("    nr_pages=%d xenheap_pages=%d shared_pages=%u paged_pages=%u "
                "dirty_cpus={%*pbl} max_pages=%u\n",
-               d->tot_pages, d->xenheap_pages, atomic_read(&d->shr_pages),
+               domain_tot_pages(d), d->xenheap_pages, atomic_read(&d->shr_pages),
               atomic_read(&d->paged_pages), CPUMASK_PR(d->dirty_cpumask),
               d->max_pages);
         printk("    handle=%02x%02x%02x%02x-%02x%02x-%02x%02x-"

diff --git a/xen/common/memory.c b/xen/common/memory.c
index c7d2bac452..38cb5d0bb4 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1267,7 +1267,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     switch ( op )
     {
     case XENMEM_current_reservation:
-        rc = d->tot_pages;
+        rc = domain_tot_pages(d);
         break;
     case XENMEM_maximum_reservation:
         rc = d->max_pages;

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 919a270587..bbd3163909 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -518,8 +518,8 @@ int domain_set_outstanding_pages(struct domain *d, unsigned long pages)
         goto out;
     }
 
-    /* disallow a claim not exceeding current tot_pages or above max_pages */
-    if ( (pages <= d->tot_pages) || (pages > d->max_pages) )
+    /* disallow a claim not exceeding domain_tot_pages() or above max_pages */
+    if ( (pages <= domain_tot_pages(d)) || (pages > d->max_pages) )
     {
         ret = -EINVAL;
         goto out;
@@ -532,9 +532,9 @@ int domain_set_outstanding_pages(struct domain *d, unsigned long pages)
 
     /*
      * Note, if domain has already allocated memory before making a claim
-     * then the claim must take tot_pages into account
+     * then the claim must take domain_tot_pages() into account
      */
-    claim = pages - d->tot_pages;
+    claim = pages - domain_tot_pages(d);
     if ( claim > avail_pages )
         goto out;
@@ -2269,11 +2269,12 @@ int assign_pages(
 
     if ( !(memflags & MEMF_no_refcount) )
     {
-        if ( unlikely((d->tot_pages + (1 << order)) > d->max_pages) )
+        unsigned int tot_pages = domain_tot_pages(d) + (1 << order);
+
+        if ( unlikely(tot_pages > d->max_pages) )
         {
             gprintk(XENLOG_INFO, "Over-allocation for domain %u: "
-                    "%u > %u\n", d->domain_id,
-                    d->tot_pages + (1 << order), d->max_pages);
+                    "%u > %u\n", d->domain_id, tot_pages, d->max_pages);
             rc = -E2BIG;
             goto out;
         }

diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index cfdda6e2a8..126d0ff06e 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -553,8 +553,8 @@ DEFINE_XEN_GUEST_HANDLE(xen_mem_sharing_op_t);
  *
  * Note that a valid claim may be staked even after memory has been
  * allocated for a domain. In this case, the claim is not incremental,
- * i.e. if the domain's tot_pages is 3, and a claim is staked for 10,
- * only 7 additional pages are claimed.
+ * i.e. if the domain's total page count is 3, and a claim is staked
+ * for 10, only 7 additional pages are claimed.
  *
  * Caller must be privileged or the hypercall fails.
 */

diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 7c5c437247..1b6d7b941f 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -364,12 +364,18 @@ struct domain
 
     spinlock_t       page_alloc_lock; /* protects all the following fields */
     struct page_list_head page_list;  /* linked list */
     struct page_list_head xenpage_list; /* linked list (size xenheap_pages) */
-    unsigned int     tot_pages;       /* number of pages currently possesed */
-    unsigned int     xenheap_pages;   /* # pages allocated from Xen heap */
-    unsigned int     outstanding_pages; /* pages claimed but not possessed */
-    unsigned int     max_pages;       /* maximum value for tot_pages */
-    atomic_t         shr_pages;       /* number of shared pages */
-    atomic_t         paged_pages;     /* number of paged-out pages */
+
+    /*
+     * This field should only be directly accessed by domain_adjust_tot_pages()
+     * and the domain_tot_pages() helper function defined below.
+     */
+    unsigned int     tot_pages;
+
+    unsigned int     xenheap_pages;     /* pages allocated from Xen heap */
+    unsigned int     outstanding_pages; /* pages claimed but not possessed */
+    unsigned int     max_pages;         /* maximum value for domain_tot_pages() */
+    atomic_t         shr_pages;         /* shared pages */
+    atomic_t         paged_pages;       /* paged-out pages */
 
     /* Scheduling. */
     void            *sched_priv;      /* scheduler-specific data */
@@ -539,6 +545,12 @@ struct domain
 #endif
 };
 
+/* Return number of pages currently possessed by the domain */
+static inline unsigned int domain_tot_pages(const struct domain *d)
+{
+    return d->tot_pages;
+}
+
 /* Protect updates/reads (resp.) of domain_list and domain_hash.
 */
 extern spinlock_t domlist_update_lock;
 extern rcu_read_lock_t domlist_read_lock;

From patchwork Mon Feb 3 10:56:53 2020
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 11362429
From: Paul Durrant
Date: Mon, 3 Feb 2020 10:56:53 +0000
Message-ID: <20200203105654.22998-4-pdurrant@amazon.com>
In-Reply-To: <20200203105654.22998-1-pdurrant@amazon.com>
References: <20200203105654.22998-1-pdurrant@amazon.com>
Subject: [Xen-devel] [PATCH v9 3/4] mm: make pages allocated with MEMF_no_refcount safe to assign
Currently it is unsafe to assign a domheap page allocated with
MEMF_no_refcount to a domain because the domain's 'tot_pages' will not
be incremented, but will be decremented when the page is freed (since
free_domheap_pages() has no way of telling that the increment was
skipped).

This patch allocates a new 'count_info' bit for a PGC_extra flag, which
is then used to mark pages when alloc_domheap_pages() is called with
MEMF_no_refcount. assign_pages() still needs to call
domain_adjust_tot_pages() to make sure the domain is appropriately
referenced, hence it is modified to do that for PGC_extra pages even if
it is passed MEMF_no_refcount.

The number of PGC_extra pages assigned to a domain is tracked in a new
'extra_pages' counter, which is then subtracted from 'tot_pages' in the
domain_tot_pages() helper. Thus 'normal' page assignments will still be
appropriately checked against 'max_pages'.
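The accounting problem described above, and the fix, can be modelled in a few lines of standalone C. This is a toy illustration only: 'struct dom' and the helpers below are invented here and are not Xen code.

```c
#include <assert.h>

/* Toy model of a domain's page accounting; not Xen code. */
struct dom {
    unsigned int tot_pages;   /* pages counted against max_pages */
    unsigned int extra_pages; /* PGC_extra pages */
};

/*
 * Pre-patch behaviour: an allocation with MEMF_no_refcount skipped the
 * tot_pages increment, but the free path decremented unconditionally,
 * so tot_pages could underflow.
 */
static void free_page_pre_patch(struct dom *d)
{
    d->tot_pages--; /* underflows if the matching increment was skipped */
}

/* Post-patch: a PGC_extra page bumps both counters on assignment... */
static void alloc_extra_page(struct dom *d)
{
    d->tot_pages++;
    d->extra_pages++;
}

/* ...and drops both on free, so the decrement is always balanced. */
static void free_extra_page(struct dom *d)
{
    assert(d->extra_pages > 0);
    d->extra_pages--;
    d->tot_pages--;
}

/* Mirrors the reworked domain_tot_pages(): extra pages are excluded
 * from the total that is checked against max_pages. */
static unsigned int domain_tot_pages(const struct dom *d)
{
    assert(d->extra_pages <= d->tot_pages);
    return d->tot_pages - d->extra_pages;
}
```

Assigning an 'extra' page leaves domain_tot_pages() unchanged, so such pages never eat into the 'max_pages' headroom, yet the free path can decrement without underflowing.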
Signed-off-by: Paul Durrant
Reviewed-by: Jan Beulich
Acked-by: Julien Grall
---
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Ian Jackson
Cc: Julien Grall
Cc: Konrad Rzeszutek Wilk
Cc: Stefano Stabellini
Cc: Wei Liu
Cc: Volodymyr Babchuk
Cc: "Roger Pau Monné"

v8:
 - Drop the idea of post-allocation assignment, adding an error path to
   steal_page() if it encounters a PGC_extra page
 - Tighten up the ASSERTs in assign_pages()

v7:
 - s/PGC_no_refcount/PGC_extra/g
 - Re-work allocation to account for 'extra' pages, also making it safe
   to assign PGC_extra pages post-allocation

v6:
 - Add an extra ASSERT into assign_pages() that PGC_no_refcount is not
   set if MEMF_no_refcount is clear
 - ASSERT that count_info is 0 in alloc_domheap_pages() and set to
   PGC_no_refcount rather than ORing

v5:
 - Make sure PGC_no_refcount is set before assign_pages() is called
 - Don't bother to clear PGC_no_refcount in free_domheap_pages() and
   drop ASSERT in free_heap_pages()
 - Don't latch count_info in free_heap_pages()

v4:
 - New in v4
---
 xen/arch/x86/mm.c        |  3 +-
 xen/common/page_alloc.c  | 63 +++++++++++++++++++++++++++++++---------
 xen/include/asm-arm/mm.h |  5 +++-
 xen/include/asm-x86/mm.h |  7 +++--
 xen/include/xen/sched.h  |  5 +++-
 5 files changed, 64 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index e1b041e2df..fd134edcde 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4217,7 +4217,8 @@ int steal_page(
     if ( !(owner = page_get_owner_and_reference(page)) )
         goto fail;

-    if ( owner != d || is_xen_heap_page(page) )
+    if ( owner != d || is_xen_heap_page(page) ||
+         (page->count_info & PGC_extra) )
         goto fail_put;

     /*
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index bbd3163909..1ac9d9c719 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2267,7 +2267,29 @@ int assign_pages(
         goto out;
     }

-    if ( !(memflags & MEMF_no_refcount) )
+#ifndef NDEBUG
+    {
+        unsigned int extra_pages = 0;
+
+        for ( i = 0; i < (1ul << order); i++ )
+        {
+            ASSERT(!(pg[i].count_info & ~PGC_extra));
+            if ( pg[i].count_info & PGC_extra )
+                extra_pages++;
+        }
+
+        ASSERT(!extra_pages ||
+               ((memflags & MEMF_no_refcount) &&
+                extra_pages == 1u << order));
+    }
+#endif
+
+    if ( pg[0].count_info & PGC_extra )
+    {
+        d->extra_pages += 1u << order;
+        memflags &= ~MEMF_no_refcount;
+    }
+    else if ( !(memflags & MEMF_no_refcount) )
     {
         unsigned int tot_pages = domain_tot_pages(d) + (1 << order);
@@ -2278,18 +2300,19 @@ int assign_pages(
             rc = -E2BIG;
             goto out;
         }
+    }

-        if ( unlikely(domain_adjust_tot_pages(d, 1 << order) == (1 << order)) )
+    if ( !(memflags & MEMF_no_refcount) &&
+         unlikely(domain_adjust_tot_pages(d, 1 << order) == (1 << order)) )
             get_knownalive_domain(d);
-    }

     for ( i = 0; i < (1 << order); i++ )
     {
         ASSERT(page_get_owner(&pg[i]) == NULL);
-        ASSERT(!pg[i].count_info);
         page_set_owner(&pg[i], d);
         smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
-        pg[i].count_info = PGC_allocated | 1;
+        pg[i].count_info =
+            (pg[i].count_info & PGC_extra) | PGC_allocated | 1;
         page_list_add_tail(&pg[i], &d->page_list);
     }

@@ -2315,11 +2338,6 @@ struct page_info *alloc_domheap_pages(

     if ( memflags & MEMF_no_owner )
         memflags |= MEMF_no_refcount;
-    else if ( (memflags & MEMF_no_refcount) && d )
-    {
-        ASSERT(!(memflags & MEMF_no_refcount));
-        return NULL;
-    }

     if ( !dma_bitsize )
         memflags &= ~MEMF_no_dma;
@@ -2332,11 +2350,23 @@ struct page_info *alloc_domheap_pages(
                                   memflags, d)) == NULL)) )
          return NULL;

-    if ( d && !(memflags & MEMF_no_owner) &&
-         assign_pages(d, pg, order, memflags) )
+    if ( d && !(memflags & MEMF_no_owner) )
     {
-        free_heap_pages(pg, order, memflags & MEMF_no_scrub);
-        return NULL;
+        if ( memflags & MEMF_no_refcount )
+        {
+            unsigned long i;
+
+            for ( i = 0; i < (1ul << order); i++ )
+            {
+                ASSERT(!pg[i].count_info);
+                pg[i].count_info = PGC_extra;
+            }
+        }
+        if ( assign_pages(d, pg, order, memflags) )
+        {
+            free_heap_pages(pg, order, memflags & MEMF_no_scrub);
+            return NULL;
+        }
     }

     return pg;
@@ -2384,6 +2414,11 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
                     BUG();
             }
             arch_free_heap_page(d, &pg[i]);
+            if ( pg[i].count_info & PGC_extra )
+            {
+                ASSERT(d->extra_pages);
+                d->extra_pages--;
+            }
         }

         drop_dom_ref = !domain_adjust_tot_pages(d, -(1 << order));
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 333efd3a60..7df91280bc 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -119,9 +119,12 @@ struct page_info
 #define PGC_state_offlined PG_mask(2, 9)
 #define PGC_state_free    PG_mask(3, 9)
 #define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st)
+/* Page is not reference counted */
+#define _PGC_extra        PG_shift(10)
+#define PGC_extra         PG_mask(1, 10)

 /* Count of references to this frame. */
-#define PGC_count_width   PG_shift(9)
+#define PGC_count_width   PG_shift(10)
 #define PGC_count_mask    ((1UL<<PGC_count_width)-1)
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ ... @@ struct page_info
 #define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st)
+/* Page is not reference counted */
+#define _PGC_extra        PG_shift(10)
+#define PGC_extra         PG_mask(1, 10)

- /* Count of references to this frame. */
-#define PGC_count_width   PG_shift(9)
+/* Count of references to this frame. */
+#define PGC_count_width   PG_shift(10)
 #define PGC_count_mask    ((1UL<<PGC_count_width)-1)
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ ... @@ struct domain
+    unsigned int     extra_pages;     /* pages not included in tot_pages */
@@ ... @@ static inline unsigned int domain_tot_pages(const struct domain *d)
-    return d->tot_pages;
+    ASSERT(d->extra_pages <= d->tot_pages);
+
+    return d->tot_pages - d->extra_pages;
 }

 /* Protect updates/reads (resp.) of domain_list and domain_hash.
*/

From patchwork Mon Feb 3 10:56:54 2020
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 11362427
From: Paul Durrant
Date: Mon, 3 Feb 2020 10:56:54 +0000
Message-ID: <20200203105654.22998-5-pdurrant@amazon.com>
In-Reply-To: <20200203105654.22998-1-pdurrant@amazon.com>
References: <20200203105654.22998-1-pdurrant@amazon.com>
Subject: [Xen-devel] [PATCH v9 4/4] x86 / vmx: use a MEMF_no_refcount domheap page for APIC_DEFAULT_PHYS_BASE
List-Id: Xen developer discussion
Cc: Kevin Tian, Wei Liu, Andrew Cooper, Paul Durrant, Jun Nakajima, Roger Pau Monné
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel"

vmx_alloc_vlapic_mapping() currently contains some very odd looking code that allocates a MEMF_no_owner domheap page and then shares it with the guest as if it were a xenheap page. This then requires vmx_free_vlapic_mapping() to call a special function in the mm code: free_shared_domheap_page().

By using a MEMF_no_refcount domheap page instead, the odd looking code in vmx_alloc_vlapic_mapping() can simply use get_page_and_type() to set up a writable mapping before insertion in the P2M, and vmx_free_vlapic_mapping() can simply release the page using put_page_alloc_ref() followed by put_page_and_type(). This then allows free_shared_domheap_page() to be purged.

Signed-off-by: Paul Durrant
Reviewed-by: Jan Beulich
Reviewed-by: Kevin Tian
---
Cc: Jun Nakajima
Cc: Andrew Cooper
Cc: Wei Liu
Cc: "Roger Pau Monné"

v4:
 - Use a MEMF_no_refcount page rather than a 'normal' page

v2:
 - Set an initial value for max_pages rather than avoiding the check in
   assign_pages()
 - Make domain_destroy() optional
---
 xen/arch/x86/hvm/vmx/vmx.c | 21 ++++++++++++++++++---
 xen/arch/x86/mm.c          | 10 ----------
 xen/include/asm-x86/mm.h   |  2 --
 3 files changed, 18 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 606f3dc2eb..7423d2421b 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3028,12 +3028,22 @@ static int vmx_alloc_vlapic_mapping(struct domain *d)
     if ( !cpu_has_vmx_virtualize_apic_accesses )
         return 0;

-    pg = alloc_domheap_page(d, MEMF_no_owner);
+    pg = alloc_domheap_page(d, MEMF_no_refcount);
     if ( !pg )
         return -ENOMEM;
+
+    if ( !get_page_and_type(pg, d, PGT_writable_page) )
+    {
+        /*
+         * The domain can't possibly know about this page yet, so failure
+         * here is a clear indication of something fishy going on.
+         */
+        domain_crash(d);
+        return -ENODATA;
+    }
+
     mfn = page_to_mfn(pg);
     clear_domain_page(mfn);
-    share_xen_page_with_guest(pg, d, SHARE_rw);
     d->arch.hvm.vmx.apic_access_mfn = mfn;

     return set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE), mfn,
@@ -3047,7 +3057,12 @@ static void vmx_free_vlapic_mapping(struct domain *d)
     d->arch.hvm.vmx.apic_access_mfn = _mfn(0);

     if ( !mfn_eq(mfn, _mfn(0)) )
-        free_shared_domheap_page(mfn_to_page(mfn));
+    {
+        struct page_info *pg = mfn_to_page(mfn);
+
+        put_page_alloc_ref(pg);
+        put_page_and_type(pg);
+    }
 }

 static void vmx_install_vlapic_mapping(struct vcpu *v)
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index fd134edcde..1e49bb0156 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -496,16 +496,6 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
     spin_unlock(&d->page_alloc_lock);
 }

-void free_shared_domheap_page(struct page_info *page)
-{
-    put_page_alloc_ref(page);
-    if ( !test_and_clear_bit(_PGC_xen_heap, &page->count_info) )
-        ASSERT_UNREACHABLE();
-    page->u.inuse.type_info = 0;
-    page_set_owner(page, NULL);
-    free_domheap_page(page);
-}
-
 void make_cr3(struct vcpu *v, mfn_t mfn)
 {
     struct domain *d = v->domain;
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 06d64d494d..fafb3af46d 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -320,8 +320,6 @@ struct page_info

 #define maddr_get_owner(ma) (page_get_owner(maddr_to_page((ma))))

-extern void free_shared_domheap_page(struct page_info *page);
-
 #define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
 extern unsigned long max_page;
 extern unsigned long total_pages;