From patchwork Thu Jan 30 14:57:43 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 11358211
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Date: Thu, 30 Jan 2020 14:57:43 +0000
Message-ID: <20200130145745.1306-3-pdurrant@amazon.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200130145745.1306-1-pdurrant@amazon.com>
References: <20200130145745.1306-1-pdurrant@amazon.com>
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH v8 2/4] add a domain_tot_pages() helper function
List-Id: Xen developer discussion
Cc: Stefano
 Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
 Andrew Cooper, Paul Durrant, Ian Jackson, Tim Deegan, Roger Pau Monné

This patch adds a new domain_tot_pages() inline helper function into
sched.h, which will be needed by a subsequent patch. No functional change.

NOTE: While modifying the comment for 'tot_pages' in sched.h this patch
      makes some cosmetic fixes to surrounding comments.

Suggested-by: Jan Beulich
Signed-off-by: Paul Durrant
---
Cc: Andrew Cooper
Cc: Wei Liu
Cc: "Roger Pau Monné"
Cc: George Dunlap
Cc: Ian Jackson
Cc: Julien Grall
Cc: Konrad Rzeszutek Wilk
Cc: Stefano Stabellini
Cc: George Dunlap
Cc: Tim Deegan

v8:
 - New in v8
---
 xen/arch/x86/domain.c           |  2 +-
 xen/arch/x86/mm.c               |  6 +++---
 xen/arch/x86/mm/p2m-pod.c       | 10 +++++-----
 xen/arch/x86/mm/shadow/common.c |  2 +-
 xen/arch/x86/msi.c              |  2 +-
 xen/arch/x86/numa.c             |  2 +-
 xen/arch/x86/pv/dom0_build.c    | 25 +++++++++++++------------
 xen/arch/x86/pv/domain.c        |  2 +-
 xen/common/domctl.c             |  2 +-
 xen/common/grant_table.c        |  4 ++--
 xen/common/keyhandler.c         |  2 +-
 xen/common/memory.c             |  4 ++--
 xen/common/page_alloc.c         | 15 ++++++++-------
 xen/include/public/memory.h     |  4 ++--
 xen/include/xen/sched.h         | 24 ++++++++++++++++------
 15 files changed, 60 insertions(+), 46 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 28fefa1f81..643c23ffb0 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -218,7 +218,7 @@ void dump_pageframe_info(struct domain *d)

     printk("Memory pages belonging to domain %u:\n", d->domain_id);

-    if ( d->tot_pages >= 10 && d->is_dying < DOMDYING_dead )
+    if ( domain_tot_pages(d) >= 10 && d->is_dying < DOMDYING_dead )
     {
         printk("    DomPage list too long to display\n");
     }
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index f50c065af3..8bb66cf30c 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4194,8 +4194,8 @@ long do_mmu_update(
  *  - page caching attributes cleaned up
  *  - removed from the domain's page_list
  *
- * If MEMF_no_refcount is not set, the domain's tot_pages will be
- * adjusted. If this results in the page count falling to 0,
+ * If MEMF_no_refcount is not set, the domain_adjust_tot_pages() will
+ * be called. If this results in the page count falling to 0,
  * put_domain() will be called.
  *
  * The caller should either call free_domheap_page() to free the
@@ -4870,7 +4870,7 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         else if ( rc >= 0 )
         {
             p2m = p2m_get_hostp2m(d);
-            target.tot_pages       = d->tot_pages;
+            target.tot_pages       = domain_tot_pages(d);
             target.pod_cache_pages = p2m->pod.count;
             target.pod_entries     = p2m->pod.entry_count;
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 096e2773fb..f2c9409568 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -302,7 +302,7 @@ out:
  * The following equations should hold:
  *  0 <= P <= T <= B <= M
  *  d->arch.p2m->pod.entry_count == B - P
- *  d->tot_pages == P + d->arch.p2m->pod.count
+ *  domain_tot_pages(d) == P + d->arch.p2m->pod.count
  *
  * Now we have the following potential cases to cover:
  *     B
-    populated = d->tot_pages - p2m->pod.count;
+    populated = domain_tot_pages(d) - p2m->pod.count;

     if ( populated > 0 && p2m->pod.entry_count == 0 )
         goto out;
@@ -348,7 +348,7 @@ p2m_pod_set_mem_target(struct domain *d, unsigned long target)
      *   T' < B: Don't reduce the cache size; let the balloon driver
      *           take care of it.
      */
-    if ( target < d->tot_pages )
+    if ( target < domain_tot_pages(d) )
         goto out;

     pod_target = target - populated;
@@ -1231,8 +1231,8 @@ out_of_memory:
     pod_unlock(p2m);

     printk("%s: Dom%d out of PoD memory! (tot=%"PRIu32" ents=%ld dom%d)\n",
-           __func__, d->domain_id, d->tot_pages, p2m->pod.entry_count,
-           current->domain->domain_id);
+           __func__, d->domain_id, domain_tot_pages(d),
+           p2m->pod.entry_count, current->domain->domain_id);
     domain_crash(d);
     return false;
 out_fail:
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 6212ec2c4a..cba3ab1eba 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1256,7 +1256,7 @@ static unsigned int sh_min_allocation(const struct domain *d)
      * up of slot zero and an LAPIC page), plus one for HVM's 1-to-1 pagetable.
      */
     return shadow_min_acceptable_pages(d) +
-           max(max(d->tot_pages / 256,
+           max(max(domain_tot_pages(d) / 256,
                    is_hvm_domain(d) ? CONFIG_PAGING_LEVELS + 2 : 0U) +
                    is_hvm_domain(d),
                d->arch.paging.shadow.p2m_pages);
diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
index df97ce0c72..2fabaaa155 100644
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -991,7 +991,7 @@ static int msix_capability_init(struct pci_dev *dev,
                seg, bus, slot, func, d->domain_id);
         if ( !is_hardware_domain(d) &&
              /* Assume a domain without memory has no mappings yet. */
-             (!is_hardware_domain(currd) || d->tot_pages) )
+             (!is_hardware_domain(currd) || domain_tot_pages(d)) )
             domain_crash(d);
         /* XXX How to deal with existing mappings? */
     }
diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index 7e1f563012..7f0d27c153 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -419,7 +419,7 @@ static void dump_numa(unsigned char key)
     {
         process_pending_softirqs();

-        printk("Domain %u (total: %u):\n", d->domain_id, d->tot_pages);
+        printk("Domain %u (total: %u):\n", d->domain_id, domain_tot_pages(d));

         for_each_online_node ( i )
             page_num_node[i] = 0;
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index 9a97cf4abf..5678da782d 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -110,8 +110,9 @@ static __init void setup_pv_physmap(struct domain *d, unsigned long pgtbl_pfn,

     while ( vphysmap_start < vphysmap_end )
     {
-        if ( d->tot_pages + ((round_pgup(vphysmap_end) - vphysmap_start)
-                             >> PAGE_SHIFT) + 3 > nr_pages )
+        if ( domain_tot_pages(d) +
+             ((round_pgup(vphysmap_end) - vphysmap_start) >> PAGE_SHIFT) +
+             3 > nr_pages )
             panic("Dom0 allocation too small for initial P->M table\n");

         if ( pl1e )
@@ -264,7 +265,7 @@ static struct page_info * __init alloc_chunk(struct domain *d,
         {
             struct page_info *pg2;

-            if ( d->tot_pages + (1 << order) > d->max_pages )
+            if ( domain_tot_pages(d) + (1 << order) > d->max_pages )
                 continue;

             pg2 = alloc_domheap_pages(d, order, MEMF_exact_node | MEMF_no_scrub);
             if ( pg2 > page )
@@ -500,13 +501,13 @@ int __init dom0_construct_pv(struct domain *d,
     if ( page == NULL )
         panic("Not enough RAM for domain 0 allocation\n");
     alloc_spfn = mfn_x(page_to_mfn(page));
-    alloc_epfn = alloc_spfn + d->tot_pages;
+    alloc_epfn = alloc_spfn + domain_tot_pages(d);

     if ( initrd_len )
     {
         initrd_pfn = vinitrd_start ?
                      (vinitrd_start - v_start) >> PAGE_SHIFT :
-                     d->tot_pages;
+                     domain_tot_pages(d);
         initrd_mfn = mfn = initrd->mod_start;
         count = PFN_UP(initrd_len);
         if ( d->arch.physaddr_bitsize &&
@@ -541,9 +542,9 @@ int __init dom0_construct_pv(struct domain *d,
     printk("PHYSICAL MEMORY ARRANGEMENT:\n"
            " Dom0 alloc.:   %"PRIpaddr"->%"PRIpaddr,
            pfn_to_paddr(alloc_spfn), pfn_to_paddr(alloc_epfn));
-    if ( d->tot_pages < nr_pages )
+    if ( domain_tot_pages(d) < nr_pages )
         printk(" (%lu pages to be allocated)",
-               nr_pages - d->tot_pages);
+               nr_pages - domain_tot_pages(d));
     if ( initrd )
     {
         mpt_alloc = (paddr_t)initrd->mod_start << PAGE_SHIFT;
@@ -755,7 +756,7 @@ int __init dom0_construct_pv(struct domain *d,
     snprintf(si->magic, sizeof(si->magic), "xen-3.0-x86_%d%s",
              elf_64bit(&elf) ? 64 : 32, parms.pae ? "p" : "");

-    count = d->tot_pages;
+    count = domain_tot_pages(d);

     /* Set up the phys->machine table if not part of the initial mapping. */
     if ( parms.p2m_base != UNSET_ADDR )
@@ -786,7 +787,7 @@ int __init dom0_construct_pv(struct domain *d,
                 process_pending_softirqs();
         }
         si->first_p2m_pfn = pfn;
-        si->nr_p2m_frames = d->tot_pages - count;
+        si->nr_p2m_frames = domain_tot_pages(d) - count;
         page_list_for_each ( page, &d->page_list )
         {
             mfn = mfn_x(page_to_mfn(page));
@@ -804,15 +805,15 @@ int __init dom0_construct_pv(struct domain *d,
                 process_pending_softirqs();
         }
     }
-    BUG_ON(pfn != d->tot_pages);
+    BUG_ON(pfn != domain_tot_pages(d));
 #ifndef NDEBUG
     alloc_epfn += PFN_UP(initrd_len) + si->nr_p2m_frames;
 #endif
     while ( pfn < nr_pages )
     {
-        if ( (page = alloc_chunk(d, nr_pages - d->tot_pages)) == NULL )
+        if ( (page = alloc_chunk(d, nr_pages - domain_tot_pages(d))) == NULL )
             panic("Not enough RAM for DOM0 reservation\n");
-        while ( pfn < d->tot_pages )
+        while ( pfn < domain_tot_pages(d) )
         {
             mfn = mfn_x(page_to_mfn(page));
 #ifndef NDEBUG
diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
index 4da0b2afff..c95652d1b8 100644
--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -173,7 +173,7 @@ int switch_compat(struct domain *d)

     BUILD_BUG_ON(offsetof(struct shared_info, vcpu_info) != 0);

-    if ( is_hvm_domain(d) || d->tot_pages != 0 )
+    if ( is_hvm_domain(d) || domain_tot_pages(d) != 0 )
         return -EACCES;
     if ( is_pv_32bit_domain(d) )
         return 0;
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 8b819f56e5..bdc24bbd7c 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -191,7 +191,7 @@ void getdomaininfo(struct domain *d, struct xen_domctl_getdomaininfo *info)

     xsm_security_domaininfo(d, info);

-    info->tot_pages         = d->tot_pages;
+    info->tot_pages         = domain_tot_pages(d);
     info->max_pages         = d->max_pages;
     info->outstanding_pages = d->outstanding_pages;
     info->shr_pages         = atomic_read(&d->shr_pages);
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 5536d282b9..8bee6b3b66 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -2261,7 +2261,7 @@ gnttab_transfer(
          * pages when it is dying.
          */
         if ( unlikely(e->is_dying) ||
-             unlikely(e->tot_pages >= e->max_pages) )
+             unlikely(domain_tot_pages(e) >= e->max_pages) )
         {
             spin_unlock(&e->page_alloc_lock);

@@ -2271,7 +2271,7 @@ gnttab_transfer(
             else
                 gdprintk(XENLOG_INFO,
                          "Transferee d%d has no headroom (tot %u, max %u)\n",
-                         e->domain_id, e->tot_pages, e->max_pages);
+                         e->domain_id, domain_tot_pages(e), e->max_pages);

             gop.status = GNTST_general_error;
             goto unlock_and_copyback;
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index f50490d0f3..87bd145374 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -271,7 +271,7 @@ static void dump_domains(unsigned char key)
                atomic_read(&d->pause_count));
         printk("    nr_pages=%d xenheap_pages=%d shared_pages=%u paged_pages=%u "
                "dirty_cpus={%*pbl} max_pages=%u\n",
-               d->tot_pages, d->xenheap_pages, atomic_read(&d->shr_pages),
+               domain_tot_pages(d), d->xenheap_pages, atomic_read(&d->shr_pages),
                atomic_read(&d->paged_pages), CPUMASK_PR(d->dirty_cpumask),
                d->max_pages);
         printk("    handle=%02x%02x%02x%02x-%02x%02x-%02x%02x-"
diff --git a/xen/common/memory.c b/xen/common/memory.c
index c7d2bac452..bf464e8799 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -717,7 +717,7 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)

             /*
              * Pages in in_chunk_list is stolen without
-             * decreasing the tot_pages. If the domain is dying when
+             * decreasing domain_tot_pages(). If the domain is dying when
              * assign pages, we need decrease the count. For those pages
              * that has been assigned, it should be covered by
              * domain_relinquish_resources().
@@ -1267,7 +1267,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         switch ( op )
         {
         case XENMEM_current_reservation:
-            rc = d->tot_pages;
+            rc = domain_tot_pages(d);
             break;
         case XENMEM_maximum_reservation:
             rc = d->max_pages;
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 919a270587..bbd3163909 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -518,8 +518,8 @@ int domain_set_outstanding_pages(struct domain *d, unsigned long pages)
         goto out;
     }

-    /* disallow a claim not exceeding current tot_pages or above max_pages */
-    if ( (pages <= d->tot_pages) || (pages > d->max_pages) )
+    /* disallow a claim not exceeding domain_tot_pages() or above max_pages */
+    if ( (pages <= domain_tot_pages(d)) || (pages > d->max_pages) )
     {
         ret = -EINVAL;
         goto out;
@@ -532,9 +532,9 @@ int domain_set_outstanding_pages(struct domain *d, unsigned long pages)

     /*
      * Note, if domain has already allocated memory before making a claim
-     * then the claim must take tot_pages into account
+     * then the claim must take domain_tot_pages() into account
      */
-    claim = pages - d->tot_pages;
+    claim = pages - domain_tot_pages(d);
     if ( claim > avail_pages )
         goto out;

@@ -2269,11 +2269,12 @@ int assign_pages(

     if ( !(memflags & MEMF_no_refcount) )
     {
-        if ( unlikely((d->tot_pages + (1 << order)) > d->max_pages) )
+        unsigned int tot_pages = domain_tot_pages(d) + (1 << order);
+
+        if ( unlikely(tot_pages > d->max_pages) )
         {
             gprintk(XENLOG_INFO, "Over-allocation for domain %u: "
-                    "%u > %u\n", d->domain_id,
-                    d->tot_pages + (1 << order), d->max_pages);
+                    "%u > %u\n", d->domain_id, tot_pages, d->max_pages);
             rc = -E2BIG;
             goto out;
         }
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index cfdda6e2a8..126d0ff06e 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -553,8 +553,8 @@ DEFINE_XEN_GUEST_HANDLE(xen_mem_sharing_op_t);
  *
  * Note that a valid claim may be staked even after memory has been
  * allocated for a domain. In this case, the claim is not incremental,
- * i.e. if the domain's tot_pages is 3, and a claim is staked for 10,
- * only 7 additional pages are claimed.
+ * i.e. if the domain's total page count is 3, and a claim is staked
+ * for 10, only 7 additional pages are claimed.
  *
  * Caller must be privileged or the hypercall fails.
  */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 7c5c437247..1b6d7b941f 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -364,12 +364,18 @@ struct domain
     spinlock_t       page_alloc_lock; /* protects all the following fields  */
     struct page_list_head page_list;  /* linked list */
     struct page_list_head xenpage_list; /* linked list (size xenheap_pages) */
-    unsigned int     tot_pages;       /* number of pages currently possesed */
-    unsigned int     xenheap_pages;   /* # pages allocated from Xen heap    */
-    unsigned int     outstanding_pages; /* pages claimed but not possessed  */
-    unsigned int     max_pages;       /* maximum value for tot_pages        */
-    atomic_t         shr_pages;       /* number of shared pages             */
-    atomic_t         paged_pages;     /* number of paged-out pages          */
+
+    /*
+     * This field should only be directly accessed by domain_adjust_tot_pages()
+     * and the domain_tot_pages() helper function defined below.
+     */
+    unsigned int     tot_pages;
+
+    unsigned int     xenheap_pages;     /* pages allocated from Xen heap */
+    unsigned int     outstanding_pages; /* pages claimed but not possessed */
+    unsigned int     max_pages;         /* maximum value for domain_tot_pages() */
+    atomic_t         shr_pages;         /* shared pages */
+    atomic_t         paged_pages;       /* paged-out pages */

     /* Scheduling. */
     void            *sched_priv;    /* scheduler-specific data */
@@ -539,6 +545,12 @@ struct domain
 #endif
 };

+/* Return number of pages currently possessed by the domain */
+static inline unsigned int domain_tot_pages(const struct domain *d)
+{
+    return d->tot_pages;
+}
+
 /* Protect updates/reads (resp.) of domain_list and domain_hash. */
 extern spinlock_t domlist_update_lock;
 extern rcu_read_lock_t domlist_read_lock;