From patchwork Tue Jun 21 16:04:50 2016
X-Patchwork-Submitter: Paul Lai
X-Patchwork-Id: 9190957
From: Paul Lai <paul.c.lai@intel.com>
To: xen-devel@lists.xensource.com
Cc: ravi.sahita@intel.com, jbeulich@suse.com
Date: Tue, 21 Jun 2016 09:04:50 -0700
Message-Id: <1466525090-1692-4-git-send-email-paul.c.lai@intel.com>
In-Reply-To: <1466525090-1692-1-git-send-email-paul.c.lai@intel.com>
References: <1466525090-1692-1-git-send-email-paul.c.lai@intel.com>
Subject: [Xen-devel] [PATCH v1 Altp2m cleanup 3/3] Making altp2m struct dynamically allocated.
List-Id: Xen developer discussion <xen-devel.lists.xen.org>

Ravi Sahita's dynamically allocated altp2m structs

Signed-off-by: Paul Lai <paul.c.lai@intel.com>
---
 xen/arch/x86/hvm/hvm.c       |  8 +++---
 xen/arch/x86/hvm/vmx/vmx.c   |  2 +-
 xen/arch/x86/mm/altp2m.c     | 18 +++++++-------
 xen/arch/x86/mm/mm-locks.h   |  4 +--
 xen/arch/x86/mm/p2m-ept.c    |  8 +++---
 xen/arch/x86/mm/p2m.c        | 59 ++++++++++++++++++++++++--------------------
 xen/include/asm-x86/altp2m.h |  7 +++++-
 xen/include/asm-x86/domain.h | 12 ++++++++-
 xen/include/asm-x86/p2m.h    |  2 +-
 9 files changed, 70 insertions(+), 50 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 1595b3e..40270d0 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5228,7 +5228,7 @@ static int do_altp2m_op(
     if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
          (a.cmd != HVMOP_altp2m_set_domain_state) &&
-         !d->arch.altp2m_active )
+         !altp2m_active(d) )
     {
         rc = -EOPNOTSUPP;
         goto out;
@@ -5262,11 +5262,11 @@ static int do_altp2m_op(
             break;
         }

-        ostate = d->arch.altp2m_active;
-        d->arch.altp2m_active = !!a.u.domain_state.state;
+        ostate = altp2m_active(d);
+        set_altp2m_active(d, !!a.u.domain_state.state);

         /* If the alternate p2m state has changed, handle appropriately */
-        if ( d->arch.altp2m_active != ostate &&
+        if ( altp2m_active(d) != ostate &&
              (ostate || !(rc = p2m_init_altp2m_by_id(d, 0))) )
         {
             for_each_vcpu( d, v )
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 670d7dc..b522578 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2017,7 +2017,7 @@ static void vmx_vcpu_update_vmfunc_ve(struct vcpu *v)
     {
         v->arch.hvm_vmx.secondary_exec_control |= mask;
         __vmwrite(VM_FUNCTION_CONTROL, VMX_VMFUNC_EPTP_SWITCHING);
-        __vmwrite(EPTP_LIST_ADDR, virt_to_maddr(d->arch.altp2m_eptp));
+        __vmwrite(EPTP_LIST_ADDR, virt_to_maddr(d->arch.altp2m->altp2m_eptp));

         if ( cpu_has_vmx_virt_exceptions )
         {
diff --git a/xen/arch/x86/mm/altp2m.c b/xen/arch/x86/mm/altp2m.c
index 1caf6b4..77187c9 100644
--- a/xen/arch/x86/mm/altp2m.c
+++ b/xen/arch/x86/mm/altp2m.c
@@ -72,23 +72,23 @@ hvm_altp2m_init( struct domain *d) {
     unsigned int i = 0;

     /* Init alternate p2m data */
-    if ( (d->arch.altp2m_eptp = alloc_xenheap_page()) == NULL )
+    if ( (d->arch.altp2m->altp2m_eptp = alloc_xenheap_page()) == NULL )
     {
         rv = -ENOMEM;
         goto out;
     }

     for ( i = 0; i < MAX_EPTP; i++ )
-        d->arch.altp2m_eptp[i] = INVALID_MFN;
+        d->arch.altp2m->altp2m_eptp[i] = INVALID_MFN;

     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        rv = p2m_alloc_table(d->arch.altp2m_p2m[i]);
+        rv = p2m_alloc_table(d->arch.altp2m->altp2m_p2m[i]);
         if ( rv != 0 )
             goto out;
     }

-    d->arch.altp2m_active = 0;
+    set_altp2m_active(d, 0);
  out:
     return rv;
 }
@@ -96,16 +96,16 @@ hvm_altp2m_init( struct domain *d) {
 void
 hvm_altp2m_teardown( struct domain *d) {
     unsigned int i = 0;
-    d->arch.altp2m_active = 0;
+    set_altp2m_active(d, 0);

-    if ( d->arch.altp2m_eptp )
+    if ( d->arch.altp2m->altp2m_eptp )
     {
-        free_xenheap_page(d->arch.altp2m_eptp);
-        d->arch.altp2m_eptp = NULL;
+        free_xenheap_page(d->arch.altp2m->altp2m_eptp);
+        d->arch.altp2m->altp2m_eptp = NULL;
     }

     for ( i = 0; i < MAX_ALTP2M; i++ )
-        p2m_teardown(d->arch.altp2m_p2m[i]);
+        p2m_teardown(d->arch.altp2m->altp2m_p2m[i]);
 }

 /*
diff --git a/xen/arch/x86/mm/mm-locks.h b/xen/arch/x86/mm/mm-locks.h
index 086c8bb..4d17b0a 100644
--- a/xen/arch/x86/mm/mm-locks.h
+++ b/xen/arch/x86/mm/mm-locks.h
@@ -251,8 +251,8 @@ declare_mm_rwlock(p2m);
  */

 declare_mm_lock(altp2mlist)
-#define altp2m_list_lock(d)   mm_lock(altp2mlist, &(d)->arch.altp2m_list_lock)
-#define altp2m_list_unlock(d) mm_unlock(&(d)->arch.altp2m_list_lock)
+#define altp2m_list_lock(d)   mm_lock(altp2mlist, &(d)->arch.altp2m->altp2m_list_lock)
+#define altp2m_list_unlock(d) mm_unlock(&(d)->arch.altp2m->altp2m_list_lock)

 /* P2M lock (per-altp2m-table)
  *
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index dff34b1..754b660 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -1330,14 +1330,14 @@ void setup_ept_dump(void)
 }

 void p2m_init_altp2m_helper( struct domain *d, unsigned int i)
 {
-    struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
+    struct p2m_domain *p2m = d->arch.altp2m->altp2m_p2m[i];
     struct ept_data *ept;

     p2m->min_remapped_gfn = INVALID_GFN;
     p2m->max_remapped_gfn = 0;
     ept = &p2m->ept;
     ept->asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
-    d->arch.altp2m_eptp[i] = ept_get_eptp(ept);
+    d->arch.altp2m->altp2m_eptp[i] = ept_get_eptp(ept);
 }

 unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp)
@@ -1350,10 +1350,10 @@ unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp)

     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        if ( d->arch.altp2m_eptp[i] == INVALID_MFN )
+        if ( d->arch.altp2m->altp2m_eptp[i] == INVALID_MFN )
             continue;

-        p2m = d->arch.altp2m_p2m[i];
+        p2m = d->arch.altp2m->altp2m_p2m[i];
         ept = &p2m->ept;

         if ( eptp == ept_get_eptp(ept) )
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 90f2d95..70a8b15 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -193,12 +193,15 @@ static void p2m_teardown_altp2m(struct domain *d)

     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        if ( !d->arch.altp2m_p2m[i] )
+        if ( !d->arch.altp2m->altp2m_p2m[i] )
             continue;
-        p2m = d->arch.altp2m_p2m[i];
+        p2m = d->arch.altp2m->altp2m_p2m[i];
         p2m_free_one(p2m);
-        d->arch.altp2m_p2m[i] = NULL;
+        d->arch.altp2m->altp2m_p2m[i] = NULL;
     }
+
+    if (d->arch.altp2m)
+        xfree(d->arch.altp2m);
 }

 static int p2m_init_altp2m(struct domain *d)
@@ -206,10 +209,12 @@ static int p2m_init_altp2m(struct domain *d)
     unsigned int i;
     struct p2m_domain *p2m;

-    mm_lock_init(&d->arch.altp2m_list_lock);
+    d->arch.altp2m = xzalloc(struct altp2m_domain);
+
+    mm_lock_init(&d->arch.altp2m->altp2m_list_lock);
     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        d->arch.altp2m_p2m[i] = p2m = p2m_init_one(d);
+        d->arch.altp2m->altp2m_p2m[i] = p2m = p2m_init_one(d);
         if ( p2m == NULL )
         {
             p2m_teardown_altp2m(d);
@@ -1838,10 +1843,10 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
     if ( altp2m_idx )
     {
         if ( altp2m_idx >= MAX_ALTP2M ||
-             d->arch.altp2m_eptp[altp2m_idx] == INVALID_MFN )
+             d->arch.altp2m->altp2m_eptp[altp2m_idx] == INVALID_MFN )
             return -EINVAL;

-        ap2m = d->arch.altp2m_p2m[altp2m_idx];
+        ap2m = d->arch.altp2m->altp2m_p2m[altp2m_idx];
     }

     switch ( access )
@@ -2280,7 +2285,7 @@ bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)

     altp2m_list_lock(d);

-    if ( d->arch.altp2m_eptp[idx] != INVALID_MFN )
+    if ( d->arch.altp2m->altp2m_eptp[idx] != INVALID_MFN )
     {
         if ( idx != vcpu_altp2m(v).p2midx )
         {
@@ -2365,11 +2370,11 @@ void p2m_flush_altp2m(struct domain *d)

     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        p2m_flush_table(d->arch.altp2m_p2m[i]);
+        p2m_flush_table(d->arch.altp2m->altp2m_p2m[i]);
         /* Uninit and reinit ept to force TLB shootdown */
-        ept_p2m_uninit(d->arch.altp2m_p2m[i]);
-        ept_p2m_init(d->arch.altp2m_p2m[i]);
-        d->arch.altp2m_eptp[i] = INVALID_MFN;
+        ept_p2m_uninit(d->arch.altp2m->altp2m_p2m[i]);
+        ept_p2m_init(d->arch.altp2m->altp2m_p2m[i]);
+        d->arch.altp2m->altp2m_eptp[i] = INVALID_MFN;
     }

     altp2m_list_unlock(d);
@@ -2384,7 +2389,7 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)

     altp2m_list_lock(d);

-    if ( d->arch.altp2m_eptp[idx] == INVALID_MFN )
+    if ( d->arch.altp2m->altp2m_eptp[idx] == INVALID_MFN )
     {
         p2m_init_altp2m_helper(d, idx);
         rc = 0;
@@ -2403,7 +2408,7 @@ int p2m_init_next_altp2m(struct domain *d, uint16_t *idx)

     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        if ( d->arch.altp2m_eptp[i] != INVALID_MFN )
+        if ( d->arch.altp2m->altp2m_eptp[i] != INVALID_MFN )
             continue;

         p2m_init_altp2m_helper(d, i);
@@ -2429,17 +2434,17 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)

     altp2m_list_lock(d);

-    if ( d->arch.altp2m_eptp[idx] != INVALID_MFN )
+    if ( d->arch.altp2m->altp2m_eptp[idx] != INVALID_MFN )
     {
-        p2m = d->arch.altp2m_p2m[idx];
+        p2m = d->arch.altp2m->altp2m_p2m[idx];

         if ( !_atomic_read(p2m->active_vcpus) )
         {
-            p2m_flush_table(d->arch.altp2m_p2m[idx]);
+            p2m_flush_table(d->arch.altp2m->altp2m_p2m[idx]);
             /* Uninit and reinit ept to force TLB shootdown */
-            ept_p2m_uninit(d->arch.altp2m_p2m[idx]);
-            ept_p2m_init(d->arch.altp2m_p2m[idx]);
-            d->arch.altp2m_eptp[idx] = INVALID_MFN;
+            ept_p2m_uninit(d->arch.altp2m->altp2m_p2m[idx]);
+            ept_p2m_init(d->arch.altp2m->altp2m_p2m[idx]);
+            d->arch.altp2m->altp2m_eptp[idx] = INVALID_MFN;
             rc = 0;
         }
     }
@@ -2463,7 +2468,7 @@ int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)

     altp2m_list_lock(d);

-    if ( d->arch.altp2m_eptp[idx] != INVALID_MFN )
+    if ( d->arch.altp2m->altp2m_eptp[idx] != INVALID_MFN )
     {
         for_each_vcpu( d, v )
             if ( idx != vcpu_altp2m(v).p2midx )
@@ -2494,11 +2499,11 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
     unsigned int page_order;
     int rc = -EINVAL;

-    if ( idx >= MAX_ALTP2M || d->arch.altp2m_eptp[idx] == INVALID_MFN )
+    if ( idx >= MAX_ALTP2M || d->arch.altp2m->altp2m_eptp[idx] == INVALID_MFN )
         return rc;

     hp2m = p2m_get_hostp2m(d);
-    ap2m = d->arch.altp2m_p2m[idx];
+    ap2m = d->arch.altp2m->altp2m_p2m[idx];

     p2m_lock(ap2m);

@@ -2589,10 +2594,10 @@ void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,

     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        if ( d->arch.altp2m_eptp[i] == INVALID_MFN )
+        if ( d->arch.altp2m->altp2m_eptp[i] == INVALID_MFN )
             continue;

-        p2m = d->arch.altp2m_p2m[i];
+        p2m = d->arch.altp2m->altp2m_p2m[i];
         m = get_gfn_type_access(p2m, gfn_x(gfn), &t, &a, 0, NULL);

         /* Check for a dropped page that may impact this altp2m */
@@ -2613,10 +2618,10 @@ void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
             for ( i = 0; i < MAX_ALTP2M; i++ )
             {
                 if ( i == last_reset_idx ||
-                     d->arch.altp2m_eptp[i] == INVALID_MFN )
+                     d->arch.altp2m->altp2m_eptp[i] == INVALID_MFN )
                     continue;

-                p2m = d->arch.altp2m_p2m[i];
+                p2m = d->arch.altp2m->altp2m_p2m[i];
                 p2m_lock(p2m);
                 p2m_reset_altp2m(p2m);
                 p2m_unlock(p2m);
diff --git a/xen/include/asm-x86/altp2m.h b/xen/include/asm-x86/altp2m.h
index 7ce047d..eca0ec7 100644
--- a/xen/include/asm-x86/altp2m.h
+++ b/xen/include/asm-x86/altp2m.h
@@ -24,7 +24,12 @@
 /* Alternate p2m HVM on/off per domain */
 static inline bool_t altp2m_active(const struct domain *d)
 {
-    return d->arch.altp2m_active;
+    return d->arch.altp2m->altp2m_active;
+}
+
+static inline void set_altp2m_active(const struct domain *d, bool_t v)
+{
+    d->arch.altp2m->altp2m_active = v;
 }

 /* Alternate p2m VCPU */
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 783fa4f..614dd40 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -274,6 +274,13 @@ struct monitor_write_data {
     uint64_t cr4;
 };

+struct altp2m_domain {
+    bool_t altp2m_active;
+    struct p2m_domain *altp2m_p2m[MAX_ALTP2M];
+    mm_lock_t altp2m_list_lock;
+    uint64_t *altp2m_eptp;
+};
+
 struct arch_domain
 {
     struct page_info *perdomain_l3_pg;
@@ -320,10 +327,13 @@ struct arch_domain
     mm_lock_t nested_p2m_lock;

     /* altp2m: allow multiple copies of host p2m */
+    /*
     bool_t altp2m_active;
     struct p2m_domain *altp2m_p2m[MAX_ALTP2M];
     mm_lock_t altp2m_list_lock;
-    uint64_t *altp2m_eptp;
+    uint64_t *altp2m_eptp;
+    */
+    struct altp2m_domain *altp2m;

     /* NB. protected by d->event_lock and by irq_desc[irq].lock */
     struct radix_tree_root irq_pirq;
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index d7c8c12..3185ec7 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -776,7 +776,7 @@ static inline struct p2m_domain *p2m_get_altp2m(struct vcpu *v)

     BUG_ON(index >= MAX_ALTP2M);

-    return v->domain->arch.altp2m_p2m[index];
+    return v->domain->arch.altp2m->altp2m_p2m[index];
 }

 /* Switch alternate p2m for a single vcpu */