From patchwork Wed Nov 30 16:49:40 2016
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 9454605
From: Roger Pau Monne
Date: Wed, 30 Nov 2016 16:49:40 +0000
Message-ID: <20161130164950.43543-5-roger.pau@citrix.com>
X-Mailer: git-send-email 2.9.3 (Apple Git-75)
In-Reply-To: <20161130164950.43543-1-roger.pau@citrix.com>
References: <20161130164950.43543-1-roger.pau@citrix.com>
Cc: George Dunlap, Andrew Cooper, Tim Deegan, Jan Beulich, Roger Pau Monne
Subject: [Xen-devel] [PATCH v4 04/14] x86/paging: introduce paging_set_allocation

... and remove hap_set_alloc_for_pvh_dom0.

While there, also change the last parameter of the {hap,shadow}_set_allocation
functions to be a boolean.

Signed-off-by: Roger Pau Monné
Acked-by: Tim Deegan
Acked-by: George Dunlap
Reviewed-by: Jan Beulich
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Tim Deegan
---
Changes since v3:
 - Rename sh_set_allocation to shadow_set_allocation (public shadow
   functions use the shadow prefix instead of sh).

Changes since v2:
 - Convert the preempt parameter into a bool.
 - Fix Dom0 builder comment to reflect that paging.mode should be
   correct before calling paging_set_allocation.

Changes since RFC:
 - Make paging_set_allocation preemptible.
 - Move comments.
---
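[Editor's note: the sketch below is illustrative only and not part of the
patch. It shows the calling convention the new preemptible interface
expects, mirroring the PVH Dom0 builder hunk further down; the wrapper
name set_paging_pool is hypothetical.]

/*
 * Drive the preemptible paging_set_allocation() to completion: the call
 * returns early with *preempted set to true whenever Xen's preemption
 * check fires, so the caller clears the flag, retries, and processes
 * pending softirqs between attempts.
 */
static void __init set_paging_pool(struct domain *d, unsigned int pages)
{
    bool preempted;

    do {
        preempted = false;
        paging_set_allocation(d, pages, &preempted);
        process_pending_softirqs();
    } while ( preempted );
}
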
 xen/arch/x86/domain_build.c     | 21 +++++++++++++++------
 xen/arch/x86/mm/hap/hap.c       | 22 +++++-----------------
 xen/arch/x86/mm/paging.c        | 19 ++++++++++++++++++-
 xen/arch/x86/mm/shadow/common.c | 31 +++++++++++++------------------
 xen/include/asm-x86/hap.h       |  4 ++--
 xen/include/asm-x86/paging.h    |  7 +++++++
 xen/include/asm-x86/shadow.h    | 11 ++++++++++-
 7 files changed, 70 insertions(+), 45 deletions(-)

diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
index 0a02d65..17f8e91 100644
--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -35,7 +35,6 @@
 #include
 #include /* for bzimage_parse */
 #include
-#include
 #include
 #include
 
@@ -1383,15 +1382,25 @@ int __init construct_dom0(
                         nr_pages);
     }
 
-    if ( is_pvh_domain(d) )
-        hap_set_alloc_for_pvh_dom0(d, dom0_paging_pages(d, nr_pages));
-
     /*
-     * We enable paging mode again so guest_physmap_add_page will do the
-     * right thing for us.
+     * We enable paging mode again so guest_physmap_add_page and
+     * paging_set_allocation will do the right thing for us.
      */
     d->arch.paging.mode = save_pvh_pg_mode;
 
+    if ( is_pvh_domain(d) )
+    {
+        bool preempted;
+
+        do {
+            preempted = false;
+            paging_set_allocation(d, dom0_paging_pages(d, nr_pages),
+                                  &preempted);
+            process_pending_softirqs();
+        } while ( preempted );
+    }
+
     /* Write the phys->machine and machine->phys table entries. */
     for ( pfn = 0; pfn < count; pfn++ )
     {
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index b9faba6..e6dc088 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -334,8 +334,7 @@ hap_get_allocation(struct domain *d)
 
 /* Set the pool of pages to the required number of pages.
  * Returns 0 for success, non-zero for failure. */
-static int
-hap_set_allocation(struct domain *d, unsigned int pages, int *preempted)
+int hap_set_allocation(struct domain *d, unsigned int pages, bool *preempted)
 {
     struct page_info *pg;
 
@@ -381,7 +380,7 @@ hap_set_allocation(struct domain *d, unsigned int pages, int *preempted)
         /* Check to see if we need to yield and try again */
         if ( preempted && general_preempt_check() )
         {
-            *preempted = 1;
+            *preempted = true;
             return 0;
         }
     }
@@ -561,7 +560,7 @@ void hap_final_teardown(struct domain *d)
     paging_unlock(d);
 }
 
-void hap_teardown(struct domain *d, int *preempted)
+void hap_teardown(struct domain *d, bool *preempted)
 {
     struct vcpu *v;
     mfn_t mfn;
@@ -609,7 +608,8 @@ out:
 int hap_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
                XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
-    int rc, preempted = 0;
+    int rc;
+    bool preempted = false;
 
     switch ( sc->op )
     {
@@ -636,18 +636,6 @@ int hap_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
     }
 }
 
-void __init hap_set_alloc_for_pvh_dom0(struct domain *d,
-                                       unsigned long hap_pages)
-{
-    int rc;
-
-    paging_lock(d);
-    rc = hap_set_allocation(d, hap_pages, NULL);
-    paging_unlock(d);
-
-    BUG_ON(rc);
-}
-
 static const struct paging_mode hap_paging_real_mode;
 static const struct paging_mode hap_paging_protected_mode;
 static const struct paging_mode hap_paging_pae_mode;
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index cc44682..853a035 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -809,7 +809,8 @@ long paging_domctl_continuation(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 /* Call when destroying a domain */
 int paging_teardown(struct domain *d)
 {
-    int rc, preempted = 0;
+    int rc;
+    bool preempted = false;
 
     if ( hap_enabled(d) )
         hap_teardown(d, &preempted);
@@ -954,6 +955,22 @@ void paging_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
         safe_write_pte(p, new);
 }
 
+int paging_set_allocation(struct domain *d, unsigned int pages, bool *preempted)
+{
+    int rc;
+
+    ASSERT(paging_mode_enabled(d));
+
+    paging_lock(d);
+    if ( hap_enabled(d) )
+        rc = hap_set_allocation(d, pages, preempted);
+    else
+        rc = shadow_set_allocation(d, pages, preempted);
+    paging_unlock(d);
+
+    return rc;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index ddbdb73..9f3bed9 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1611,13 +1611,7 @@ shadow_free_p2m_page(struct domain *d, struct page_info *pg)
     paging_unlock(d);
 }
 
-/* Set the pool of shadow pages to the required number of pages.
- * Input will be rounded up to at least shadow_min_acceptable_pages(),
- * plus space for the p2m table.
- * Returns 0 for success, non-zero for failure. */
-static int sh_set_allocation(struct domain *d,
-                             unsigned int pages,
-                             int *preempted)
+int shadow_set_allocation(struct domain *d, unsigned int pages, bool *preempted)
 {
     struct page_info *sp;
     unsigned int lower_bound;
@@ -1683,7 +1677,7 @@ static int sh_set_allocation(struct domain *d,
         /* Check to see if we need to yield and try again */
         if ( preempted && general_preempt_check() )
         {
-            *preempted = 1;
+            *preempted = true;
             return 0;
         }
     }
@@ -3154,10 +3148,10 @@ int shadow_enable(struct domain *d, u32 mode)
     if ( old_pages == 0 )
     {
         paging_lock(d);
-        rv = sh_set_allocation(d, 1024, NULL); /* Use at least 4MB */
+        rv = shadow_set_allocation(d, 1024, NULL); /* Use at least 4MB */
         if ( rv != 0 )
         {
-            sh_set_allocation(d, 0, NULL);
+            shadow_set_allocation(d, 0, NULL);
             goto out_locked;
         }
         paging_unlock(d);
@@ -3239,7 +3233,7 @@ int shadow_enable(struct domain *d, u32 mode)
     return rv;
 }
 
-void shadow_teardown(struct domain *d, int *preempted)
+void shadow_teardown(struct domain *d, bool *preempted)
 /* Destroy the shadow pagetables of this domain and free its shadow memory.
  * Should only be called for dying domains. */
 {
@@ -3301,7 +3295,7 @@ void shadow_teardown(struct domain *d, int *preempted)
     if ( d->arch.paging.shadow.total_pages != 0 )
     {
         /* Destroy all the shadows and release memory to domheap */
-        sh_set_allocation(d, 0, preempted);
+        shadow_set_allocation(d, 0, preempted);
 
         if ( preempted && *preempted )
             goto out;
@@ -3366,7 +3360,7 @@ void shadow_final_teardown(struct domain *d)
     p2m_teardown(p2m_get_hostp2m(d));
     /* Free any shadow memory that the p2m teardown released */
     paging_lock(d);
-    sh_set_allocation(d, 0, NULL);
+    shadow_set_allocation(d, 0, NULL);
     SHADOW_PRINTK("dom %u final teardown done."
                   "  Shadow pages total = %u, free = %u, p2m=%u\n",
                   d->domain_id,
@@ -3392,9 +3386,9 @@ static int shadow_one_bit_enable(struct domain *d, u32 mode)
     if ( d->arch.paging.shadow.total_pages == 0 )
     {
         /* Init the shadow memory allocation if the user hasn't done so */
-        if ( sh_set_allocation(d, 1, NULL) != 0 )
+        if ( shadow_set_allocation(d, 1, NULL) != 0 )
         {
-            sh_set_allocation(d, 0, NULL);
+            shadow_set_allocation(d, 0, NULL);
             return -ENOMEM;
         }
     }
@@ -3463,7 +3457,7 @@ static int shadow_one_bit_disable(struct domain *d, u32 mode)
         }
 
         /* Pull down the memory allocation */
-        if ( sh_set_allocation(d, 0, NULL) != 0 )
+        if ( shadow_set_allocation(d, 0, NULL) != 0 )
             BUG(); /* In fact, we will have BUG()ed already */
         shadow_hash_teardown(d);
         SHADOW_PRINTK("un-shadowing of domain %u done."
@@ -3876,7 +3870,8 @@ int shadow_domctl(struct domain *d,
                   xen_domctl_shadow_op_t *sc,
                   XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
-    int rc, preempted = 0;
+    int rc;
+    bool preempted = false;
 
     switch ( sc->op )
     {
@@ -3907,7 +3902,7 @@ int shadow_domctl(struct domain *d,
             paging_unlock(d);
             return -EINVAL;
         }
-        rc = sh_set_allocation(d, sc->mb << (20 - PAGE_SHIFT), &preempted);
+        rc = shadow_set_allocation(d, sc->mb << (20 - PAGE_SHIFT), &preempted);
         paging_unlock(d);
         if ( preempted )
            /* Not finished.  Set up to re-run the call. */
diff --git a/xen/include/asm-x86/hap.h b/xen/include/asm-x86/hap.h
index c613836..dedb4b1 100644
--- a/xen/include/asm-x86/hap.h
+++ b/xen/include/asm-x86/hap.h
@@ -38,7 +38,7 @@ int hap_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
               XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 int hap_enable(struct domain *d, u32 mode);
 void hap_final_teardown(struct domain *d);
-void hap_teardown(struct domain *d, int *preempted);
+void hap_teardown(struct domain *d, bool *preempted);
 void hap_vcpu_init(struct vcpu *v);
 int hap_track_dirty_vram(struct domain *d,
                          unsigned long begin_pfn,
@@ -46,7 +46,7 @@ int hap_track_dirty_vram(struct domain *d,
                          XEN_GUEST_HANDLE_64(uint8) dirty_bitmap);
 
 extern const struct paging_mode *hap_paging_get_mode(struct vcpu *);
-void hap_set_alloc_for_pvh_dom0(struct domain *d, unsigned long num_pages);
+int hap_set_allocation(struct domain *d, unsigned int pages, bool *preempted);
 
 #endif /* XEN_HAP_H */
diff --git a/xen/include/asm-x86/paging.h b/xen/include/asm-x86/paging.h
index 56eef6b..f83ed8b 100644
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -347,6 +347,13 @@ void pagetable_dying(struct domain *d, paddr_t gpa);
 void paging_dump_domain_info(struct domain *d);
 void paging_dump_vcpu_info(struct vcpu *v);
 
+/* Set the pool of shadow pages to the required number of pages.
+ * Input might be rounded up to a minimum number of pages, plus
+ * space for the p2m table.
+ * Returns 0 for success, non-zero for failure. */
+int paging_set_allocation(struct domain *d, unsigned int pages,
+                          bool *preempted);
+
 #endif /* XEN_PAGING_H */
 
 /*
diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index 6d0aefb..bac952f 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -73,7 +73,7 @@ int shadow_domctl(struct domain *d,
                   XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 /* Call when destroying a domain */
-void shadow_teardown(struct domain *d, int *preempted);
+void shadow_teardown(struct domain *d, bool *preempted);
 
 /* Call once all of the references to the domain have gone away */
 void shadow_final_teardown(struct domain *d);
@@ -83,6 +83,13 @@ void sh_remove_shadows(struct domain *d, mfn_t gmfn, int fast, int all);
 /* Discard _all_ mappings from the domain's shadows. */
 void shadow_blow_tables_per_domain(struct domain *d);
 
+/* Set the pool of shadow pages to the required number of pages.
+ * Input will be rounded up to at least shadow_min_acceptable_pages(),
+ * plus space for the p2m table.
+ * Returns 0 for success, non-zero for failure. */
+int shadow_set_allocation(struct domain *d, unsigned int pages,
+                          bool *preempted);
+
 #else /* !CONFIG_SHADOW_PAGING */
 
 #define shadow_teardown(d, p) ASSERT(is_pv_domain(d))
@@ -91,6 +98,8 @@ void shadow_blow_tables_per_domain(struct domain *d);
     ({ ASSERT(is_pv_domain(d)); -EOPNOTSUPP; })
 #define shadow_track_dirty_vram(d, begin_pfn, nr, bitmap) \
     ({ ASSERT_UNREACHABLE(); -EOPNOTSUPP; })
+#define shadow_set_allocation(d, pages, preempted) \
+    ({ ASSERT_UNREACHABLE(); -EOPNOTSUPP; })
 
 static inline void sh_remove_shadows(struct domain *d, mfn_t gmfn,
                                      bool_t fast, bool_t all) {}
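
[Editor's note: a second illustrative sketch, not from the patch: the
preemption contract that hap_set_allocation() and shadow_set_allocation()
share, reduced to its shape. current_pool_pages(), pool_grow_one_page()
and pool_shrink_one_page() are hypothetical stand-ins for the real
page-list manipulation; general_preempt_check() is the real Xen helper
both functions use, as visible in the hunks above.]

static int set_allocation(struct domain *d, unsigned int pages,
                          bool *preempted)
{
    while ( current_pool_pages(d) != pages )
    {
        /* Grow or shrink the pool by one page per iteration. */
        int rc = current_pool_pages(d) < pages ? pool_grow_one_page(d)
                                               : pool_shrink_one_page(d);

        if ( rc )
            return rc;             /* e.g. -ENOMEM on a failed allocation */

        /* Check to see if we need to yield and try again. */
        if ( preempted && general_preempt_check() )
        {
            /* Not an error: a NULL pointer means "never preempt". */
            *preempted = true;
            return 0;
        }
    }

    return 0;
}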