From patchwork Sun Jul 30 15:43:34 2017
X-Patchwork-Submitter: Wei Liu <wei.liu2@citrix.com>
X-Patchwork-Id: 9870557
From: Wei Liu <wei.liu2@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Date: Sun, 30 Jul 2017 16:43:34 +0100
Message-ID: <20170730154335.24313-11-wei.liu2@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170730154335.24313-1-wei.liu2@citrix.com>
References: <20170720160426.2343-1-wei.liu2@citrix.com>
    <20170730154335.24313-1-wei.liu2@citrix.com>
Cc: George Dunlap, Andrew Cooper, Wei Liu <wei.liu2@citrix.com>, Jan Beulich
Subject: [Xen-devel] [PATCH v3 extra 10/11] x86/mm: move {get, put}_page_from_l{2, 3, 4}e

They are only used by PV code.

Fix coding style issues while moving. Move declarations to PV specific
header file.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c           | 253 --------------------------------------------
 xen/arch/x86/pv/mm.c        | 246 ++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/mm.h    |  10 --
 xen/include/asm-x86/pv/mm.h |  29 +++++
 4 files changed, 275 insertions(+), 263 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 40fb761d08..ade3ed2c48 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -511,72 +511,6 @@ int get_page_and_type_from_mfn(mfn_t mfn, unsigned long type, struct domain *d,
     return rc;
 }
 
-static void put_data_page(
-    struct page_info *page, int writeable)
-{
-    if ( writeable )
-        put_page_and_type(page);
-    else
-        put_page(page);
-}
-
-/*
- * We allow root tables to map each other (a.k.a. linear page tables). It
- * needs some special care with reference counts and access permissions:
- *  1. The mapping entry must be read-only, or the guest may get write access
- *     to its own PTEs.
- *  2. We must only bump the reference counts for an *already validated*
- *     L2 table, or we can end up in a deadlock in get_page_type() by waiting
- *     on a validation that is required to complete that validation.
- *  3. We only need to increment the reference counts for the mapped page
- *     frame if it is mapped by a different root table. This is sufficient and
- *     also necessary to allow validation of a root table mapping itself.
- */
-#define define_get_linear_pagetable(level)                                  \
-static int                                                                  \
-get_##level##_linear_pagetable(                                             \
-    level##_pgentry_t pde, unsigned long pde_pfn, struct domain *d)         \
-{                                                                           \
-    unsigned long x, y;                                                     \
-    struct page_info *page;                                                 \
-    unsigned long pfn;                                                      \
-                                                                            \
-    if ( (level##e_get_flags(pde) & _PAGE_RW) )                             \
-    {                                                                       \
-        gdprintk(XENLOG_WARNING,                                            \
-                 "Attempt to create linear p.t. with write perms\n");       \
-        return 0;                                                           \
-    }                                                                       \
-                                                                            \
-    if ( (pfn = level##e_get_pfn(pde)) != pde_pfn )                         \
-    {                                                                       \
-        /* Make sure the mapped frame belongs to the correct domain. */     \
-        if ( unlikely(!get_page_from_mfn(_mfn(pfn), d)) )                   \
-            return 0;                                                       \
-                                                                            \
-        /*                                                                  \
-         * Ensure that the mapped frame is an already-validated page table. \
-         * If so, atomically increment the count (checking for overflow).   \
-         */                                                                 \
-        page = mfn_to_page(pfn);                                            \
-        y = page->u.inuse.type_info;                                        \
-        do {                                                                \
-            x = y;                                                          \
-            if ( unlikely((x & PGT_count_mask) == PGT_count_mask) ||        \
-                 unlikely((x & (PGT_type_mask|PGT_validated)) !=            \
-                          (PGT_##level##_page_table|PGT_validated)) )       \
-            {                                                               \
-                put_page(page);                                             \
-                return 0;                                                   \
-            }                                                               \
-        }                                                                   \
-        while ( (y = cmpxchg(&page->u.inuse.type_info, x, x + 1)) != x );   \
-    }                                                                       \
-                                                                            \
-    return 1;                                                               \
-}
-
-
 bool is_iomem_page(mfn_t mfn)
 {
     struct page_info *page;
@@ -866,108 +800,6 @@ get_page_from_l1e(
 }
 
 
-/* NB. Virtual address 'l2e' maps to a machine address within frame 'pfn'. */
-/*
- * get_page_from_l2e returns:
- *   1 => page not present
- *   0 => success
- *  <0 => error code
- */
-define_get_linear_pagetable(l2);
-int
-get_page_from_l2e(
-    l2_pgentry_t l2e, unsigned long pfn, struct domain *d)
-{
-    unsigned long mfn = l2e_get_pfn(l2e);
-    int rc;
-
-    if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
-        return 1;
-
-    if ( unlikely((l2e_get_flags(l2e) & L2_DISALLOW_MASK)) )
-    {
-        gdprintk(XENLOG_WARNING, "Bad L2 flags %x\n",
-                 l2e_get_flags(l2e) & L2_DISALLOW_MASK);
-        return -EINVAL;
-    }
-
-    if ( !(l2e_get_flags(l2e) & _PAGE_PSE) )
-    {
-        rc = get_page_and_type_from_mfn(_mfn(mfn), PGT_l1_page_table, d, 0,
-                                        false);
-        if ( unlikely(rc == -EINVAL) && get_l2_linear_pagetable(l2e, pfn, d) )
-            rc = 0;
-        return rc;
-    }
-
-    return -EINVAL;
-}
-
-
-/*
- * get_page_from_l3e returns:
- *   1 => page not present
- *   0 => success
- *  <0 => error code
- */
-define_get_linear_pagetable(l3);
-int
-get_page_from_l3e(
-    l3_pgentry_t l3e, unsigned long pfn, struct domain *d, int partial)
-{
-    int rc;
-
-    if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
-        return 1;
-
-    if ( unlikely((l3e_get_flags(l3e) & l3_disallow_mask(d))) )
-    {
-        gdprintk(XENLOG_WARNING, "Bad L3 flags %x\n",
-                 l3e_get_flags(l3e) & l3_disallow_mask(d));
-        return -EINVAL;
-    }
-
-    rc = get_page_and_type_from_mfn(_mfn(l3e_get_pfn(l3e)), PGT_l2_page_table,
-                                    d, partial, true);
-    if ( unlikely(rc == -EINVAL) &&
-         !is_pv_32bit_domain(d) &&
-         get_l3_linear_pagetable(l3e, pfn, d) )
-        rc = 0;
-
-    return rc;
-}
-
-/*
- * get_page_from_l4e returns:
- *   1 => page not present
- *   0 => success
- *  <0 => error code
- */
-define_get_linear_pagetable(l4);
-int
-get_page_from_l4e(
-    l4_pgentry_t l4e, unsigned long pfn, struct domain *d, int partial)
-{
-    int rc;
-
-    if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
-        return 1;
-
-    if ( unlikely((l4e_get_flags(l4e) & L4_DISALLOW_MASK)) )
-    {
-        gdprintk(XENLOG_WARNING, "Bad L4 flags %x\n",
-                 l4e_get_flags(l4e) & L4_DISALLOW_MASK);
-        return -EINVAL;
-    }
-
-    rc = get_page_and_type_from_mfn(_mfn(l4e_get_pfn(l4e)), PGT_l3_page_table,
-                                    d, partial, true);
-    if ( unlikely(rc == -EINVAL) && get_l4_linear_pagetable(l4e, pfn, d) )
-        rc = 0;
-
-    return rc;
-}
-
 void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner)
 {
     unsigned long pfn = l1e_get_pfn(l1e);
@@ -1028,91 +860,6 @@ void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner)
 }
 
-
-/*
- * NB. Virtual address 'l2e' maps to a machine address within frame 'pfn'.
- * Note also that this automatically deals correctly with linear p.t.'s.
- */
-int put_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn)
-{
-    if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) || (l2e_get_pfn(l2e) == pfn) )
-        return 1;
-
-    if ( l2e_get_flags(l2e) & _PAGE_PSE )
-    {
-        struct page_info *page = mfn_to_page(l2e_get_pfn(l2e));
-        unsigned int i;
-
-        for ( i = 0; i < (1u << PAGETABLE_ORDER); i++, page++ )
-            put_page_and_type(page);
-    } else
-        put_page_and_type(l2e_get_page(l2e));
-
-    return 0;
-}
-
-int put_page_from_l3e(l3_pgentry_t l3e, unsigned long pfn, int partial,
-                      bool defer)
-{
-    struct page_info *pg;
-
-    if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) || (l3e_get_pfn(l3e) == pfn) )
-        return 1;
-
-    if ( unlikely(l3e_get_flags(l3e) & _PAGE_PSE) )
-    {
-        unsigned long mfn = l3e_get_pfn(l3e);
-        int writeable = l3e_get_flags(l3e) & _PAGE_RW;
-
-        ASSERT(!(mfn & ((1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) - 1)));
-        do {
-            put_data_page(mfn_to_page(mfn), writeable);
-        } while ( ++mfn & ((1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) - 1) );
-
-        return 0;
-    }
-
-    pg = l3e_get_page(l3e);
-
-    if ( unlikely(partial > 0) )
-    {
-        ASSERT(!defer);
-        return put_page_type_preemptible(pg);
-    }
-
-    if ( defer )
-    {
-        current->arch.old_guest_table = pg;
-        return 0;
-    }
-
-    return put_page_and_type_preemptible(pg);
-}
-
-int put_page_from_l4e(l4_pgentry_t l4e, unsigned long pfn, int partial,
-                      bool defer)
-{
-    if ( (l4e_get_flags(l4e) & _PAGE_PRESENT) &&
-         (l4e_get_pfn(l4e) != pfn) )
-    {
-        struct page_info *pg = l4e_get_page(l4e);
-
-        if ( unlikely(partial > 0) )
-        {
-            ASSERT(!defer);
-            return put_page_type_preemptible(pg);
-        }
-
-        if ( defer )
-        {
-            current->arch.old_guest_table = pg;
-            return 0;
-        }
-
-        return put_page_and_type_preemptible(pg);
-    }
-    return 1;
-}
-
 bool fill_ro_mpt(unsigned long mfn)
 {
     l4_pgentry_t *l4tab = map_domain_page(_mfn(mfn));
diff --git a/xen/arch/x86/pv/mm.c b/xen/arch/x86/pv/mm.c
index 19b2ae588e..ad35808c51 100644
--- a/xen/arch/x86/pv/mm.c
+++ b/xen/arch/x86/pv/mm.c
@@ -777,6 +777,252 @@ void pv_invalidate_shadow_ldt(struct vcpu *v, bool flush)
     spin_unlock(&v->arch.pv_vcpu.shadow_ldt_lock);
 }
 
+/*
+ * We allow root tables to map each other (a.k.a. linear page tables). It
+ * needs some special care with reference counts and access permissions:
+ *  1. The mapping entry must be read-only, or the guest may get write access
+ *     to its own PTEs.
+ *  2. We must only bump the reference counts for an *already validated*
+ *     L2 table, or we can end up in a deadlock in get_page_type() by waiting
+ *     on a validation that is required to complete that validation.
+ *  3. We only need to increment the reference counts for the mapped page
+ *     frame if it is mapped by a different root table. This is sufficient and
+ *     also necessary to allow validation of a root table mapping itself.
+ */
+#define define_get_linear_pagetable(level)                                  \
+static int                                                                  \
+get_##level##_linear_pagetable(                                             \
+    level##_pgentry_t pde, unsigned long pde_pfn, struct domain *d)         \
+{                                                                           \
+    unsigned long x, y;                                                     \
+    struct page_info *page;                                                 \
+    unsigned long pfn;                                                      \
+                                                                            \
+    if ( (level##e_get_flags(pde) & _PAGE_RW) )                             \
+    {                                                                       \
+        gdprintk(XENLOG_WARNING,                                            \
+                 "Attempt to create linear p.t. with write perms\n");       \
+        return 0;                                                           \
+    }                                                                       \
+                                                                            \
+    if ( (pfn = level##e_get_pfn(pde)) != pde_pfn )                         \
+    {                                                                       \
+        /* Make sure the mapped frame belongs to the correct domain. */     \
+        if ( unlikely(!get_page_from_mfn(_mfn(pfn), d)) )                   \
+            return 0;                                                       \
+                                                                            \
+        /*                                                                  \
+         * Ensure that the mapped frame is an already-validated page table. \
+         * If so, atomically increment the count (checking for overflow).   \
+         */                                                                 \
+        page = mfn_to_page(pfn);                                            \
+        y = page->u.inuse.type_info;                                        \
+        do {                                                                \
+            x = y;                                                          \
+            if ( unlikely((x & PGT_count_mask) == PGT_count_mask) ||        \
+                 unlikely((x & (PGT_type_mask|PGT_validated)) !=            \
+                          (PGT_##level##_page_table|PGT_validated)) )       \
+            {                                                               \
+                put_page(page);                                             \
+                return 0;                                                   \
+            }                                                               \
+        }                                                                   \
+        while ( (y = cmpxchg(&page->u.inuse.type_info, x, x + 1)) != x );   \
+    }                                                                       \
+                                                                            \
+    return 1;                                                               \
+}
+
+/* NB. Virtual address 'l2e' maps to a machine address within frame 'pfn'. */
+/*
+ * get_page_from_l2e returns:
+ *   1 => page not present
+ *   0 => success
+ *  <0 => error code
+ */
+define_get_linear_pagetable(l2);
+int get_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn, struct domain *d)
+{
+    unsigned long mfn = l2e_get_pfn(l2e);
+    int rc;
+
+    if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
+        return 1;
+
+    if ( unlikely((l2e_get_flags(l2e) & L2_DISALLOW_MASK)) )
+    {
+        gdprintk(XENLOG_WARNING, "Bad L2 flags %x\n",
+                 l2e_get_flags(l2e) & L2_DISALLOW_MASK);
+        return -EINVAL;
+    }
+
+    if ( !(l2e_get_flags(l2e) & _PAGE_PSE) )
+    {
+        rc = get_page_and_type_from_mfn(_mfn(mfn), PGT_l1_page_table, d, 0,
+                                        false);
+        if ( unlikely(rc == -EINVAL) && get_l2_linear_pagetable(l2e, pfn, d) )
+            rc = 0;
+        return rc;
+    }
+
+    return -EINVAL;
+}
+
+/*
+ * get_page_from_l3e returns:
+ *   1 => page not present
+ *   0 => success
+ *  <0 => error code
+ */
+define_get_linear_pagetable(l3);
+int get_page_from_l3e(l3_pgentry_t l3e, unsigned long pfn, struct domain *d,
+                      int partial)
+{
+    int rc;
+
+    if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
+        return 1;
+
+    if ( unlikely((l3e_get_flags(l3e) & l3_disallow_mask(d))) )
+    {
+        gdprintk(XENLOG_WARNING, "Bad L3 flags %x\n",
+                 l3e_get_flags(l3e) & l3_disallow_mask(d));
+        return -EINVAL;
+    }
+
+    rc = get_page_and_type_from_mfn(_mfn(l3e_get_pfn(l3e)), PGT_l2_page_table,
+                                    d, partial, true);
+    if ( unlikely(rc == -EINVAL) &&
+         !is_pv_32bit_domain(d) &&
+         get_l3_linear_pagetable(l3e, pfn, d) )
+        rc = 0;
+
+    return rc;
+}
+
+/*
+ * get_page_from_l4e returns:
+ *   1 => page not present
+ *   0 => success
+ *  <0 => error code
+ */
+define_get_linear_pagetable(l4);
+int get_page_from_l4e(l4_pgentry_t l4e, unsigned long pfn, struct domain *d,
+                      int partial)
+{
+    int rc;
+
+    if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
+        return 1;
+
+    if ( unlikely((l4e_get_flags(l4e) & L4_DISALLOW_MASK)) )
+    {
+        gdprintk(XENLOG_WARNING, "Bad L4 flags %x\n",
+                 l4e_get_flags(l4e) & L4_DISALLOW_MASK);
+        return -EINVAL;
+    }
+
+    rc = get_page_and_type_from_mfn(_mfn(l4e_get_pfn(l4e)), PGT_l3_page_table,
+                                    d, partial, true);
+    if ( unlikely(rc == -EINVAL) && get_l4_linear_pagetable(l4e, pfn, d) )
+        rc = 0;
+
+    return rc;
+}
+
+/*
+ * NB. Virtual address 'l2e' maps to a machine address within frame 'pfn'.
+ * Note also that this automatically deals correctly with linear p.t.'s.
+ */
+int put_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn)
+{
+    if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) || (l2e_get_pfn(l2e) == pfn) )
+        return 1;
+
+    if ( l2e_get_flags(l2e) & _PAGE_PSE )
+    {
+        struct page_info *page = mfn_to_page(l2e_get_pfn(l2e));
+        unsigned int i;
+
+        for ( i = 0; i < (1u << PAGETABLE_ORDER); i++, page++ )
+            put_page_and_type(page);
+    } else
+        put_page_and_type(l2e_get_page(l2e));
+
+    return 0;
+}
+
+static void put_data_page(struct page_info *page, bool writeable)
+{
+    if ( writeable )
+        put_page_and_type(page);
+    else
+        put_page(page);
+}
+
+int put_page_from_l3e(l3_pgentry_t l3e, unsigned long pfn, int partial,
+                      bool defer)
+{
+    struct page_info *pg;
+
+    if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) || (l3e_get_pfn(l3e) == pfn) )
+        return 1;
+
+    if ( unlikely(l3e_get_flags(l3e) & _PAGE_PSE) )
+    {
+        unsigned long mfn = l3e_get_pfn(l3e);
+        int writeable = l3e_get_flags(l3e) & _PAGE_RW;
+
+        ASSERT(!(mfn & ((1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) - 1)));
+        do {
+            put_data_page(mfn_to_page(mfn), writeable);
+        } while ( ++mfn & ((1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) - 1) );
+
+        return 0;
+    }
+
+    pg = l3e_get_page(l3e);
+
+    if ( unlikely(partial > 0) )
+    {
+        ASSERT(!defer);
+        return put_page_type_preemptible(pg);
+    }
+
+    if ( defer )
+    {
+        current->arch.old_guest_table = pg;
+        return 0;
+    }
+
+    return put_page_and_type_preemptible(pg);
+}
+
+int put_page_from_l4e(l4_pgentry_t l4e, unsigned long pfn, int partial,
+                      bool defer)
+{
+    if ( (l4e_get_flags(l4e) & _PAGE_PRESENT) &&
+         (l4e_get_pfn(l4e) != pfn) )
+    {
+        struct page_info *pg = l4e_get_page(l4e);
+
+        if ( unlikely(partial > 0) )
+        {
+            ASSERT(!defer);
+            return put_page_type_preemptible(pg);
+        }
+
+        if ( defer )
+        {
+            current->arch.old_guest_table = pg;
+            return 0;
+        }
+
+        return put_page_and_type_preemptible(pg);
+    }
+    return 1;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 7480341240..4eeaf709c1 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -358,16 +358,6 @@ int put_old_guest_table(struct vcpu *);
 int get_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner,
                       struct domain *pg_owner);
 void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner);
-int get_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn, struct domain *d);
-int put_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn);
-int get_page_from_l3e(l3_pgentry_t l3e, unsigned long pfn, struct domain *d,
-                      int partial);
-int put_page_from_l3e(l3_pgentry_t l3e, unsigned long pfn, int partial,
-                      bool defer);
-int get_page_from_l4e(l4_pgentry_t l4e, unsigned long pfn, struct domain *d,
-                      int partial);
-int put_page_from_l4e(l4_pgentry_t l4e, unsigned long pfn, int partial,
-                      bool defer);
 void get_page_light(struct page_info *page);
 bool get_page_from_mfn(mfn_t mfn, struct domain *d);
 int get_page_and_type_from_mfn(mfn_t mfn, unsigned long type, struct domain *d,
diff --git a/xen/include/asm-x86/pv/mm.h b/xen/include/asm-x86/pv/mm.h
index 664d7c3868..fb6dbb97ee 100644
--- a/xen/include/asm-x86/pv/mm.h
+++ b/xen/include/asm-x86/pv/mm.h
@@ -103,6 +103,17 @@ int pv_free_page_type(struct page_info *page, unsigned long type,
 
 void pv_invalidate_shadow_ldt(struct vcpu *v, bool flush);
 
+int get_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn, struct domain *d);
+int put_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn);
+int get_page_from_l3e(l3_pgentry_t l3e, unsigned long pfn, struct domain *d,
+                      int partial);
+int put_page_from_l3e(l3_pgentry_t l3e, unsigned long pfn, int partial,
+                      bool defer);
+int get_page_from_l4e(l4_pgentry_t l4e, unsigned long pfn, struct domain *d,
+                      int partial);
+int put_page_from_l4e(l4_pgentry_t l4e, unsigned long pfn, int partial,
+                      bool defer);
+
 #else
 
 #include <xen/errno.h>
 
@@ -142,6 +153,24 @@ static inline int pv_free_page_type(struct page_info *page, unsigned long type,
 
 static inline void pv_invalidate_shadow_ldt(struct vcpu *v, bool flush) {}
 
+static inline int get_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn,
+                                    struct domain *d)
+{ return -EINVAL; }
+static inline int put_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn)
+{ return -EINVAL; }
+static inline int get_page_from_l3e(l3_pgentry_t l3e, unsigned long pfn,
+                                    struct domain *d, int partial)
+{ return -EINVAL; }
+static inline int put_page_from_l3e(l3_pgentry_t l3e, unsigned long pfn,
+                                    int partial, bool defer)
+{ return -EINVAL; }
+static inline int get_page_from_l4e(l4_pgentry_t l4e, unsigned long pfn,
+                                    struct domain *d, int partial)
+{ return -EINVAL; }
+static inline int put_page_from_l4e(l4_pgentry_t l4e, unsigned long pfn,
+                                    int partial, bool defer)
+{ return -EINVAL; }
+
 #endif
 
 #endif /* __X86_PV_MM_H__ */
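
For reference, the moved functions keep their three-way return convention
(1 => entry not present, 0 => success, <0 => error code), as documented in
the comments above each of them. A minimal sketch of a caller honouring
that convention; validate_l2_slot() is a hypothetical name for illustration
only, not a function added by this series:

/* Hypothetical caller -- illustrates the return convention only. */
static int validate_l2_slot(l2_pgentry_t l2e, unsigned long pfn,
                            struct domain *d)
{
    int rc = get_page_from_l2e(l2e, pfn, d);

    if ( rc < 0 )
        return rc; /* Bad flags or failed refcount: reject the entry. */

    /*
     * rc == 1: entry not present, nothing was referenced.
     * rc == 0: references were taken; a later put_page_from_l2e(l2e, pfn)
     * must drop them when the entry is torn down.
     */
    return 0;
}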