From patchwork Fri Dec 18 07:50:03 2015
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 7880281
Message-Id: <5673C8BB02000078000C0FEB@prv-mh.provo.novell.com>
Date: Fri, 18 Dec 2015 00:50:03 -0700
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel"
Cc: Andrew Cooper, Kevin Tian, Keir Fraser, Jun Nakajima
Subject: [Xen-devel] [PATCH v2] VMX: allocate APIC access page from domain heap

... since we don't need its virtual address anywhere (it's only a
placeholder page, after all).

For this to work (and possibly to be done elsewhere too),
share_xen_page_with_guest() needs to mark pages handed to it as Xen
heap ones.

To be on the safe side, also explicitly clear the page. (Not having
done so was okay due to the XSA-100 fix, but it is still a latent bug,
since we don't formally guarantee that allocations come out zeroed;
in fact this property may disappear again as soon as the asynchronous
runtime scrubbing patches arrive.)

Signed-off-by: Jan Beulich
Reviewed-by: Andrew Cooper
Acked-by: Kevin Tian
---
v2: Introduce free_shared_domheap_page().
---
Alternatives might be to use a
- global page across VMs (on the basis that VMs shouldn't be accessing
  that page anyway)
- fake MFN pointing into nowhere (would need to ensure no side effects
  can occur, like PCIe errors or NMIs)
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2489,18 +2489,21 @@ gp_fault:
 
 static int vmx_alloc_vlapic_mapping(struct domain *d)
 {
-    void *apic_va;
+    struct page_info *pg;
+    unsigned long mfn;
 
     if ( !cpu_has_vmx_virtualize_apic_accesses )
         return 0;
 
-    apic_va = alloc_xenheap_page();
-    if ( apic_va == NULL )
+    pg = alloc_domheap_page(d, MEMF_no_owner);
+    if ( !pg )
         return -ENOMEM;
-    share_xen_page_with_guest(virt_to_page(apic_va), d, XENSHARE_writable);
-    d->arch.hvm_domain.vmx.apic_access_mfn = virt_to_mfn(apic_va);
-    set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE),
-                       _mfn(virt_to_mfn(apic_va)), p2m_get_hostp2m(d)->default_access);
+    mfn = page_to_mfn(pg);
+    clear_domain_page(_mfn(mfn));
+    share_xen_page_with_guest(pg, d, XENSHARE_writable);
+    d->arch.hvm_domain.vmx.apic_access_mfn = mfn;
+    set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE), _mfn(mfn),
+                       p2m_get_hostp2m(d)->default_access);
 
     return 0;
 }
@@ -2508,8 +2511,9 @@ static int vmx_alloc_vlapic_mapping(stru
 static void vmx_free_vlapic_mapping(struct domain *d)
 {
     unsigned long mfn = d->arch.hvm_domain.vmx.apic_access_mfn;
+
     if ( mfn != 0 )
-        free_xenheap_page(mfn_to_virt(mfn));
+        free_shared_domheap_page(mfn_to_page(mfn));
 }
 
 static void vmx_install_vlapic_mapping(struct vcpu *v)
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -454,7 +454,7 @@ void share_xen_page_with_guest(
     /* Only add to the allocation list if the domain isn't dying. */
     if ( !d->is_dying )
     {
-        page->count_info |= PGC_allocated | 1;
+        page->count_info |= PGC_xen_heap | PGC_allocated | 1;
         if ( unlikely(d->xenheap_pages++ == 0) )
             get_knownalive_domain(d);
         page_list_add_tail(page, &d->xenpage_list);
@@ -469,6 +469,17 @@ void share_xen_page_with_privileged_gues
     share_xen_page_with_guest(page, dom_xen, readonly);
 }
 
+void free_shared_domheap_page(struct page_info *page)
+{
+    if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
+        put_page(page);
+    if ( !test_and_clear_bit(_PGC_xen_heap, &page->count_info) )
+        ASSERT_UNREACHABLE();
+    page->u.inuse.type_info = 0;
+    page_set_owner(page, NULL);
+    free_domheap_page(page);
+}
+
 void make_cr3(struct vcpu *v, unsigned long mfn)
 {
     v->arch.cr3 = mfn << PAGE_SHIFT;
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -276,6 +276,7 @@ extern void share_xen_page_with_guest(
     struct page_info *page, struct domain *d, int readonly);
 extern void share_xen_page_with_privileged_guests(
     struct page_info *page, int readonly);
+extern void free_shared_domheap_page(struct page_info *page);
 
 #define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
 #define spage_table ((struct spage_info *)SPAGETABLE_VIRT_START)