From patchwork Wed Aug 16 18:23:19 2017
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 9904479
From: Andrew Cooper
To: Xen-devel
Date: Wed, 16 Aug 2017 19:23:19 +0100
Message-ID: <1502907799-24072-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.1.4
Cc: Andrew Cooper, Boris Ostrovsky, Suravee Suthikulpanit, Jan Beulich
Subject: [Xen-devel] [PATCH] x86/svm: Use physical addresses for HSA and Host VMCB

They are only referenced by physical address (either the HSA MSR, or via
VMSAVE/VMLOAD, which take a physical operand).  Allocating xenheap pages and
storing their virtual address is wasteful.

Allocate them as domheap pages instead, taking the opportunity to suitably
NUMA-position them.  This avoids Xen needing to perform a virt-to-phys
translation on every context switch.

Signed-off-by: Andrew Cooper
---
CC: Jan Beulich
CC: Boris Ostrovsky
CC: Suravee Suthikulpanit

TODO at some other point: Figure out why svm_cpu_up_prepare() is reliably
called twice for every CPU.
---
 xen/arch/x86/hvm/svm/svm.c         | 72 ++++++++++++++++++++++++++++----------
 xen/arch/x86/hvm/svm/vmcb.c        | 15 --------
 xen/include/asm-x86/hvm/svm/vmcb.h |  1 -
 3 files changed, 54 insertions(+), 34 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 0dc9442..599a8d3 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -72,11 +72,13 @@ static void svm_update_guest_efer(struct vcpu *);
 
 static struct hvm_function_table svm_function_table;
 
-/* va of hardware host save area */
-static DEFINE_PER_CPU_READ_MOSTLY(void *, hsa);
-
-/* vmcb used for extended host state */
-static DEFINE_PER_CPU_READ_MOSTLY(void *, root_vmcb);
+/*
+ * Physical addresses of the Host State Area (for hardware) and vmcb (for Xen)
+ * which contains Xen's fs/gs/tr/ldtr and GSBASE/STAR/SYSENTER state when in
+ * guest vcpu context.
+ */
+static DEFINE_PER_CPU_READ_MOSTLY(paddr_t, hsa);
+static DEFINE_PER_CPU_READ_MOSTLY(paddr_t, host_vmcb);
 
 static bool_t amd_erratum383_found __read_mostly;
 
@@ -1015,7 +1017,7 @@ static void svm_ctxt_switch_from(struct vcpu *v)
     svm_tsc_ratio_save(v);
 
     svm_sync_vmcb(v);
-    svm_vmload(per_cpu(root_vmcb, cpu));
+    svm_vmload_pa(per_cpu(host_vmcb, cpu));
 
     /* Resume use of ISTs now that the host TR is reinstated. */
     set_ist(&idt_tables[cpu][TRAP_double_fault], IST_DF);
@@ -1045,7 +1047,7 @@ static void svm_ctxt_switch_to(struct vcpu *v)
 
     svm_restore_dr(v);
 
-    svm_vmsave(per_cpu(root_vmcb, cpu));
+    svm_vmsave_pa(per_cpu(host_vmcb, cpu));
     svm_vmload(vmcb);
     vmcb->cleanbits.bytes = 0;
     svm_lwp_load(v);
@@ -1468,24 +1470,58 @@ static int svm_event_pending(struct vcpu *v)
 
 static void svm_cpu_dead(unsigned int cpu)
 {
-    free_xenheap_page(per_cpu(hsa, cpu));
-    per_cpu(hsa, cpu) = NULL;
-    free_vmcb(per_cpu(root_vmcb, cpu));
-    per_cpu(root_vmcb, cpu) = NULL;
+    paddr_t *this_hsa = &per_cpu(hsa, cpu);
+    paddr_t *this_vmcb = &per_cpu(host_vmcb, cpu);
+
+    if ( *this_hsa )
+    {
+        free_domheap_page(maddr_to_page(*this_hsa));
+        *this_hsa = 0;
+    }
+
+    if ( *this_vmcb )
+    {
+        free_domheap_page(maddr_to_page(*this_vmcb));
+        *this_vmcb = 0;
+    }
 }
 
 static int svm_cpu_up_prepare(unsigned int cpu)
 {
-    if ( ((per_cpu(hsa, cpu) == NULL) &&
-          ((per_cpu(hsa, cpu) = alloc_host_save_area()) == NULL)) ||
-         ((per_cpu(root_vmcb, cpu) == NULL) &&
-          ((per_cpu(root_vmcb, cpu) = alloc_vmcb()) == NULL)) )
+    paddr_t *this_hsa = &per_cpu(hsa, cpu);
+    paddr_t *this_vmcb = &per_cpu(host_vmcb, cpu);
+    nodeid_t node = cpu_to_node(cpu);
+    unsigned int memflags = 0;
+    struct page_info *pg;
+
+    if ( node != NUMA_NO_NODE )
+        memflags = MEMF_node(node);
+
+    if ( !*this_hsa )
+    {
+        pg = alloc_domheap_page(NULL, memflags);
+        if ( !pg )
+            goto err;
+
+        clear_domain_page(_mfn(page_to_mfn(pg)));
+        *this_hsa = page_to_maddr(pg);
+    }
+
+    if ( !*this_vmcb )
     {
-        svm_cpu_dead(cpu);
-        return -ENOMEM;
+        pg = alloc_domheap_page(NULL, memflags);
+        if ( !pg )
+            goto err;
+
+        clear_domain_page(_mfn(page_to_mfn(pg)));
+        *this_vmcb = page_to_maddr(pg);
     }
 
     return 0;
+
+ err:
+    svm_cpu_dead(cpu);
+    return -ENOMEM;
 }
 
 static void svm_init_erratum_383(const struct cpuinfo_x86 *c)
@@ -1544,7 +1580,7 @@ static int _svm_cpu_up(bool bsp)
     write_efer(read_efer() | EFER_SVME);
 
     /* Initialize the HSA for this core. */
-    wrmsrl(MSR_K8_VM_HSAVE_PA, (uint64_t)virt_to_maddr(per_cpu(hsa, cpu)));
+    wrmsrl(MSR_K8_VM_HSAVE_PA, per_cpu(hsa, cpu));
 
     /* check for erratum 383 */
     svm_init_erratum_383(c);
diff --git a/xen/arch/x86/hvm/svm/vmcb.c b/xen/arch/x86/hvm/svm/vmcb.c
index 9493215..997e759 100644
--- a/xen/arch/x86/hvm/svm/vmcb.c
+++ b/xen/arch/x86/hvm/svm/vmcb.c
@@ -50,21 +50,6 @@ void free_vmcb(struct vmcb_struct *vmcb)
     free_xenheap_page(vmcb);
 }
 
-struct host_save_area *alloc_host_save_area(void)
-{
-    struct host_save_area *hsa;
-
-    hsa = alloc_xenheap_page();
-    if ( hsa == NULL )
-    {
-        printk(XENLOG_WARNING "Warning: failed to allocate hsa.\n");
-        return NULL;
-    }
-
-    clear_page(hsa);
-    return hsa;
-}
-
 /* This function can directly access fields which are covered by clean bits. */
 static int construct_vmcb(struct vcpu *v)
 {
diff --git a/xen/include/asm-x86/hvm/svm/vmcb.h b/xen/include/asm-x86/hvm/svm/vmcb.h
index ec22d91..01ce20b 100644
--- a/xen/include/asm-x86/hvm/svm/vmcb.h
+++ b/xen/include/asm-x86/hvm/svm/vmcb.h
@@ -526,7 +526,6 @@ struct arch_svm_struct {
 };
 
 struct vmcb_struct *alloc_vmcb(void);
-struct host_save_area *alloc_host_save_area(void);
 void free_vmcb(struct vmcb_struct *vmcb);
 
 int svm_create_vmcb(struct vcpu *v);
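
Note for readers unfamiliar with the _pa variants used above: VMLOAD and
VMSAVE consume the physical address of the save area in rAX, so a cached
paddr_t can be handed straight to the instruction with no translation on the
context-switch path.  A rough sketch of the assumed shape of these
pre-existing helpers (the exact definitions live in the SVM headers and may
differ slightly):

static inline void svm_vmload_pa(paddr_t vmcb)
{
    /* VMLOAD: rAX holds the physical address of the VMCB/save area. */
    asm volatile (
        ".byte 0x0f,0x01,0xda" /* vmload */
        : : "a" (vmcb) : "memory" );
}

static inline void svm_vmsave_pa(paddr_t vmcb)
{
    /* VMSAVE: rAX holds the physical address of the VMCB/save area. */
    asm volatile (
        ".byte 0x0f,0x01,0xdb" /* vmsave */
        : : "a" (vmcb) : "memory" );
}

The same reasoning applies to the HSA: MSR_K8_VM_HSAVE_PA expects a physical
address, so the stored per-cpu value can be written to the MSR as-is.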