From patchwork Tue Feb 14 10:29:50 2017
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 9571655
Message-Id: <58A2EA2E0200007800139961@prv-mh.provo.novell.com>
Date: Tue, 14 Feb 2017 03:29:50 -0700
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <58A2E8B70200007800139946@prv-mh.provo.novell.com>
In-Reply-To: <58A2E8B70200007800139946@prv-mh.provo.novell.com>
Cc: Boris Ostrovsky, Andrew Cooper, Kevin Tian, Jun Nakajima, Suravee Suthikulpanit
Subject: [Xen-devel] [PATCH 2/2] x86: package up context switch hook pointers

They're all solely dependent on guest type, so we don't need to repeat
all the same four pointers in every vCPU control structure. Instead use
static const structures, and store pointers to them in the domain
control structure.

Since touching it anyway, take the opportunity and move schedule_tail()
into the only C file needing it.

Signed-off-by: Jan Beulich

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -426,16 +426,8 @@ int vcpu_initialise(struct vcpu *v)
         /* PV guests by default have a 100Hz ticker. */
         v->periodic_period = MILLISECS(10);
     }
-
-    v->arch.schedule_tail = continue_nonidle_domain;
-    v->arch.ctxt_switch_from = paravirt_ctxt_switch_from;
-    v->arch.ctxt_switch_to = paravirt_ctxt_switch_to;
-
-    if ( is_idle_domain(d) )
-    {
-        v->arch.schedule_tail = continue_idle_domain;
-        v->arch.cr3 = __pa(idle_pg_table);
-    }
+    else
+        v->arch.cr3 = __pa(idle_pg_table);
 
     v->arch.pv_vcpu.ctrlreg[4] = real_cr4_to_pv_guest_cr4(mmu_cr4_features);
 
@@ -642,8 +634,23 @@ int arch_domain_create(struct domain *d,
             goto fail;
     }
     else
+    {
+        static const struct arch_csw pv_csw = {
+            .from = paravirt_ctxt_switch_from,
+            .to = paravirt_ctxt_switch_to,
+            .tail = continue_nonidle_domain,
+        };
+        static const struct arch_csw idle_csw = {
+            .from = paravirt_ctxt_switch_from,
+            .to = paravirt_ctxt_switch_to,
+            .tail = continue_idle_domain,
+        };
+
+        d->arch.ctxt_switch = is_idle_domain(d) ? &idle_csw : &pv_csw;
+
         /* 64-bit PV guest by default. */
         d->arch.is_32bit_pv = d->arch.has_32bit_shinfo = 0;
+    }
 
     /* initialize default tsc behavior in case tools don't */
     tsc_set_info(d, TSC_MODE_DEFAULT, 0UL, 0, 0);
@@ -1997,7 +2004,7 @@ static void __context_switch(void)
     {
         memcpy(&p->arch.user_regs, stack_regs, CTXT_SWITCH_STACK_BYTES);
         vcpu_save_fpu(p);
-        p->arch.ctxt_switch_from(p);
+        pd->arch.ctxt_switch->from(p);
     }
 
     /*
@@ -2023,7 +2030,7 @@ static void __context_switch(void)
                 set_msr_xss(n->arch.hvm_vcpu.msr_xss);
         }
         vcpu_restore_fpu_eager(n);
-        n->arch.ctxt_switch_to(n);
+        nd->arch.ctxt_switch->to(n);
     }
 
     psr_ctxt_switch_to(nd);
@@ -2066,6 +2073,15 @@ static void __context_switch(void)
     per_cpu(curr_vcpu, cpu) = n;
 }
 
+/*
+ * Schedule tail *should* be a terminal function pointer, but leave a bugframe
+ * around just incase it returns, to save going back into the context
+ * switching code and leaving a far more subtle crash to diagnose.
+ */
+#define schedule_tail(vcpu) do {                            \
+        (((vcpu)->domain->arch.ctxt_switch->tail)(vcpu));   \
+        BUG();                                              \
+    } while (0)
 
 void context_switch(struct vcpu *prev, struct vcpu *next)
 {
@@ -2100,8 +2116,8 @@ void context_switch(struct vcpu *prev, s
 
     if ( (per_cpu(curr_vcpu, cpu) == next) )
     {
-        if ( next->arch.ctxt_switch_same )
-            next->arch.ctxt_switch_same(next);
+        if ( nextd->arch.ctxt_switch->same )
+            nextd->arch.ctxt_switch->same(next);
         local_irq_enable();
     }
     else if ( is_idle_domain(nextd) && cpu_online(cpu) )
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1144,6 +1144,14 @@ void svm_host_osvw_init()
 
 static int svm_domain_initialise(struct domain *d)
 {
+    static const struct arch_csw csw = {
+        .from = svm_ctxt_switch_from,
+        .to = svm_ctxt_switch_to,
+        .tail = svm_do_resume,
+    };
+
+    d->arch.ctxt_switch = &csw;
+
     return 0;
 }
 
@@ -1155,10 +1163,6 @@ static int svm_vcpu_initialise(struct vc
 {
     int rc;
 
-    v->arch.schedule_tail = svm_do_resume;
-    v->arch.ctxt_switch_from = svm_ctxt_switch_from;
-    v->arch.ctxt_switch_to = svm_ctxt_switch_to;
-
     v->arch.hvm_svm.launch_core = -1;
 
     if ( (rc = svm_create_vmcb(v)) != 0 )
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -268,8 +268,16 @@ void vmx_pi_hooks_deassign(struct domain
 
 static int vmx_domain_initialise(struct domain *d)
 {
+    static const struct arch_csw csw = {
+        .from = vmx_ctxt_switch_from,
+        .to = vmx_ctxt_switch_to,
+        .same = vmx_vmcs_reload,
+        .tail = vmx_do_resume,
+    };
     int rc;
 
+    d->arch.ctxt_switch = &csw;
+
     if ( !has_vlapic(d) )
         return 0;
 
@@ -295,11 +303,6 @@ static int vmx_vcpu_initialise(struct vc
 
     INIT_LIST_HEAD(&v->arch.hvm_vmx.pi_blocking.list);
 
-    v->arch.schedule_tail = vmx_do_resume;
-    v->arch.ctxt_switch_from = vmx_ctxt_switch_from;
-    v->arch.ctxt_switch_to = vmx_ctxt_switch_to;
-    v->arch.ctxt_switch_same = vmx_vmcs_reload;
-
     if ( (rc = vmx_create_vmcs(v)) != 0 )
     {
         dprintk(XENLOG_WARNING,
--- a/xen/include/asm-x86/current.h
+++ b/xen/include/asm-x86/current.h
@@ -103,16 +103,6 @@ unsigned long get_stack_dump_bottom (uns
 })
 
 /*
- * Schedule tail *should* be a terminal function pointer, but leave a bugframe
- * around just incase it returns, to save going back into the context
- * switching code and leaving a far more subtle crash to diagnose.
- */
-#define schedule_tail(vcpu) do {                \
-        (((vcpu)->arch.schedule_tail)(vcpu));   \
-        BUG();                                  \
-    } while (0)
-
-/*
  * Which VCPU's state is currently running on each CPU?
  * This is not necesasrily the same as 'current' as a CPU may be
  * executing a lazy state switch.
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -314,6 +314,13 @@ struct arch_domain
     } relmem;
     struct page_list_head relmem_list;
 
+    const struct arch_csw {
+        void (*from)(struct vcpu *);
+        void (*to)(struct vcpu *);
+        void (*same)(struct vcpu *);
+        void (*tail)(struct vcpu *);
+    } *ctxt_switch;
+
     /* nestedhvm: translate l2 guest physical to host physical */
     struct p2m_domain *nested_p2m[MAX_NESTEDP2M];
     mm_lock_t nested_p2m_lock;
@@ -510,12 +517,6 @@ struct arch_vcpu
 
     unsigned long flags; /* TF_ */
 
-    void (*schedule_tail) (struct vcpu *);
-
-    void (*ctxt_switch_from) (struct vcpu *);
-    void (*ctxt_switch_to) (struct vcpu *);
-    void (*ctxt_switch_same) (struct vcpu *);
-
     struct vpmu_struct vpmu;
 
     /* Virtual Machine Extensions */

Reviewed-by: Boris Ostrovsky
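
For anyone not familiar with this kind of hook table, here is a minimal,
self-contained sketch of the pattern the patch adopts: one const table of
per-guest-type function pointers, referenced from the domain instead of
being duplicated in every vCPU. All toy_* names below are purely
illustrative and are not Xen's real types or functions.

/*
 * Illustrative sketch only (not part of the patch): a read-only hook
 * table shared by all domains of one guest type, with each domain
 * holding a single pointer to it.
 */
#include <stdio.h>

struct toy_vcpu;

struct toy_csw {
    void (*from)(struct toy_vcpu *);   /* save state of outgoing vCPU */
    void (*to)(struct toy_vcpu *);     /* load state of incoming vCPU */
    void (*tail)(struct toy_vcpu *);   /* resume the guest (terminal) */
};

struct toy_domain {
    const struct toy_csw *ctxt_switch; /* one pointer per domain */
};

struct toy_vcpu {
    struct toy_domain *domain;
};

static void pv_from(struct toy_vcpu *v) { (void)v; puts("pv: save state"); }
static void pv_to(struct toy_vcpu *v)   { (void)v; puts("pv: load state"); }
static void pv_tail(struct toy_vcpu *v) { (void)v; puts("pv: resume guest"); }

/* One shared, read-only table for the whole guest type. */
static const struct toy_csw pv_csw = {
    .from = pv_from,
    .to   = pv_to,
    .tail = pv_tail,
};

static void toy_context_switch(struct toy_vcpu *prev, struct toy_vcpu *next)
{
    /* Hooks are reached through the domain, as in the patch. */
    prev->domain->ctxt_switch->from(prev);
    next->domain->ctxt_switch->to(next);
    next->domain->ctxt_switch->tail(next);
}

int main(void)
{
    struct toy_domain d = { .ctxt_switch = &pv_csw };
    struct toy_vcpu a = { .domain = &d }, b = { .domain = &d };

    toy_context_switch(&a, &b);
    return 0;
}

Because the tables are static const they live in read-only data, so a
stray write cannot silently retarget the hooks, and each vCPU structure
shrinks by four pointers.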