Message ID | 58A2EA2E0200007800139961@prv-mh.provo.novell.com (mailing list archive) |
---|---|
State | New, archived |
On 14/02/17 10:29, Jan Beulich wrote:
> They're all solely dependent on guest type, so we don't need to repeat
> all the same four pointers in every vCPU control structure. Instead use
> static const structures, and store pointers to them in the domain
> control structure.
>
> Since touching it anyway, take the opportunity and move schedule_tail()
> into the only C file needing it.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

+1. This has been a niggle on my todo list for ages.

> @@ -2066,6 +2073,15 @@ static void __context_switch(void)
>      per_cpu(curr_vcpu, cpu) = n;
>  }
>
> +/*
> + * Schedule tail *should* be a terminal function pointer, but leave a bugframe
> + * around just incase it returns, to save going back into the context
> + * switching code and leaving a far more subtle crash to diagnose.
> + */
> +#define schedule_tail(vcpu) do {                              \
> +        (((vcpu)->domain->arch.ctxt_switch->tail)(vcpu));     \
> +        BUG();                                                \
> +    } while (0)

schedule_tail() is used only twice.  I'd suggest dropping it entirely
and calling the ->tail() function pointer normally, rather than hiding
it like this.

Upon reviewing this patch, don't we also need a ->same() call in the
continue_same() path in the previous patch?

~Andrew
On 02/14/2017 05:29 AM, Jan Beulich wrote:
> They're all solely dependent on guest type, so we don't need to repeat
> all the same four pointers in every vCPU control structure. Instead use
> static const structures, and store pointers to them in the domain
> control structure.
>
> Since touching it anyway, take the opportunity and move schedule_tail()
> into the only C file needing it.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>> On 14.02.17 at 16:26, <andrew.cooper3@citrix.com> wrote:
> On 14/02/17 10:29, Jan Beulich wrote:
>> @@ -2066,6 +2073,15 @@ static void __context_switch(void)
>>      per_cpu(curr_vcpu, cpu) = n;
>>  }
>>
>> +/*
>> + * Schedule tail *should* be a terminal function pointer, but leave a bugframe
>> + * around just incase it returns, to save going back into the context
>> + * switching code and leaving a far more subtle crash to diagnose.
>> + */
>> +#define schedule_tail(vcpu) do {                              \
>> +        (((vcpu)->domain->arch.ctxt_switch->tail)(vcpu));     \
>> +        BUG();                                                \
>> +    } while (0)
>
> schedule_tail() is used only twice.  I'd suggest dropping it entirely
> and calling the ->tail() function pointer normally, rather than hiding
> it like this.

I had considered this too, and now that you ask for it I'll happily
do so.

> Upon reviewing this patch, don't we also need a ->same() call in the
> continue_same() path in the previous patch?

No, I did specifically check this already: The call to continue_same()
sits (far) ahead of the clearing of ->is_running, and as long as that
flag is set, vcpu_pause() (or vcpu_sleep_sync(), to be precise) will
spin as necessary.

Jan
On 15/02/17 08:42, Jan Beulich wrote:
>>>> On 14.02.17 at 16:26, <andrew.cooper3@citrix.com> wrote:
>> On 14/02/17 10:29, Jan Beulich wrote:
>>> @@ -2066,6 +2073,15 @@ static void __context_switch(void)
>>>      per_cpu(curr_vcpu, cpu) = n;
>>>  }
>>>
>>> +/*
>>> + * Schedule tail *should* be a terminal function pointer, but leave a bugframe
>>> + * around just incase it returns, to save going back into the context
>>> + * switching code and leaving a far more subtle crash to diagnose.
>>> + */
>>> +#define schedule_tail(vcpu) do {                              \
>>> +        (((vcpu)->domain->arch.ctxt_switch->tail)(vcpu));     \
>>> +        BUG();                                                \
>>> +    } while (0)
>> schedule_tail() is used only twice.  I'd suggest dropping it entirely
>> and calling the ->tail() function pointer normally, rather than hiding
>> it like this.
> I had considered this too, and now that you ask for it I'll happily
> do so.

Thinking more, it would be a good idea to annotate the respective
functions noreturn, so the compiler will catch anyone who accidentally
puts a return statement in.

>> Upon reviewing this patch, don't we also need a ->same() call in the
>> continue_same() path in the previous patch?
> No, I did specifically check this already: The call to continue_same()
> sits (far) ahead of the clearing of ->is_running, and as long as that
> flag is set, vcpu_pause() (or vcpu_sleep_sync(), to be precise) will
> spin as necessary.

Ok.

~Andrew
>>> On 15.02.17 at 12:34, <andrew.cooper3@citrix.com> wrote:
> On 15/02/17 08:42, Jan Beulich wrote:
>>>>> On 14.02.17 at 16:26, <andrew.cooper3@citrix.com> wrote:
>>> On 14/02/17 10:29, Jan Beulich wrote:
>>>> @@ -2066,6 +2073,15 @@ static void __context_switch(void)
>>>>      per_cpu(curr_vcpu, cpu) = n;
>>>>  }
>>>>
>>>> +/*
>>>> + * Schedule tail *should* be a terminal function pointer, but leave a bugframe
>>>> + * around just incase it returns, to save going back into the context
>>>> + * switching code and leaving a far more subtle crash to diagnose.
>>>> + */
>>>> +#define schedule_tail(vcpu) do {                              \
>>>> +        (((vcpu)->domain->arch.ctxt_switch->tail)(vcpu));     \
>>>> +        BUG();                                                \
>>>> +    } while (0)
>>> schedule_tail() is used only twice.  I'd suggest dropping it entirely
>>> and calling the ->tail() function pointer normally, rather than hiding
>>> it like this.
>> I had considered this too, and now that you ask for it I'll happily
>> do so.
>
> Thinking more, it would be a good idea to annotate the respective
> functions noreturn, so the compiler will catch anyone who accidentally
> puts a return statement in.

Right, but in another patch, I would say.

Jan
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -426,16 +426,8 @@ int vcpu_initialise(struct vcpu *v)
         /* PV guests by default have a 100Hz ticker. */
         v->periodic_period = MILLISECS(10);
     }
-
-    v->arch.schedule_tail = continue_nonidle_domain;
-    v->arch.ctxt_switch_from = paravirt_ctxt_switch_from;
-    v->arch.ctxt_switch_to = paravirt_ctxt_switch_to;
-
-    if ( is_idle_domain(d) )
-    {
-        v->arch.schedule_tail = continue_idle_domain;
-        v->arch.cr3 = __pa(idle_pg_table);
-    }
+    else
+        v->arch.cr3 = __pa(idle_pg_table);
 
     v->arch.pv_vcpu.ctrlreg[4] = real_cr4_to_pv_guest_cr4(mmu_cr4_features);
 
@@ -642,8 +634,23 @@ int arch_domain_create(struct domain *d,
             goto fail;
     }
     else
+    {
+        static const struct arch_csw pv_csw = {
+            .from = paravirt_ctxt_switch_from,
+            .to = paravirt_ctxt_switch_to,
+            .tail = continue_nonidle_domain,
+        };
+        static const struct arch_csw idle_csw = {
+            .from = paravirt_ctxt_switch_from,
+            .to = paravirt_ctxt_switch_to,
+            .tail = continue_idle_domain,
+        };
+
+        d->arch.ctxt_switch = is_idle_domain(d) ? &idle_csw : &pv_csw;
+
         /* 64-bit PV guest by default. */
         d->arch.is_32bit_pv = d->arch.has_32bit_shinfo = 0;
+    }
 
     /* initialize default tsc behavior in case tools don't */
     tsc_set_info(d, TSC_MODE_DEFAULT, 0UL, 0, 0);
@@ -1997,7 +2004,7 @@ static void __context_switch(void)
     {
         memcpy(&p->arch.user_regs, stack_regs, CTXT_SWITCH_STACK_BYTES);
         vcpu_save_fpu(p);
-        p->arch.ctxt_switch_from(p);
+        pd->arch.ctxt_switch->from(p);
     }
 
     /*
@@ -2023,7 +2030,7 @@ static void __context_switch(void)
                 set_msr_xss(n->arch.hvm_vcpu.msr_xss);
         }
         vcpu_restore_fpu_eager(n);
-        n->arch.ctxt_switch_to(n);
+        nd->arch.ctxt_switch->to(n);
     }
 
     psr_ctxt_switch_to(nd);
@@ -2066,6 +2073,15 @@ static void __context_switch(void)
     per_cpu(curr_vcpu, cpu) = n;
 }
 
+/*
+ * Schedule tail *should* be a terminal function pointer, but leave a bugframe
+ * around just incase it returns, to save going back into the context
+ * switching code and leaving a far more subtle crash to diagnose.
+ */
+#define schedule_tail(vcpu) do {                              \
+        (((vcpu)->domain->arch.ctxt_switch->tail)(vcpu));     \
+        BUG();                                                \
+    } while (0)
 
 void context_switch(struct vcpu *prev, struct vcpu *next)
 {
@@ -2100,8 +2116,8 @@ void context_switch(struct vcpu *prev, s
 
     if ( (per_cpu(curr_vcpu, cpu) == next) )
     {
-        if ( next->arch.ctxt_switch_same )
-            next->arch.ctxt_switch_same(next);
+        if ( nextd->arch.ctxt_switch->same )
+            nextd->arch.ctxt_switch->same(next);
         local_irq_enable();
     }
     else if ( is_idle_domain(nextd) && cpu_online(cpu) )
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1144,6 +1144,14 @@ void svm_host_osvw_init()
 
 static int svm_domain_initialise(struct domain *d)
 {
+    static const struct arch_csw csw = {
+        .from = svm_ctxt_switch_from,
+        .to = svm_ctxt_switch_to,
+        .tail = svm_do_resume,
+    };
+
+    d->arch.ctxt_switch = &csw;
+
     return 0;
 }
 
@@ -1155,10 +1163,6 @@ static int svm_vcpu_initialise(struct vc
 {
     int rc;
 
-    v->arch.schedule_tail = svm_do_resume;
-    v->arch.ctxt_switch_from = svm_ctxt_switch_from;
-    v->arch.ctxt_switch_to = svm_ctxt_switch_to;
-
     v->arch.hvm_svm.launch_core = -1;
 
     if ( (rc = svm_create_vmcb(v)) != 0 )
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -268,8 +268,16 @@ void vmx_pi_hooks_deassign(struct domain
 
 static int vmx_domain_initialise(struct domain *d)
 {
+    static const struct arch_csw csw = {
+        .from = vmx_ctxt_switch_from,
+        .to = vmx_ctxt_switch_to,
+        .same = vmx_vmcs_reload,
+        .tail = vmx_do_resume,
+    };
     int rc;
 
+    d->arch.ctxt_switch = &csw;
+
     if ( !has_vlapic(d) )
         return 0;
 
@@ -295,11 +303,6 @@ static int vmx_vcpu_initialise(struct vc
 
     INIT_LIST_HEAD(&v->arch.hvm_vmx.pi_blocking.list);
 
-    v->arch.schedule_tail = vmx_do_resume;
-    v->arch.ctxt_switch_from = vmx_ctxt_switch_from;
-    v->arch.ctxt_switch_to = vmx_ctxt_switch_to;
-    v->arch.ctxt_switch_same = vmx_vmcs_reload;
-
     if ( (rc = vmx_create_vmcs(v)) != 0 )
     {
         dprintk(XENLOG_WARNING,
--- a/xen/include/asm-x86/current.h
+++ b/xen/include/asm-x86/current.h
@@ -103,16 +103,6 @@ unsigned long get_stack_dump_bottom (uns
 })
 
 /*
- * Schedule tail *should* be a terminal function pointer, but leave a bugframe
- * around just incase it returns, to save going back into the context
- * switching code and leaving a far more subtle crash to diagnose.
- */
-#define schedule_tail(vcpu) do {                              \
-        (((vcpu)->arch.schedule_tail)(vcpu));                 \
-        BUG();                                                \
-    } while (0)
-
-/*
  * Which VCPU's state is currently running on each CPU?
  * This is not necesasrily the same as 'current' as a CPU may be
  * executing a lazy state switch.
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -314,6 +314,13 @@ struct arch_domain
     } relmem;
     struct page_list_head relmem_list;
 
+    const struct arch_csw {
+        void (*from)(struct vcpu *);
+        void (*to)(struct vcpu *);
+        void (*same)(struct vcpu *);
+        void (*tail)(struct vcpu *);
+    } *ctxt_switch;
+
    /* nestedhvm: translate l2 guest physical to host physical */
     struct p2m_domain *nested_p2m[MAX_NESTEDP2M];
     mm_lock_t nested_p2m_lock;
@@ -510,12 +517,6 @@ struct arch_vcpu
 
     unsigned long flags; /* TF_ */
 
-    void (*schedule_tail) (struct vcpu *);
-
-    void (*ctxt_switch_from) (struct vcpu *);
-    void (*ctxt_switch_to) (struct vcpu *);
-    void (*ctxt_switch_same) (struct vcpu *);
-
     struct vpmu_struct vpmu;
 
     /* Virtual Machine Extensions */