Message ID: 20120712191800.30440.16406.sendpatchset@codeblue (mailing list archive)
State: New, archived
On 12/07/12 21:18, Raghavendra K T wrote:
> +#ifdef CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT
[...]
> +	struct {
> +		bool cpu_relax_intercepted;
> +		bool dy_eligible;
> +	} ple;
> +#endif
[...]
> 	}
> 	vcpu->run = page_address(page);
> +	vcpu->ple.cpu_relax_intercepted = false;
> +	vcpu->ple.dy_eligible = false;

This struct is only defined if CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT is set,
but here it is always accessed. Will break on !x86 && !s390.

>
> 	r = kvm_arch_vcpu_init(vcpu);
> 	if (r < 0)
> @@ -1577,6 +1579,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
> 	int pass;
> 	int i;
>
> +	me->ple.cpu_relax_intercepted = true;

ditto

> 	/*
> 	 * We boost the priority of a VCPU that is runnable but not
> 	 * currently running, because it got preempted by something
> @@ -1602,6 +1605,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
> 			}
> 		}
> 	}
> +	me->ple.cpu_relax_intercepted = false;

again.

Maybe define static inline access functions in kvm_host.h that are no-ops
if CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT is not set.

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On 07/13/2012 01:32 AM, Christian Borntraeger wrote:
> On 12/07/12 21:18, Raghavendra K T wrote:
>> +#ifdef CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT
> [...]
>> +	struct {
>> +		bool cpu_relax_intercepted;
>> +		bool dy_eligible;
>> +	} ple;
>> +#endif
> [...]
>> 	}
>> 	vcpu->run = page_address(page);
>> +	vcpu->ple.cpu_relax_intercepted = false;
>> +	vcpu->ple.dy_eligible = false;
>
> This struct is only defined if CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT is set,
> but here it is always accessed. Will break on !x86 && !s390.

Yes! I forgot about other archs in the init function. How about having:

#ifdef CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT
	vcpu->ple.cpu_relax_intercepted = false;
	vcpu->ple.dy_eligible = false;
#endif

This would solve the whole problem.

>>
>> 	r = kvm_arch_vcpu_init(vcpu);
>> 	if (r < 0)
>> @@ -1577,6 +1579,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
>> 	int pass;
>> 	int i;
>>
>> +	me->ple.cpu_relax_intercepted = true;
>
> ditto

Currently vcpu_on_spin() is used only by x86 and s390, so if some other
arch uses vcpu_on_spin() in future, I believe it will also have to enable
CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT. What do you think? Otherwise we would
have to add a hook everywhere.

>> 	/*
>> 	 * We boost the priority of a VCPU that is runnable but not
>> 	 * currently running, because it got preempted by something
>> @@ -1602,6 +1605,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
>> 			}
>> 		}
>> 	}
>> +	me->ple.cpu_relax_intercepted = false;
>
> again.
>
> Maybe define static inline access functions in kvm_host.h that are no-ops
> if CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT is not set.
On 13/07/12 05:35, Raghavendra K T wrote:
> Yes! I forgot about other archs in the init function. How about having:
>
> #ifdef CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT
> 	vcpu->ple.cpu_relax_intercepted = false;
> 	vcpu->ple.dy_eligible = false;
> #endif
>
> This would solve the whole problem.

No, you need to mask all places...

>>>
>>> 	r = kvm_arch_vcpu_init(vcpu);
>>> 	if (r < 0)
>>> @@ -1577,6 +1579,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
>>> 	int pass;
>>> 	int i;
>>>
>>> +	me->ple.cpu_relax_intercepted = true;
>>
>> ditto
>
> Currently vcpu_on_spin() is used only by x86 and s390, so if some other
> arch uses vcpu_on_spin() in future, I believe it will also have to enable
> CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT. What do you think?

...because this function is compiled whether it is called or not.

>> Maybe define static inline access functions in kvm_host.h that are no-ops
>> if CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT is not set.

As I already said, can you have a look at using access functions?
On 07/13/2012 11:43 AM, Christian Borntraeger wrote:
> On 13/07/12 05:35, Raghavendra K T wrote:
>>> Maybe define static inline access functions in kvm_host.h that are no-ops
>>> if CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT is not set.
>
> As I already said, can you have a look at using access functions?

Yes, will do.
> On 12/07/12 21:18, Raghavendra K T wrote:
>> +#ifdef CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT
> [...]
>> +	struct {
>> +		bool cpu_relax_intercepted;
>> +		bool dy_eligible;
>> +	} ple;
>> +#endif
> [...]
>> 	}
>> 	vcpu->run = page_address(page);
>> +	vcpu->ple.cpu_relax_intercepted = false;
>> +	vcpu->ple.dy_eligible = false;
>
> This struct is only defined if CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT is set,
> but here it is always accessed. Will break on !x86 && !s390.

How about moving this struct definition outside the CONFIG, i.e. making it
available by default? If any arch cares to use vcpu_on_spin(), it would get
the benefit by default. This would avoid all the CONFIG magic that we would
otherwise have to do.
On 07/13/2012 07:24 PM, Srikar Dronamraju wrote:
>> On 12/07/12 21:18, Raghavendra K T wrote:
>>> +#ifdef CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT
>> [...]
>>> +	struct {
>>> +		bool cpu_relax_intercepted;
>>> +		bool dy_eligible;
>>> +	} ple;
>>> +#endif
>> [...]
>>> 	}
>>> 	vcpu->run = page_address(page);
>>> +	vcpu->ple.cpu_relax_intercepted = false;
>>> +	vcpu->ple.dy_eligible = false;
>>
>> This struct is only defined if CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT is set,
>> but here it is always accessed. Will break on !x86 && !s390.
>
> How about moving this struct definition outside the CONFIG, i.e. making it
> available by default? If any arch cares to use vcpu_on_spin(), it would get
> the benefit by default.
>
> This would avoid all the CONFIG magic that we would otherwise have to do.

Okay, after discussing with Christian:

- Even if ppc uses vcpu_on_spin(), we will still be left with ia64 (though
  currently broken) and arm (which is on the way).
- Those who want to opt out of this optimization but still wish to use
  vcpu_on_spin() keep that flexibility.

So with that in mind, I am spinning V4 with CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT.
Let us see how it goes.
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c446435..4ec1cf0 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -183,6 +183,18 @@ struct kvm_vcpu {
 	} async_pf;
 #endif
 
+#ifdef CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT
+	/*
+	 * Cpu relax intercept or pause loop exit optimization
+	 * cpu_relax_intercepted: set when a vcpu does a pause loop exit
+	 *  or cpu relax intercepted.
+	 * dy_eligible: indicates whether vcpu is eligible for directed yield.
+	 */
+	struct {
+		bool cpu_relax_intercepted;
+		bool dy_eligible;
+	} ple;
+#endif
 	struct kvm_vcpu_arch arch;
 };
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 7e14068..4ec0120 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -235,6 +235,8 @@ int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
 		goto fail;
 	}
 	vcpu->run = page_address(page);
+	vcpu->ple.cpu_relax_intercepted = false;
+	vcpu->ple.dy_eligible = false;
 
 	r = kvm_arch_vcpu_init(vcpu);
 	if (r < 0)
@@ -1577,6 +1579,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
 	int pass;
 	int i;
 
+	me->ple.cpu_relax_intercepted = true;
 	/*
 	 * We boost the priority of a VCPU that is runnable but not
 	 * currently running, because it got preempted by something
@@ -1602,6 +1605,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
 			}
 		}
 	}
+	me->ple.cpu_relax_intercepted = false;
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_on_spin);