| Message ID | 20190313171342.12814-1-vkuznets@redhat.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | x86/kvm/hyper-v: avoid spurious pending stimer on vCPU init |
On 13/03/19 18:13, Vitaly Kuznetsov wrote:
> When userspace initializes guest vCPUs it may want to zero all supported
> MSRs, including Hyper-V related ones such as HV_X64_MSR_STIMERn_CONFIG/
> HV_X64_MSR_STIMERn_COUNT. With commit f3b138c5d89a ("kvm/x86: Update SynIC
> timers on guest entry only") we began doing stimer_mark_pending()
> unconditionally on every config change.
> 
> The issue I'm observing manifests itself as follows:
> - Qemu writes 0 to STIMERn_{CONFIG,COUNT} MSRs and marks all stimers as
>   pending in stimer_pending_bitmap, arming KVM_REQ_HV_STIMER;
> - kvm_hv_has_stimer_pending() starts returning true;
> - kvm_vcpu_has_events() starts returning true;
> - kvm_arch_vcpu_runnable() starts returning true;
> - when kvm_arch_vcpu_ioctl_run() gets into the
>   (vcpu->arch.mp_state == KVM_MP_STATE_UNINITIALIZED) case:
>   - kvm_vcpu_block() hits 'kvm_vcpu_check_block(vcpu) < 0' and returns
>     immediately, skipping the normal wait path;
>   - -EAGAIN is returned from kvm_arch_vcpu_ioctl_run() immediately, forcing
>     userspace to retry.
> 
> So instead of the normal wait path we get a busy loop on all secondary vCPUs
> before they get the INIT signal. This seems undesirable, especially given
> that it happens even when Hyper-V extensions are not used.
> 
> Generally, it is pointless to mark a stimer as pending in
> stimer_pending_bitmap and arm KVM_REQ_HV_STIMER when the only thing
> kvm_hv_process_stimers() will do is clear the corresponding bit. We can
> simply not mark disabled timers as pending instead.
> 
> Fixes: f3b138c5d89a ("kvm/x86: Update SynIC timers on guest entry only")
> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> ---
>  arch/x86/kvm/hyperv.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> index 89d20ed1d2e8..371c669696d7 100644
> --- a/arch/x86/kvm/hyperv.c
> +++ b/arch/x86/kvm/hyperv.c
> @@ -526,7 +526,9 @@ static int stimer_set_config(struct kvm_vcpu_hv_stimer *stimer, u64 config,
>  		new_config.enable = 0;
>  	stimer->config.as_uint64 = new_config.as_uint64;
>  
> -	stimer_mark_pending(stimer, false);
> +	if (stimer->config.enable)
> +		stimer_mark_pending(stimer, false);
> +
>  	return 0;
>  }
>  
> @@ -542,7 +544,10 @@ static int stimer_set_count(struct kvm_vcpu_hv_stimer *stimer, u64 count,
>  		stimer->config.enable = 0;
>  	else if (stimer->config.auto_enable)
>  		stimer->config.enable = 1;
> -	stimer_mark_pending(stimer, false);
> +
> +	if (stimer->config.enable)
> +		stimer_mark_pending(stimer, false);
> +
>  	return 0;
>  }

Queued for after the merge window.

Paolo
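As a simplified illustration of the chain described above (this is not the actual kernel code; the type and helper names below are invented for the example), the key point is that the "any stimer pending?" check boils down to "is any bit set in stimer_pending_bitmap?", so even a disabled timer that was needlessly marked pending makes the vCPU look runnable and defeats the normal blocking path:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define HV_SYNIC_STIMER_COUNT 4		/* four synthetic timers per vCPU */

/* Stand-in for the per-vCPU Hyper-V state; only the pending bitmap matters. */
struct hv_vcpu_sketch {
	uint64_t stimer_pending_bitmap;
};

/* Roughly what kvm_hv_has_stimer_pending() amounts to: "is any bit set?" */
static bool has_stimer_pending(const struct hv_vcpu_sketch *hv)
{
	return (hv->stimer_pending_bitmap &
		((1ULL << HV_SYNIC_STIMER_COUNT) - 1)) != 0;
}

int main(void)
{
	struct hv_vcpu_sketch hv = { .stimer_pending_bitmap = 0 };

	/* Before the fix, a write of 0 to STIMER0_CONFIG still set the bit: */
	hv.stimer_pending_bitmap |= 1ULL << 0;

	/* ... so the vCPU is reported as having events / being runnable, even
	 * though kvm_hv_process_stimers() would only clear the bit again. */
	printf("stimer pending, vCPU looks runnable: %d\n",
	       has_stimer_pending(&hv));
	return 0;
}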
On Wed, Mar 13, 2019 at 06:13:42PM +0100, Vitaly Kuznetsov wrote:
> When userspace initializes guest vCPUs it may want to zero all supported
> MSRs, including Hyper-V related ones such as HV_X64_MSR_STIMERn_CONFIG/
> HV_X64_MSR_STIMERn_COUNT. With commit f3b138c5d89a ("kvm/x86: Update SynIC
> timers on guest entry only") we began doing stimer_mark_pending()
> unconditionally on every config change.
> 
> The issue I'm observing manifests itself as follows:
> - Qemu writes 0 to STIMERn_{CONFIG,COUNT} MSRs and marks all stimers as
>   pending in stimer_pending_bitmap, arming KVM_REQ_HV_STIMER;
> - kvm_hv_has_stimer_pending() starts returning true;
> - kvm_vcpu_has_events() starts returning true;
> - kvm_arch_vcpu_runnable() starts returning true;
> - when kvm_arch_vcpu_ioctl_run() gets into the
>   (vcpu->arch.mp_state == KVM_MP_STATE_UNINITIALIZED) case:
>   - kvm_vcpu_block() hits 'kvm_vcpu_check_block(vcpu) < 0' and returns
>     immediately, skipping the normal wait path;
>   - -EAGAIN is returned from kvm_arch_vcpu_ioctl_run() immediately, forcing
>     userspace to retry.
> 
> So instead of the normal wait path we get a busy loop on all secondary vCPUs
> before they get the INIT signal. This seems undesirable, especially given
> that it happens even when Hyper-V extensions are not used.
> 
> Generally, it is pointless to mark a stimer as pending in
> stimer_pending_bitmap and arm KVM_REQ_HV_STIMER when the only thing
> kvm_hv_process_stimers() will do is clear the corresponding bit. We can
> simply not mark disabled timers as pending instead.
> 
> Fixes: f3b138c5d89a ("kvm/x86: Update SynIC timers on guest entry only")
> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> ---
>  arch/x86/kvm/hyperv.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)

Reviewed-by: Roman Kagan <rkagan@virtuozzo.com>
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 89d20ed1d2e8..371c669696d7 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -526,7 +526,9 @@ static int stimer_set_config(struct kvm_vcpu_hv_stimer *stimer, u64 config,
 		new_config.enable = 0;
 	stimer->config.as_uint64 = new_config.as_uint64;
 
-	stimer_mark_pending(stimer, false);
+	if (stimer->config.enable)
+		stimer_mark_pending(stimer, false);
+
 	return 0;
 }
 
@@ -542,7 +544,10 @@ static int stimer_set_count(struct kvm_vcpu_hv_stimer *stimer, u64 count,
 		stimer->config.enable = 0;
 	else if (stimer->config.auto_enable)
 		stimer->config.enable = 1;
-	stimer_mark_pending(stimer, false);
+
+	if (stimer->config.enable)
+		stimer_mark_pending(stimer, false);
+
 	return 0;
 }
When userspace initializes guest vCPUs it may want to zero all supported
MSRs, including Hyper-V related ones such as HV_X64_MSR_STIMERn_CONFIG/
HV_X64_MSR_STIMERn_COUNT. With commit f3b138c5d89a ("kvm/x86: Update SynIC
timers on guest entry only") we began doing stimer_mark_pending()
unconditionally on every config change.

The issue I'm observing manifests itself as follows:
- Qemu writes 0 to STIMERn_{CONFIG,COUNT} MSRs and marks all stimers as
  pending in stimer_pending_bitmap, arming KVM_REQ_HV_STIMER;
- kvm_hv_has_stimer_pending() starts returning true;
- kvm_vcpu_has_events() starts returning true;
- kvm_arch_vcpu_runnable() starts returning true;
- when kvm_arch_vcpu_ioctl_run() gets into the
  (vcpu->arch.mp_state == KVM_MP_STATE_UNINITIALIZED) case:
  - kvm_vcpu_block() hits 'kvm_vcpu_check_block(vcpu) < 0' and returns
    immediately, skipping the normal wait path;
  - -EAGAIN is returned from kvm_arch_vcpu_ioctl_run() immediately, forcing
    userspace to retry.

So instead of the normal wait path we get a busy loop on all secondary vCPUs
before they get the INIT signal. This seems undesirable, especially given
that it happens even when Hyper-V extensions are not used.

Generally, it is pointless to mark a stimer as pending in
stimer_pending_bitmap and arm KVM_REQ_HV_STIMER when the only thing
kvm_hv_process_stimers() will do is clear the corresponding bit. We can
simply not mark disabled timers as pending instead.

Fixes: f3b138c5d89a ("kvm/x86: Update SynIC timers on guest entry only")
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
---
 arch/x86/kvm/hyperv.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)
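For context on the "Qemu writes 0 to STIMERn_{CONFIG,COUNT} MSRs" step, below is a minimal sketch of how a userspace VMM might zero the four stimer CONFIG/COUNT pairs of one vCPU through KVM_SET_MSRS. The vcpu_fd argument, the zero_stimer_msrs() helper and the locally defined MSR indices are illustrative assumptions (the indices follow the Hyper-V TLFS numbering, where each timer's CONFIG/COUNT pair sits two MSR indices apart), and the sketch assumes a vCPU with Hyper-V SynIC/stimer support enabled; it is not taken from QEMU.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hyper-V TLFS synthetic timer MSRs, defined locally in case the exported
 * kernel headers in use do not provide them. */
#define HV_X64_MSR_STIMER0_CONFIG	0x400000B0
#define HV_X64_MSR_STIMER0_COUNT	0x400000B1
#define HV_SYNIC_STIMER_COUNT		4

/* Write 0 to every STIMERn_CONFIG/STIMERn_COUNT MSR of one vCPU. */
static int zero_stimer_msrs(int vcpu_fd)
{
	/* common userspace idiom: kvm_msrs header followed by its entries */
	struct {
		struct kvm_msrs hdr;
		struct kvm_msr_entry entries[2 * HV_SYNIC_STIMER_COUNT];
	} msrs;
	unsigned int i;

	memset(&msrs, 0, sizeof(msrs));
	msrs.hdr.nmsrs = 2 * HV_SYNIC_STIMER_COUNT;

	for (i = 0; i < HV_SYNIC_STIMER_COUNT; i++) {
		msrs.entries[2 * i].index     = HV_X64_MSR_STIMER0_CONFIG + 2 * i;
		msrs.entries[2 * i + 1].index = HV_X64_MSR_STIMER0_COUNT + 2 * i;
		/* .data is already 0 from the memset above */
	}

	/* returns the number of MSRs successfully set, or -1 on error */
	return ioctl(vcpu_fd, KVM_SET_MSRS, &msrs);
}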