
[kvm-unit-tests] KVM: x86: add hyperv clock test case

Message ID 20160426103455.GA21656@rkaganb.sw.ru (mailing list archive)
State New, archived

Commit Message

Roman Kagan April 26, 2016, 10:34 a.m. UTC
On Mon, Apr 25, 2016 at 11:47:23AM +0300, Roman Kagan wrote:
> On Fri, Apr 22, 2016 at 08:08:47PM +0200, Paolo Bonzini wrote:
> > On 22/04/2016 15:32, Roman Kagan wrote:
> > > The first value is derived from the kvm_clock's tsc_to_system_mul and
> > > tsc_shift, and matches the host's vcpu->hw_tsc_khz.  The second is
> > > calibrated using emulated HPET.  The difference between them is the +14 ppm.
> > > 
> > > This is on i7-2600, invariant TSC present, TSC scaling not present.
> > > 
> > > I'll dig further but I'd appreciate any comment on whether it was within
> > > tolerance or not.
> > 
> > The solution to the bug is to change the Hyper-V reference time MSR to
> > use the same formula as the Hyper-V TSC-based clock.  Likewise,
> > KVM_GET_CLOCK and KVM_SET_CLOCK should not use ktime_get_ns().
> 
> Umm, I'm not sure it's a good idea...
> 
> E.g. virtualized HPET sits in userspace and thus uses
> clock_gettime(CLOCK_MONOTONIC), so the drift will remain.
> 
> AFAICT the root cause is the following: KVM master clock uses the same
> multiplier/shift as the vsyscall time in host userspace.  However, the
> offsets in vsyscall_gtod_data get updated all the time with corrections
> from NTP and so on.  Therefore even if the TSC rate is somewhat
> miscalibrated, the error is kept small in vsyscall time functions.  OTOH
> the offsets in KVM clock are basically never updated, so the error keeps
> linearly growing over time.
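[To make the quoted failure mode concrete, here is a minimal sketch of
the guest-side kvm_clock read, modeled on pvclock_clocksource_read();
the struct is the pvclock ABI one from arch/x86/include/asm/pvclock-abi.h,
and the 128-bit multiply stands in for the kernel's mul_u64_u32_shr():

static u64 pvclock_read_sketch(const struct pvclock_vcpu_time_info *ti,
                               u64 tsc)
{
        u64 delta = tsc - ti->tsc_timestamp;

        /* scale the TSC delta by the host-provided mult/shift pair */
        if (ti->tsc_shift >= 0)
                delta <<= ti->tsc_shift;
        else
                delta >>= -ti->tsc_shift;

        return ti->system_time +
               (u64)(((unsigned __int128)delta * ti->tsc_to_system_mul) >> 32);
}

If system_time/tsc_timestamp are never refreshed, any miscalibration of
tsc_to_system_mul/tsc_shift accumulates linearly in delta: at +14 ppm
that is 14e-6 * 86400 s, i.e. about 1.2 s of drift per day.  The Hyper-V
TSC-based clock Paolo suggests reusing has the same shape, per the
Hyper-V TLFS: reference_time = ((tsc * tsc_scale) >> 64) + tsc_offset,
in 100 ns units.]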

This seems to be due to a typo:

--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5819,7 +5819,7 @@ static int pvclock_gtod_notify(struct notifier_block *nb, unsigned long unused,
        /* disable master clock if host does not trust, or does not
         * use, TSC clocksource
         */
-       if (gtod->clock.vclock_mode != VCLOCK_TSC &&
+       if (gtod->clock.vclock_mode == VCLOCK_TSC &&
            atomic_read(&kvm_guest_has_master_clock) != 0)
                queue_work(system_long_wq, &pvclock_gtod_work);

as a result, the global pvclock_gtod_data was kept up to date, but the
requests to update per-vm copies were never issued.

With the patch I'm now seeing different test failures which I'm looking
into.

Meanwhile I'm wondering whether this scheme is too costly: on my machine
pvclock_gtod_notify() is called at a kHz rate, and the work it schedules
does

static void pvclock_gtod_update_fn(struct work_struct *work)
{
[...]
        spin_lock(&kvm_lock);
        list_for_each_entry(kvm, &vm_list, vm_list)
                kvm_for_each_vcpu(i, vcpu, kvm)
                        kvm_make_request(KVM_REQ_MASTERCLOCK_UPDATE, vcpu);
        atomic_set(&kvm_guest_has_master_clock, 0);
        spin_unlock(&kvm_lock);
}

KVM_REQ_MASTERCLOCK_UPDATE makes all VCPUs synchronize:

static void kvm_gen_update_masterclock(struct kvm *kvm)
{
[...]
        spin_lock(&ka->pvclock_gtod_sync_lock);
        kvm_make_mclock_inprogress_request(kvm);
        /* no guest entries from this point */
        pvclock_update_vm_gtod_copy(kvm);

        kvm_for_each_vcpu(i, vcpu, kvm)
                kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);

        /* guest entries allowed */
        kvm_for_each_vcpu(i, vcpu, kvm)
                clear_bit(KVM_REQ_MCLOCK_INPROGRESS, &vcpu->requests);

        spin_unlock(&ka->pvclock_gtod_sync_lock);
[...]
}

so on a host with many VMs it may become an issue.
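
[To put hypothetical numbers on this: at a ~1 kHz notifier rate, a host
running 100 VMs with 8 vCPUs each would have the work item set

        1000/s * 100 VMs * 8 vCPUs = 800,000 request bits per second,

and each resulting kvm_gen_update_masterclock() call additionally holds
every vCPU of its VM out of guest mode while the per-VM copy is
refreshed.]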

Roman.

Comments

Roman Kagan May 25, 2016, 6:33 p.m. UTC | #1
On Tue, Apr 26, 2016 at 01:34:56PM +0300, Roman Kagan wrote:
> On Mon, Apr 25, 2016 at 11:47:23AM +0300, Roman Kagan wrote:
> > On Fri, Apr 22, 2016 at 08:08:47PM +0200, Paolo Bonzini wrote:
> > > On 22/04/2016 15:32, Roman Kagan wrote:
> > > > The first value is derived from the kvm_clock's tsc_to_system_mul and
> > > > tsc_shift, and matches the host's vcpu->hw_tsc_khz.  The second is
> > > > calibrated using emulated HPET.  The difference between them is the +14 ppm.
> > > > 
> > > > This is on i7-2600, invariant TSC present, TSC scaling not present.
> > > > 
> > > > I'll dig further but I'd appreciate any comment on whether it was within
> > > > tolerance or not.
> > > 
> > > The solution to the bug is to change the Hyper-V reference time MSR to
> > > use the same formula as the Hyper-V TSC-based clock.  Likewise,
> > > KVM_GET_CLOCK and KVM_SET_CLOCK should not use ktime_get_ns().
> > 
> > Umm, I'm not sure it's a good idea...
> > 
> > E.g. virtualized HPET sits in userspace and thus uses
> > clock_gettime(CLOCK_MONOTONIC), so the drift will remain.
> > 
> > AFAICT the root cause is the following: KVM master clock uses the same
> > multiplier/shift as the vsyscall time in host userspace.  However, the
> > offsets in vsyscall_gtod_data get updated all the time with corrections
> > from NTP and so on.  Therefore even if the TSC rate is somewhat
> > miscalibrated, the error is kept small in vsyscall time functions.  OTOH
> > the offsets in KVM clock are basically never updated, so the error keeps
> > linearly growing over time.
> 
> This seems to be due to a typo:
> 
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5819,7 +5819,7 @@ static int pvclock_gtod_notify(struct notifier_block *nb, unsigned long unused,
>         /* disable master clock if host does not trust, or does not
>          * use, TSC clocksource
>          */
> -       if (gtod->clock.vclock_mode != VCLOCK_TSC &&
> +       if (gtod->clock.vclock_mode == VCLOCK_TSC &&
>             atomic_read(&kvm_guest_has_master_clock) != 0)
>                 queue_work(system_long_wq, &pvclock_gtod_work);
> 
> 
> as a result, the global pvclock_gtod_data was kept up to date, but the
> requests to update per-vm copies were never issued.
> 
> With the patch I'm now seeing different test failures which I'm looking
> into.
> 
> Meanwhile I'm wondering whether this scheme is too costly: on my machine
> pvclock_gtod_notify() is called at a kHz rate, and the work it schedules
> does
> 
> static void pvclock_gtod_update_fn(struct work_struct *work)
> {
> [...]
>         spin_lock(&kvm_lock);
>         list_for_each_entry(kvm, &vm_list, vm_list)
>                 kvm_for_each_vcpu(i, vcpu, kvm)
>                         kvm_make_request(KVM_REQ_MASTERCLOCK_UPDATE, vcpu);
>         atomic_set(&kvm_guest_has_master_clock, 0);
>         spin_unlock(&kvm_lock);
> }
> 
> KVM_REQ_MASTERCLOCK_UPDATE makes all VCPUs synchronize:
> 
> static void kvm_gen_update_masterclock(struct kvm *kvm)
> {
> [...]
>         spin_lock(&ka->pvclock_gtod_sync_lock);
>         kvm_make_mclock_inprogress_request(kvm);
>         /* no guest entries from this point */
>         pvclock_update_vm_gtod_copy(kvm);
> 
>         kvm_for_each_vcpu(i, vcpu, kvm)
>                 kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
> 
>         /* guest entries allowed */
>         kvm_for_each_vcpu(i, vcpu, kvm)
>                 clear_bit(KVM_REQ_MCLOCK_INPROGRESS, &vcpu->requests);
> 
>         spin_unlock(&ka->pvclock_gtod_sync_lock);
> [...]
> }
> 
> so on a host with many VMs it may become an issue.

Ping

Roman.
Marcelo Tosatti May 29, 2016, 10:34 p.m. UTC | #2
On Wed, May 25, 2016 at 09:33:07PM +0300, Roman Kagan wrote:
> On Tue, Apr 26, 2016 at 01:34:56PM +0300, Roman Kagan wrote:
> > On Mon, Apr 25, 2016 at 11:47:23AM +0300, Roman Kagan wrote:
> > > On Fri, Apr 22, 2016 at 08:08:47PM +0200, Paolo Bonzini wrote:
> > > > On 22/04/2016 15:32, Roman Kagan wrote:
> > > > > The first value is derived from the kvm_clock's tsc_to_system_mul and
> > > > > tsc_shift, and matches the host's vcpu->hw_tsc_khz.  The second is
> > > > > calibrated using emulated HPET.  The difference between them is the +14 ppm.
> > > > > 
> > > > > This is on i7-2600, invariant TSC present, TSC scaling not present.
> > > > > 
> > > > > I'll dig further but I'd appreciate any comment on whether it was within
> > > > > tolerance or not.
> > > > 
> > > > The solution to the bug is to change the Hyper-V reference time MSR to
> > > > use the same formula as the Hyper-V TSC-based clock.  Likewise,
> > > > KVM_GET_CLOCK and KVM_SET_CLOCK should not use ktime_get_ns().
> > > 
> > > Umm, I'm not sure it's a good idea...
> > > 
> > > E.g. virtualized HPET sits in userspace and thus uses
> > > clock_gettime(CLOCK_MONOTONIC), so the drift will remain.
> > > 
> > > AFAICT the root cause is the following: KVM master clock uses the same
> > > multiplier/shift as the vsyscall time in host userspace.  However, the
> > > offsets in vsyscall_gtod_data get updated all the time with corrections
> > > from NTP and so on.  Therefore even if the TSC rate is somewhat
> > > miscalibrated, the error is kept small in vsyscall time functions.  OTOH
> > > the offsets in KVM clock are basically never updated, so the error keeps
> > > linearly growing over time.
> > 
> > This seems to be due to a typo:

It's not a typo: the code only queued the update work on the
VCLOCK_TSC -> !VCLOCK_TSC transition.

> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -5819,7 +5819,7 @@ static int pvclock_gtod_notify(struct notifier_block *nb, unsigned long unused,
> >         /* disable master clock if host does not trust, or does not
> >          * use, TSC clocksource
> >          */
> > -       if (gtod->clock.vclock_mode != VCLOCK_TSC &&
> > +       if (gtod->clock.vclock_mode == VCLOCK_TSC &&
> >             atomic_read(&kvm_guest_has_master_clock) != 0)
> >                 queue_work(system_long_wq, &pvclock_gtod_work);
> > 
> > 
> > as a result, the global pvclock_gtod_data was kept up to date, but the
> > requests to update per-vm copies were never issued.
> > 
> > With the patch I'm now seeing different test failures which I'm looking
> > into.

The queue_work() is not enough: it opens a window where the guest clock
(read via shared memory) and the host's clock_gettime() can go out of sync.
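
[An illustrative timeline of that window, with hypothetical instants:

        t0: NTP adjusts the host clocksource; vsyscall clock_gettime()
            uses the new parameters immediately
        t1: pvclock_gtod_work runs and sets KVM_REQ_MASTERCLOCK_UPDATE
        t2: each vCPU processes the request on its next entry, and the
            per-VM pvclock copy is refreshed

Between t0 and t2 the guest clock still runs on the stale parameters.]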

> > 
> > Meanwhile I'm wondering whether this scheme is too costly: on my machine
> > pvclock_gtod_notify() is called at a kHz rate, and the work it schedules
> > does
> > 
> > static void pvclock_gtod_update_fn(struct work_struct *work)
> > {
> > [...]
> >         spin_lock(&kvm_lock);
> >         list_for_each_entry(kvm, &vm_list, vm_list)
> >                 kvm_for_each_vcpu(i, vcpu, kvm)
> >                         kvm_make_request(KVM_REQ_MASTERCLOCK_UPDATE, vcpu);
> >         atomic_set(&kvm_guest_has_master_clock, 0);
> >         spin_unlock(&kvm_lock);
> > }
> > 
> > KVM_REQ_MASTERCLOCK_UPDATE makes all VCPUs synchronize:
> > 
> > static void kvm_gen_update_masterclock(struct kvm *kvm)
> > {
> > [...]
> >         spin_lock(&ka->pvclock_gtod_sync_lock);
> >         kvm_make_mclock_inprogress_request(kvm);
> >         /* no guest entries from this point */
> >         pvclock_update_vm_gtod_copy(kvm);
> > 
> >         kvm_for_each_vcpu(i, vcpu, kvm)
> >                 kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
> > 
> >         /* guest entries allowed */
> >         kvm_for_each_vcpu(i, vcpu, kvm)
> >                 clear_bit(KVM_REQ_MCLOCK_INPROGRESS, &vcpu->requests);
> > 
> >         spin_unlock(&ka->pvclock_gtod_sync_lock);
> > [...]
> > }
> > 
> > so on a host with many VMs it may become an issue.
> 
> Ping
> 
> Roman.

1) Can call the notifier only when the frequency changes (see the sketch
below).
2) Can calculate how much drift there is between the clocks and not
allow guest entry.
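
[A minimal sketch of what (1) could look like, illustrative only: the
cached mult/shift comparison is an assumption here, not the actual
patch:

static int pvclock_gtod_notify(struct notifier_block *nb, unsigned long unused,
                               void *priv)
{
        struct pvclock_gtod_data *gtod = &pvclock_gtod_data;
        struct timekeeper *tk = priv;
        static u32 last_mult, last_shift;
        bool freq_changed;

        update_pvclock_gtod(tk);

        /* only fan out to the VMs when the clocksource frequency changed,
         * not on every timekeeping update */
        freq_changed = gtod->clock.mult != last_mult ||
                       gtod->clock.shift != last_shift;
        last_mult = gtod->clock.mult;
        last_shift = gtod->clock.shift;

        if (freq_changed &&
            gtod->clock.vclock_mode == VCLOCK_TSC &&
            atomic_read(&kvm_guest_has_master_clock) != 0)
                queue_work(system_long_wq, &pvclock_gtod_work);

        return 0;
}]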

Will post a patch soon, one or two weeks max (again, independent of your patchset).


Patch

--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5819,7 +5819,7 @@ static int pvclock_gtod_notify(struct notifier_block *nb, unsigned long unused,
        /* disable master clock if host does not trust, or does not
         * use, TSC clocksource
         */
-       if (gtod->clock.vclock_mode != VCLOCK_TSC &&
+       if (gtod->clock.vclock_mode == VCLOCK_TSC &&
            atomic_read(&kvm_guest_has_master_clock) != 0)
                queue_work(system_long_wq, &pvclock_gtod_work);