From patchwork Fri Aug 20 08:07:43 2010
X-Patchwork-Submitter: Zachary Amsden
X-Patchwork-Id: 120519
From: Zachary Amsden
To: kvm@vger.kernel.org
Cc: Zachary Amsden, Avi Kivity, Marcelo Tosatti, Glauber Costa,
 Thomas Gleixner, John Stultz, linux-kernel@vger.kernel.org
Subject: [KVM timekeeping 29/35] TSC overrun protection
Date: Thu, 19 Aug 2010 22:07:43 -1000
Message-Id: <1282291669-25709-30-git-send-email-zamsden@redhat.com>
In-Reply-To: <1282291669-25709-1-git-send-email-zamsden@redhat.com>
References: <1282291669-25709-1-git-send-email-zamsden@redhat.com>

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 993d13d..9b2d231 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -345,6 +345,7 @@ struct kvm_vcpu_arch {
 	u64 last_tsc_write;
 	bool tsc_rebase;
 	bool tsc_trapping;
+	bool tsc_overrun;
 
 	bool nmi_pending;
 	bool nmi_injected;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 23d1d02..887e30f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1015,13 +1015,19 @@ static int kvm_guest_time_update(struct kvm_vcpu *v)
 		u64 tsc = compute_guest_tsc(v, kernel_ns);
 		if (tsc > tsc_timestamp)
 			kvm_x86_ops->adjust_tsc_offset(v, tsc-tsc_timestamp);
-	}
-	local_irq_restore(flags);
-	if (catchup) {
-		if (this_tsc_khz < v->kvm->arch.virtual_tsc_khz)
+		local_irq_restore(flags);
+
+		/* Now, see if we need to switch into trap mode */
+		if (vcpu->tsc_overrun && !vcpu->tsc_trapping)
+			kvm_x86_ops->set_tsc_trap(v, 1);
+
+		/* If we're falling behind and not trapping, re-trigger */
+		if (!vcpu->tsc_trapping &&
+		    this_tsc_khz < v->kvm->arch.virtual_tsc_khz)
 			vcpu->tsc_rebase = 1;
 		return 0;
 	}
+	local_irq_restore(flags);
 
 	/*
 	 * Time as measured by the TSC may go backwards when resetting the base
@@ -1098,6 +1104,17 @@ static void kvm_update_tsc_trapping(struct kvm *kvm)
 	int trap, i;
 	struct kvm_vcpu *vcpu;
 
+	/*
+	 * Subtle point; we don't consider TSC rate here as part of
+	 * the decision to trap or not.  The reason for it is that
+	 * TSC rate changes happen asynchronously, and are thus racy.
+	 * The only safe place to check for this is above, in
+	 * kvm_guest_time_update, where we've read the HZ value and
+	 * the indication from the asynchronous notifier that TSC
+	 * is in an overrun condition.  Even that is racy, however that
+	 * code is guaranteed to be called again if the CPU frequency
+	 * changes yet another time before entering hardware virt.
+	 */
 	trap = check_tsc_unstable() && atomic_read(&kvm->online_vcpus) > 1;
 	kvm_for_each_vcpu(i, vcpu, kvm)
 		kvm_x86_ops->set_tsc_trap(vcpu, trap && !vcpu->arch.time_page);
@@ -1977,8 +1994,11 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	kvm_put_guest_fpu(vcpu);
 	vcpu->arch.last_host_tsc = native_read_tsc();
 
-	/* For unstable TSC, force compensation and catchup on next CPU */
-	if (check_tsc_unstable()) {
+	/*
+	 * For unstable TSC, force compensation and catchup on next CPU
+	 * Don't need to do this if there is an overrun, as we'll trap.
+	 */
+	if (check_tsc_unstable() && !vcpu->arch.tsc_overrun) {
 		vcpu->arch.tsc_rebase = 1;
 		kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
 	}
@@ -4342,6 +4362,8 @@ static int kvmclock_cpufreq_notifier(struct notifier_block *nb, unsigned long va
 	kvm_for_each_vcpu(i, vcpu, kvm) {
 		if (vcpu->cpu != freq->cpu)
 			continue;
+		if (freq->new > kvm->arch.virtual_tsc_khz)
+			vcpu->arch.tsc_overrun = 1;
 		kvm_request_clock_update(vcpu);
 		if (vcpu->cpu != smp_processor_id())
 			send_ipi = 1;