From patchwork Wed Feb 9 17:29:43 2011
X-Patchwork-Submitter: Joerg Roedel
X-Patchwork-Id: 544511
From: Joerg Roedel
To: Avi Kivity, Marcelo Tosatti
CC: , , Zachary Amsden, Joerg Roedel
Subject: [PATCH 5/6] KVM: X86: Delegate tsc-offset calculation to architecture code
Date: Wed, 9 Feb 2011 18:29:43 +0100
Message-ID: <1297272584-22689-6-git-send-email-joerg.roedel@amd.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1297272584-22689-1-git-send-email-joerg.roedel@amd.com>
References: <1297272584-22689-1-git-send-email-joerg.roedel@amd.com>
X-Mailing-List: kvm@vger.kernel.org
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9686950..8c40425 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -593,6 +593,7 @@ struct kvm_x86_ops {
 	void (*write_tsc_offset)(struct kvm_vcpu *vcpu, u64 offset);
 
 	bool (*use_virtual_tsc_khz)(struct kvm_vcpu *vcpu);
+	u64 (*compute_tsc_offset)(struct kvm_vcpu *vcpu, u64 target_tsc);
 
 	void (*get_exit_info)(struct kvm_vcpu *vcpu, u64 *info1, u64 *info2);
 	const struct trace_print_flags *exit_reasons_str;
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 29833a7..f938585 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -881,7 +881,6 @@ static u64 svm_scale_tsc(struct kvm_vcpu *vcpu, u64 tsc)
 
 static bool svm_vcpu_init_tsc(struct kvm *kvm, struct vcpu_svm *svm)
 {
-	u64 raw_tsc, tsc, new_tsc;
 	u64 ratio;
 	u64 khz;
 
@@ -941,6 +940,15 @@ static bool svm_use_virtual_tsc_khz(struct kvm_vcpu *vcpu)
 	return svm->tsc_scale.enabled;
 }
 
+static u64 svm_compute_tsc_offset(struct kvm_vcpu *vcpu, u64 target_tsc)
+{
+	u64 tsc;
+
+	tsc = svm_scale_tsc(vcpu, native_read_tsc());
+
+	return target_tsc - tsc;
+}
+
 static void init_vmcb(struct vcpu_svm *svm)
 {
 	struct vmcb_control_area *control = &svm->vmcb->control;
@@ -4016,6 +4024,7 @@ static struct kvm_x86_ops svm_x86_ops = {
 	.write_tsc_offset = svm_write_tsc_offset,
 	.adjust_tsc_offset = svm_adjust_tsc_offset,
 	.use_virtual_tsc_khz = svm_use_virtual_tsc_khz,
+	.compute_tsc_offset = svm_compute_tsc_offset,
 
 	.set_tdp_cr3 = set_tdp_cr3,
 };
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index c227a6b..9bbdf1f 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -1169,6 +1169,11 @@ static bool vmx_use_virtual_tsc_khz(struct kvm_vcpu *vcpu)
 	return false;
 }
 
+static u64 vmx_compute_tsc_offset(struct kvm_vcpu *vcpu, u64 target_tsc)
+{
+	return target_tsc - native_read_tsc();
+}
+
 /*
  * Reads an msr value (of 'msr_index') into 'pdata'.
  * Returns 0 on success, non-0 otherwise.
@@ -4449,6 +4454,7 @@ static struct kvm_x86_ops vmx_x86_ops = {
 	.write_tsc_offset = vmx_write_tsc_offset,
 	.adjust_tsc_offset = vmx_adjust_tsc_offset,
 	.use_virtual_tsc_khz = vmx_use_virtual_tsc_khz,
+	.compute_tsc_offset = vmx_compute_tsc_offset,
 
 	.set_tdp_cr3 = vmx_set_cr3,
 };
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 597abc8..6caaf4b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -987,7 +987,7 @@ static u64 vcpu_tsc_khz(struct kvm_vcpu *vcpu)
 		return __this_cpu_read(cpu_tsc_khz);
 }
 
-static inline u64 nsec_to_cycles(u64 nsec)
+static inline u64 nsec_to_cycles(struct kvm_vcpu *vcpu, u64 nsec)
 {
 	u64 ret;
 
@@ -995,7 +995,7 @@ static inline u64 nsec_to_cycles(u64 nsec)
 	if (kvm_tsc_changes_freq())
 		printk_once(KERN_WARNING
 		 "kvm: unreliable cycle conversion on adjustable rate TSC\n");
-	ret = nsec * __this_cpu_read(cpu_tsc_khz);
+	ret = nsec * vcpu_tsc_khz(vcpu);
 	do_div(ret, USEC_PER_SEC);
 	return ret;
 }
@@ -1027,7 +1027,7 @@ void kvm_write_tsc(struct kvm_vcpu *vcpu, u64 data)
 	s64 sdiff;
 
 	spin_lock_irqsave(&kvm->arch.tsc_write_lock, flags);
-	offset = data - native_read_tsc();
+	offset = kvm_x86_ops->compute_tsc_offset(vcpu, data);
 	ns = get_kernel_ns();
 	elapsed = ns - kvm->arch.last_tsc_nsec;
 	sdiff = data - kvm->arch.last_tsc_write;
@@ -1043,13 +1043,13 @@ void kvm_write_tsc(struct kvm_vcpu *vcpu, u64 data)
 	 * In that case, for a reliable TSC, we can match TSC offsets,
 	 * or make a best guest using elapsed value.
 	 */
-	if (sdiff < nsec_to_cycles(5ULL * NSEC_PER_SEC) &&
+	if (sdiff < nsec_to_cycles(vcpu, 5ULL * NSEC_PER_SEC) &&
 	    elapsed < 5ULL * NSEC_PER_SEC) {
 		if (!check_tsc_unstable()) {
 			offset = kvm->arch.last_tsc_offset;
 			pr_debug("kvm: matched tsc offset for %llu\n", data);
 		} else {
-			u64 delta = nsec_to_cycles(elapsed);
+			u64 delta = nsec_to_cycles(vcpu, elapsed);
 			offset += delta;
 			pr_debug("kvm: adjusted tsc offset by %llu\n", delta);
 		}
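Not part of the patch itself, but a standalone sketch of the pattern being introduced: the generic TSC-write path stops open-coding "offset = target - host_tsc" and instead asks an architecture backend to compute the offset, so a backend that scales the host TSC (as SVM does with a TSC ratio) can account for that. The code below is plain user-space C; every name in it (struct tsc_ops, read_host_tsc, the 2x ratio) is invented for the illustration and is not KVM code.

/*
 * Illustration only. The generic write_tsc() delegates the offset
 * calculation through a function pointer, mirroring the new
 * kvm_x86_ops->compute_tsc_offset() hook.
 */
#include <stdint.h>
#include <stdio.h>

/* stand-in for reading the host TSC (rdtsc) */
static uint64_t read_host_tsc(void)
{
	static uint64_t fake_tsc = 1000000;
	return fake_tsc += 1000;
}

struct tsc_ops {
	uint64_t (*compute_tsc_offset)(uint64_t target_tsc);
};

/* "VMX-like" backend: guest TSC = host TSC + offset */
static uint64_t plain_compute_tsc_offset(uint64_t target_tsc)
{
	return target_tsc - read_host_tsc();
}

/* "SVM-like" backend: guest TSC = scale(host TSC) + offset */
static uint64_t scaled_compute_tsc_offset(uint64_t target_tsc)
{
	uint64_t scaled = read_host_tsc() * 2;	/* pretend 2x TSC ratio */
	return target_tsc - scaled;
}

/* generic code: delegate instead of open-coding the subtraction */
static uint64_t write_tsc(const struct tsc_ops *ops, uint64_t target_tsc)
{
	return ops->compute_tsc_offset(target_tsc);
}

int main(void)
{
	const struct tsc_ops plain  = { plain_compute_tsc_offset };
	const struct tsc_ops scaled = { scaled_compute_tsc_offset };

	printf("plain offset:  %llu\n",
	       (unsigned long long)write_tsc(&plain, 5000000));
	printf("scaled offset: %llu\n",
	       (unsigned long long)write_tsc(&scaled, 5000000));
	return 0;
}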