From patchwork Sun Feb 28 12:54:51 2016
X-Patchwork-Submitter: Haozhong Zhang
X-Patchwork-Id: 8446241
From: Haozhong Zhang <haozhong.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Sun, 28 Feb 2016 20:54:51 +0800
Message-Id: <1456664094-5161-3-git-send-email-haozhong.zhang@intel.com>
In-Reply-To: <1456664094-5161-1-git-send-email-haozhong.zhang@intel.com>
References: <1456664094-5161-1-git-send-email-haozhong.zhang@intel.com>
Cc: Haozhong Zhang, Kevin Tian, Keir Fraser, Jan Beulich, Andrew Cooper,
    Aravind Gopalakrishnan, Suravee Suthikulpanit, Boris Ostrovsky
Subject: [Xen-devel] [PATCH v6 2/5] x86/hvm: Replace architecture TSC scaling by a common function

This patch implements a common function, hvm_scale_tsc(), that scales the TSC
using the TSC scaling information collected by architecture-specific code.

Signed-off-by: Haozhong Zhang
Acked-by: Boris Ostrovsky
---
CC: Keir Fraser
CC: Jan Beulich
CC: Andrew Cooper
CC: Boris Ostrovsky
CC: Suravee Suthikulpanit
CC: Aravind Gopalakrishnan
CC: Kevin Tian
---
Changes in v6:
 * Use named arguments for the inline assembly in hvm_scale_tsc().
 * Drop the R-b tags from Jan Beulich and Kevin Tian because of the above change.
---
 xen/arch/x86/hvm/hvm.c        | 21 +++++++++++++++++++--
 xen/arch/x86/hvm/svm/svm.c    |  8 --------
 xen/arch/x86/time.c           |  3 +--
 xen/include/asm-x86/hvm/hvm.h |  3 +--
 4 files changed, 21 insertions(+), 14 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 6c32e99..25be45c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -333,6 +333,23 @@ u64 hvm_get_tsc_scaling_ratio(u32 gtsc_khz)
     return ratio > max_ratio ? 0 : ratio;
 }
 
+u64 hvm_scale_tsc(const struct domain *d, u64 tsc)
+{
+    u64 ratio = d->arch.hvm_domain.tsc_scaling_ratio;
+    u64 dummy;
+
+    if ( ratio == hvm_default_tsc_scaling_ratio )
+        return tsc;
+
+    /* tsc = (tsc * ratio) >> hvm_funcs.tsc_scaling.ratio_frac_bits */
+    asm ( "mulq %[ratio]; shrdq %[frac],%%rdx,%[tsc]"
+          : [tsc] "+a" (tsc), "=d" (dummy)
+          : [frac] "c" (hvm_funcs.tsc_scaling.ratio_frac_bits),
+            [ratio] "rm" (ratio) );
+
+    return tsc;
+}
+
 void hvm_set_guest_tsc_fixed(struct vcpu *v, u64 guest_tsc, u64 at_tsc)
 {
     uint64_t tsc;
@@ -347,7 +364,7 @@ void hvm_set_guest_tsc_fixed(struct vcpu *v, u64 guest_tsc, u64 at_tsc)
     {
         tsc = at_tsc ?: rdtsc();
         if ( hvm_tsc_scaling_supported )
-            tsc = hvm_funcs.tsc_scaling.scale_tsc(v, tsc);
+            tsc = hvm_scale_tsc(v->domain, tsc);
     }
 
     delta_tsc = guest_tsc - tsc;
@@ -379,7 +396,7 @@ u64 hvm_get_guest_tsc_fixed(struct vcpu *v, uint64_t at_tsc)
     {
         tsc = at_tsc ?: rdtsc();
         if ( hvm_tsc_scaling_supported )
-            tsc = hvm_funcs.tsc_scaling.scale_tsc(v, tsc);
+            tsc = hvm_scale_tsc(v->domain, tsc);
     }
 
     return tsc + v->arch.hvm_vcpu.cache_tsc_offset;
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 7172f25..979d226 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -819,13 +819,6 @@ static uint64_t scale_tsc(uint64_t host_tsc, uint64_t ratio)
     return scaled_host_tsc;
 }
 
-static uint64_t svm_scale_tsc(const struct vcpu *v, uint64_t tsc)
-{
-    ASSERT(cpu_has_tsc_ratio && !v->domain->arch.vtsc);
-
-    return scale_tsc(tsc, hvm_tsc_scaling_ratio(v->domain));
-}
-
 static uint64_t svm_get_tsc_offset(uint64_t host_tsc, uint64_t guest_tsc,
                                    uint64_t ratio)
 {
@@ -2291,7 +2284,6 @@ static struct hvm_function_table __initdata svm_function_table = {
 
     .tsc_scaling = {
         .max_ratio = ~TSC_RATIO_RSVD_BITS,
-        .scale_tsc = svm_scale_tsc,
     },
 };
 
diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
index fda9692..687e39b 100644
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -816,8 +816,7 @@ static void __update_vcpu_system_time(struct vcpu *v, int force)
     {
         if ( has_hvm_container_domain(d) && hvm_tsc_scaling_supported )
         {
-            tsc_stamp =
-                hvm_funcs.tsc_scaling.scale_tsc(v, t->local_tsc_stamp);
+            tsc_stamp = hvm_scale_tsc(d, t->local_tsc_stamp);
             _u.tsc_to_system_mul = d->arch.vtsc_to_ns.mul_frac;
             _u.tsc_shift = d->arch.vtsc_to_ns.shift;
         }
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index ddb1e33..c5c9328 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -231,8 +231,6 @@ struct hvm_function_table {
         uint8_t ratio_frac_bits;
         /* maximum-allowed TSC scaling ratio */
         uint64_t max_ratio;
-
-        uint64_t (*scale_tsc)(const struct vcpu *v, uint64_t tsc);
     } tsc_scaling;
 };
 
@@ -278,6 +276,7 @@ u64 hvm_get_guest_tsc_fixed(struct vcpu *v, u64 at_tsc);
 #define hvm_tsc_scaling_ratio(d) \
     ((d)->arch.hvm_domain.tsc_scaling_ratio)
 
+u64 hvm_scale_tsc(const struct domain *d, u64 tsc);
 u64 hvm_get_tsc_scaling_ratio(u32 gtsc_khz);
 
 int hvm_set_mode(struct vcpu *v, int mode);