From patchwork Wed Sep 7 22:04:23 2016
X-Patchwork-Submitter: Paul Lai
X-Patchwork-Id: 9320131
From: Paul Lai
To: xen-devel@lists.xensource.com
Cc: ravi.sahita@intel.com, george.dunlap@citrix.com, jbeulich@suse.com
Date: Wed, 7 Sep 2016 15:04:23 -0700
Message-Id: <1473285866-6868-2-git-send-email-paul.c.lai@intel.com>
In-Reply-To: <1473285866-6868-1-git-send-email-paul.c.lai@intel.com>
References: <1473285866-6868-1-git-send-email-paul.c.lai@intel.com>
X-Mailer: git-send-email 2.7.4
Subject: [Xen-devel] [PATCH Altp2m cleanup v4 1/4] x86/HVM: adjust feature checking in MSR intercept handling

From: Jan Beulich

Consistently consult hvm_cpuid(). With that, BNDCFGS gets better
handled outside of VMX-specific code, just like XSS. Don't needlessly
check for MTRR support when the MSR being accessed clearly is not an
MTRR one.

Signed-off-by: Jan Beulich
Reviewed-by: Andrew Cooper
---
 xen/arch/x86/hvm/hvm.c        | 88 ++++++++++++++++++++++++++++++++-----------
 xen/arch/x86/hvm/vmx/vmx.c    | 40 ++++++++++++++------
 xen/include/asm-x86/hvm/hvm.h | 10 +++++
 3 files changed, 103 insertions(+), 35 deletions(-)
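For reference, the three feature checks the patch routes through
hvm_cpuid() correspond to architecturally defined CPUID bits: MTRR is
CPUID.1:EDX[12], XSAVES is CPUID.(EAX=0Dh,ECX=1):EAX[3], and MPX is
CPUID.(EAX=07h,ECX=0):EBX[14]. Below is a minimal user-space sketch of
the same lookups, assuming a GCC/clang toolchain that provides
<cpuid.h> with __get_cpuid_count(); the helper name cpu_has_bit() is
illustrative, not Xen code:

#include <stdbool.h>
#include <stdio.h>
#include <cpuid.h>   /* __get_cpuid_count(), GCC >= 7 / clang */

/*
 * Return the given bit of the given CPUID output register
 * (0=EAX, 1=EBX, 2=ECX, 3=EDX) for leaf/sub-leaf, or false if the
 * leaf is not supported by this CPU.
 */
static bool cpu_has_bit(unsigned int leaf, unsigned int subleaf,
                        unsigned int reg, unsigned int bit)
{
    unsigned int r[4];

    if ( !__get_cpuid_count(leaf, subleaf, &r[0], &r[1], &r[2], &r[3]) )
        return false;
    return (r[reg] >> bit) & 1;
}

int main(void)
{
    printf("MTRR:   %d\n", cpu_has_bit(1, 0, 3, 12));  /* EDX bit 12 */
    printf("XSAVES: %d\n", cpu_has_bit(0xd, 1, 0, 3)); /* EAX bit 3 */
    printf("MPX:    %d\n", cpu_has_bit(7, 0, 1, 14));  /* EBX bit 14 */
    return 0;
}

Gating each lookup on the MSR actually being accessed, as the hvm.c
hunks below do, avoids issuing CPUID work on every intercept.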
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 98f0740..787f055 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -309,6 +309,14 @@ int hvm_set_guest_pat(struct vcpu *v, u64 guest_pat)
     return 1;
 }
 
+bool hvm_set_guest_bndcfgs(struct vcpu *v, u64 val)
+{
+    return hvm_funcs.set_guest_bndcfgs &&
+           is_canonical_address(val) &&
+           !(val & IA32_BNDCFGS_RESERVED) &&
+           hvm_funcs.set_guest_bndcfgs(v, val);
+}
+
 /*
  * Get the ratio to scale host TSC frequency to gtsc_khz. zero will be
  * returned if TSC scaling is unavailable or ratio cannot be handled
@@ -3667,28 +3675,30 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
 {
     struct vcpu *v = current;
     uint64_t *var_range_base, *fixed_range_base;
-    bool_t mtrr;
-    unsigned int edx, index;
+    bool mtrr = false;
     int ret = X86EMUL_OKAY;
 
     var_range_base = (uint64_t *)v->arch.hvm_vcpu.mtrr.var_ranges;
     fixed_range_base = (uint64_t *)v->arch.hvm_vcpu.mtrr.fixed_ranges;
 
-    hvm_cpuid(1, NULL, NULL, NULL, &edx);
-    mtrr = !!(edx & cpufeat_mask(X86_FEATURE_MTRR));
+    if ( msr == MSR_MTRRcap ||
+         (msr >= MSR_IA32_MTRR_PHYSBASE(0) && msr <= MSR_MTRRdefType) )
+    {
+        unsigned int edx;
+
+        hvm_cpuid(1, NULL, NULL, NULL, &edx);
+        if ( edx & cpufeat_mask(X86_FEATURE_MTRR) )
+            mtrr = true;
+    }
 
     switch ( msr )
     {
+        unsigned int eax, ebx, ecx, index;
+
     case MSR_EFER:
         *msr_content = v->arch.hvm_vcpu.guest_efer;
         break;
 
-    case MSR_IA32_XSS:
-        if ( !cpu_has_xsaves )
-            goto gp_fault;
-        *msr_content = v->arch.hvm_vcpu.msr_xss;
-        break;
-
     case MSR_IA32_TSC:
         *msr_content = _hvm_rdtsc_intercept();
         break;
@@ -3754,6 +3764,22 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
         *msr_content = var_range_base[index];
         break;
 
+    case MSR_IA32_XSS:
+        ecx = 1;
+        hvm_cpuid(XSTATE_CPUID, &eax, NULL, &ecx, NULL);
+        if ( !(eax & cpufeat_mask(X86_FEATURE_XSAVES)) )
+            goto gp_fault;
+        *msr_content = v->arch.hvm_vcpu.msr_xss;
+        break;
+
+    case MSR_IA32_BNDCFGS:
+        ecx = 0;
+        hvm_cpuid(7, NULL, &ebx, &ecx, NULL);
+        if ( !(ebx & cpufeat_mask(X86_FEATURE_MPX)) ||
+             !hvm_get_guest_bndcfgs(v, msr_content) )
+            goto gp_fault;
+        break;
+
     case MSR_K8_ENABLE_C1E:
     case MSR_AMD64_NB_CFG:
         /*
@@ -3790,15 +3816,20 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
                            bool_t may_defer)
 {
     struct vcpu *v = current;
-    bool_t mtrr;
-    unsigned int edx, index;
+    bool mtrr = false;
     int ret = X86EMUL_OKAY;
 
     HVMTRACE_3D(MSR_WRITE, msr,
                 (uint32_t)msr_content, (uint32_t)(msr_content >> 32));
 
-    hvm_cpuid(1, NULL, NULL, NULL, &edx);
-    mtrr = !!(edx & cpufeat_mask(X86_FEATURE_MTRR));
+    if ( msr >= MSR_IA32_MTRR_PHYSBASE(0) && msr <= MSR_MTRRdefType )
+    {
+        unsigned int edx;
+
+        hvm_cpuid(1, NULL, NULL, NULL, &edx);
+        if ( edx & cpufeat_mask(X86_FEATURE_MTRR) )
+            mtrr = true;
+    }
 
     if ( may_defer && unlikely(monitored_msr(v->domain, msr)) )
     {
@@ -3815,18 +3846,13 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
 
     switch ( msr )
     {
+        unsigned int eax, ebx, ecx, index;
+
     case MSR_EFER:
         if ( hvm_set_efer(msr_content) )
             return X86EMUL_EXCEPTION;
         break;
 
-    case MSR_IA32_XSS:
-        /* No XSS features currently supported for guests. */
-        if ( !cpu_has_xsaves || msr_content != 0 )
-            goto gp_fault;
-        v->arch.hvm_vcpu.msr_xss = msr_content;
-        break;
-
     case MSR_IA32_TSC:
         hvm_set_guest_tsc(v, msr_content);
         break;
@@ -3863,9 +3889,8 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
         break;
 
     case MSR_MTRRcap:
-        if ( !mtrr )
-            goto gp_fault;
         goto gp_fault;
+
     case MSR_MTRRdefType:
         if ( !mtrr )
             goto gp_fault;
@@ -3905,6 +3930,23 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
             goto gp_fault;
         break;
 
+    case MSR_IA32_XSS:
+        ecx = 1;
+        hvm_cpuid(XSTATE_CPUID, &eax, NULL, &ecx, NULL);
+        /* No XSS features currently supported for guests. */
+        if ( !(eax & cpufeat_mask(X86_FEATURE_XSAVES)) || msr_content != 0 )
+            goto gp_fault;
+        v->arch.hvm_vcpu.msr_xss = msr_content;
+        break;
+
+    case MSR_IA32_BNDCFGS:
+        ecx = 0;
+        hvm_cpuid(7, NULL, &ebx, &ecx, NULL);
+        if ( !(ebx & cpufeat_mask(X86_FEATURE_MPX)) ||
+             !hvm_set_guest_bndcfgs(v, msr_content) )
+            goto gp_fault;
+        break;
+
     case MSR_AMD64_NB_CFG:
         /* ignore the write */
         break;
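The hvm.c changes above depend on the hook pattern introduced in the
vmx.c and hvm.h hunks that follow: the function-table entry is left
NULL unless the hardware qualifies, and the wrapper treats a NULL hook
as "feature absent", letting the caller raise #GP. A stand-alone
sketch of that pattern, with hypothetical names (fn_table,
vmx_like_set) standing in for hvm_function_table and
vmx_set_guest_bndcfgs:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;

struct fn_table {
    /* Left NULL when the platform provides no MPX support. */
    bool (*set_guest_bndcfgs)(u64 val);
};

static struct fn_table fns;   /* hook NULL unless hardware qualifies */

static bool vmx_like_set(u64 val)
{
    printf("would __vmwrite(GUEST_BNDCFGS, %#llx)\n",
           (unsigned long long)val);
    return true;
}

static bool set_guest_bndcfgs(u64 val)
{
    /* Absent hook => feature unavailable => caller raises #GP. */
    return fns.set_guest_bndcfgs && fns.set_guest_bndcfgs(val);
}

int main(void)
{
    bool cpu_qualifies = true; /* stand-in for cpu_has_mpx && cpu_has_vmx_mpx */

    if ( cpu_qualifies )
        fns.set_guest_bndcfgs = vmx_like_set;

    return set_guest_bndcfgs(0x1000) ? 0 : 1;
}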
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 306f482..2759e6f 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1224,6 +1224,28 @@ static int vmx_get_guest_pat(struct vcpu *v, u64 *gpat)
     return 1;
 }
 
+static bool vmx_set_guest_bndcfgs(struct vcpu *v, u64 val)
+{
+    ASSERT(cpu_has_mpx && cpu_has_vmx_mpx);
+
+    vmx_vmcs_enter(v);
+    __vmwrite(GUEST_BNDCFGS, val);
+    vmx_vmcs_exit(v);
+
+    return true;
+}
+
+static bool vmx_get_guest_bndcfgs(struct vcpu *v, u64 *val)
+{
+    ASSERT(cpu_has_mpx && cpu_has_vmx_mpx);
+
+    vmx_vmcs_enter(v);
+    __vmread(GUEST_BNDCFGS, val);
+    vmx_vmcs_exit(v);
+
+    return true;
+}
+
 static void vmx_handle_cd(struct vcpu *v, unsigned long value)
 {
     if ( !paging_mode_hap(v->domain) )
@@ -2323,6 +2345,12 @@ const struct hvm_function_table * __init start_vmx(void)
     if ( cpu_has_vmx_tsc_scaling )
         vmx_function_table.tsc_scaling.ratio_frac_bits = 48;
 
+    if ( cpu_has_mpx && cpu_has_vmx_mpx )
+    {
+        vmx_function_table.set_guest_bndcfgs = vmx_set_guest_bndcfgs;
+        vmx_function_table.get_guest_bndcfgs = vmx_get_guest_bndcfgs;
+    }
+
     setup_vmcs_dump();
 
     return &vmx_function_table;
@@ -2650,11 +2678,6 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     case MSR_IA32_DEBUGCTLMSR:
         __vmread(GUEST_IA32_DEBUGCTL, msr_content);
         break;
-    case MSR_IA32_BNDCFGS:
-        if ( !cpu_has_mpx || !cpu_has_vmx_mpx )
-            goto gp_fault;
-        __vmread(GUEST_BNDCFGS, msr_content);
-        break;
     case MSR_IA32_FEATURE_CONTROL:
     case MSR_IA32_VMX_BASIC...MSR_IA32_VMX_VMFUNC:
         if ( !nvmx_msr_read_intercept(msr, msr_content) )
@@ -2881,13 +2904,6 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
         break;
     }
 
-    case MSR_IA32_BNDCFGS:
-        if ( !cpu_has_mpx || !cpu_has_vmx_mpx ||
-             !is_canonical_address(msr_content) ||
-             (msr_content & IA32_BNDCFGS_RESERVED) )
-            goto gp_fault;
-        __vmwrite(GUEST_BNDCFGS, msr_content);
-        break;
     case MSR_IA32_FEATURE_CONTROL:
     case MSR_IA32_VMX_BASIC...MSR_IA32_VMX_TRUE_ENTRY_CTLS:
         if ( !nvmx_msr_write_intercept(msr, msr_content) )
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 5d463e0..81b60d5 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -147,6 +147,9 @@ struct hvm_function_table {
     int (*get_guest_pat)(struct vcpu *v, u64 *);
     int (*set_guest_pat)(struct vcpu *v, u64);
 
+    bool (*get_guest_bndcfgs)(struct vcpu *v, u64 *);
+    bool (*set_guest_bndcfgs)(struct vcpu *v, u64);
+
     void (*set_tsc_offset)(struct vcpu *v, u64 offset, u64 at_tsc);
 
     void (*inject_trap)(const struct hvm_trap *trap);
@@ -383,6 +386,13 @@ static inline unsigned long hvm_get_shadow_gs_base(struct vcpu *v)
     return hvm_funcs.get_shadow_gs_base(v);
 }
 
+static inline bool hvm_get_guest_bndcfgs(struct vcpu *v, u64 *val)
+{
+    return hvm_funcs.get_guest_bndcfgs &&
+           hvm_funcs.get_guest_bndcfgs(v, val);
+}
+
+bool hvm_set_guest_bndcfgs(struct vcpu *v, u64 val);
+
 #define has_hvm_params(d) \
     ((d)->arch.hvm_domain.params != NULL)
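The value checks hvm_set_guest_bndcfgs() applies before reaching the
VMX hook are architectural: the IA32_BNDCFGS bound-directory base must
be a canonical address, and bits 2-11 (mask 0xffc) are reserved, per
the Intel SDM. A stand-alone sketch of the same validation follows;
is_canonical_address() here re-implements the 48-bit canonical check
with semantics equivalent to Xen's macro (assuming two's-complement
arithmetic right shift), so this is an illustration, not the tree's
code:

#include <stdbool.h>
#include <stdint.h>

#define IA32_BNDCFGS_RESERVED 0xffcULL   /* bits 2-11, per the SDM */

/* Canonical iff bits 63:48 sign-extend bit 47. */
static bool is_canonical_address(uint64_t addr)
{
    return (int64_t)(addr << 16) >> 16 == (int64_t)addr;
}

static bool bndcfgs_value_ok(uint64_t val)
{
    return is_canonical_address(val) && !(val & IA32_BNDCFGS_RESERVED);
}

int main(void)
{
    /* EN bit set, 4k-aligned bound-directory base, reserved bits clear. */
    return bndcfgs_value_ok(0x7f0000000001ULL) ? 0 : 1;
}

Centralizing these checks in hvm_set_guest_bndcfgs() is what lets the
VMX hooks shrink to a bare ASSERT() plus the VMCS access.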