From patchwork Mon Jan 9 11:03:30 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 9504359
From: Andrew Cooper
To: Xen-devel
Date: Mon, 9 Jan 2017 11:03:30 +0000
Message-ID: <1483959822-30484-14-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1483959822-30484-1-git-send-email-andrew.cooper3@citrix.com>
References: <1483959822-30484-1-git-send-email-andrew.cooper3@citrix.com>
Cc: Andrew Cooper
Subject: [Xen-devel] [PATCH v2 13/25] x86/hvm: Improve CPUID and MSR handling using named features
List-Id: Xen developer discussion
This avoids hvm_cpuid() recursing into itself, and the MSR paths using
hvm_cpuid() to obtain information which is directly available.

Signed-off-by: Andrew Cooper
Reviewed-by: Jan Beulich
---
 xen/arch/x86/hvm/hvm.c | 95 +++++++++++++++----------------------------------
 1 file changed, 29 insertions(+), 66 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 4ecdb35..a7fcb84 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3293,6 +3293,7 @@ void hvm_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
 {
     struct vcpu *v = current;
     struct domain *d = v->domain;
+    const struct cpuid_policy *p = d->arch.cpuid;
     unsigned int count, dummy = 0;
 
     if ( !eax )
@@ -3330,8 +3331,6 @@ void hvm_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
 
     switch ( input )
     {
-        unsigned int _ebx, _ecx, _edx;
-
     case 0x1:
         /* Fix up VLAPIC details. */
         *ebx &= 0x00FFFFFFu;
@@ -3414,8 +3413,7 @@ void hvm_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
         break;
 
     case XSTATE_CPUID:
-        hvm_cpuid(1, NULL, NULL, &_ecx, NULL);
-        if ( !(_ecx & cpufeat_mask(X86_FEATURE_XSAVE)) || count >= 63 )
+        if ( !p->basic.xsave || count >= 63 )
         {
             *eax = *ebx = *ecx = *edx = 0;
             break;
@@ -3427,7 +3425,7 @@ void hvm_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
             uint64_t xfeature_mask = XSTATE_FP_SSE;
             uint32_t xstate_size = XSTATE_AREA_MIN_SIZE;
 
-            if ( _ecx & cpufeat_mask(X86_FEATURE_AVX) )
+            if ( p->basic.avx )
             {
                 xfeature_mask |= XSTATE_YMM;
                 xstate_size = max(xstate_size,
@@ -3435,10 +3433,7 @@ void hvm_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
                                   xstate_sizes[_XSTATE_YMM]);
             }
 
-            _ecx = 0;
-            hvm_cpuid(7, NULL, &_ebx, &_ecx, NULL);
-
-            if ( _ebx & cpufeat_mask(X86_FEATURE_MPX) )
+            if ( p->feat.mpx )
             {
                 xfeature_mask |= XSTATE_BNDREGS | XSTATE_BNDCSR;
                 xstate_size = max(xstate_size,
@@ -3446,7 +3441,7 @@ void hvm_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
                                   xstate_sizes[_XSTATE_BNDCSR]);
             }
 
-            if ( _ebx & cpufeat_mask(X86_FEATURE_AVX512F) )
+            if ( p->feat.avx512f )
             {
                 xfeature_mask |= XSTATE_OPMASK | XSTATE_ZMM | XSTATE_HI_ZMM;
                 xstate_size = max(xstate_size,
@@ -3460,7 +3455,7 @@ void hvm_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
                                   xstate_sizes[_XSTATE_HI_ZMM]);
             }
 
-            if ( _ecx & cpufeat_mask(X86_FEATURE_PKU) )
+            if ( p->feat.pku )
             {
                 xfeature_mask |= XSTATE_PKRU;
                 xstate_size = max(xstate_size,
@@ -3468,9 +3463,7 @@ void hvm_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
                                   xstate_sizes[_XSTATE_PKRU]);
             }
 
-            hvm_cpuid(0x80000001, NULL, NULL, &_ecx, NULL);
-
-            if ( _ecx & cpufeat_mask(X86_FEATURE_LWP) )
+            if ( p->extd.lwp )
             {
                 xfeature_mask |= XSTATE_LWP;
                 xstate_size = max(xstate_size,
@@ -3494,7 +3487,7 @@ void hvm_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
         case 1:
             *eax &= hvm_featureset[FEATURESET_Da1];
 
-            if ( *eax & cpufeat_mask(X86_FEATURE_XSAVES) )
+            if ( p->xstate.xsaves )
             {
                 /*
                  * Always read CPUID[0xD,1].EBX from hardware, rather than
@@ -3575,14 +3568,11 @@ void hvm_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
         if ( *eax > count )
             *eax = count;
 
-        hvm_cpuid(1, NULL, NULL, NULL, &_edx);
-        count = _edx & (cpufeat_mask(X86_FEATURE_PAE) |
-                        cpufeat_mask(X86_FEATURE_PSE36)) ? 36 : 32;
+        count = (p->basic.pae || p->basic.pse36) ? 36 : 32;
         if ( *eax < count )
             *eax = count;
 
-        hvm_cpuid(0x80000001, NULL, NULL, NULL, &_edx);
-        *eax |= (_edx & cpufeat_mask(X86_FEATURE_LM) ? vaddr_bits : 32) << 8;
+        *eax |= (p->extd.lm ? vaddr_bits : 32) << 8;
 
         *ebx &= hvm_featureset[FEATURESET_e8b];
         break;
@@ -3649,26 +3639,16 @@ void hvm_rdtsc_intercept(struct cpu_user_regs *regs)
 int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
 {
     struct vcpu *v = current;
+    struct domain *d = v->domain;
     uint64_t *var_range_base, *fixed_range_base;
-    bool mtrr = false;
     int ret = X86EMUL_OKAY;
 
     var_range_base = (uint64_t *)v->arch.hvm_vcpu.mtrr.var_ranges;
     fixed_range_base = (uint64_t *)v->arch.hvm_vcpu.mtrr.fixed_ranges;
 
-    if ( msr == MSR_MTRRcap ||
-         (msr >= MSR_IA32_MTRR_PHYSBASE(0) && msr <= MSR_MTRRdefType) )
-    {
-        unsigned int edx;
-
-        hvm_cpuid(1, NULL, NULL, NULL, &edx);
-        if ( edx & cpufeat_mask(X86_FEATURE_MTRR) )
-            mtrr = true;
-    }
-
     switch ( msr )
     {
-        unsigned int eax, ebx, ecx, index;
+        unsigned int index;
 
     case MSR_EFER:
         *msr_content = v->arch.hvm_vcpu.guest_efer;
@@ -3704,53 +3684,49 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
         break;
 
     case MSR_MTRRcap:
-        if ( !mtrr )
+        if ( !d->arch.cpuid->basic.mtrr )
             goto gp_fault;
         *msr_content = v->arch.hvm_vcpu.mtrr.mtrr_cap;
         break;
     case MSR_MTRRdefType:
-        if ( !mtrr )
+        if ( !d->arch.cpuid->basic.mtrr )
             goto gp_fault;
         *msr_content = v->arch.hvm_vcpu.mtrr.def_type |
                        (v->arch.hvm_vcpu.mtrr.enabled << 10);
         break;
     case MSR_MTRRfix64K_00000:
-        if ( !mtrr )
+        if ( !d->arch.cpuid->basic.mtrr )
             goto gp_fault;
         *msr_content = fixed_range_base[0];
         break;
     case MSR_MTRRfix16K_80000:
     case MSR_MTRRfix16K_A0000:
-        if ( !mtrr )
+        if ( !d->arch.cpuid->basic.mtrr )
             goto gp_fault;
         index = msr - MSR_MTRRfix16K_80000;
         *msr_content = fixed_range_base[index + 1];
         break;
     case MSR_MTRRfix4K_C0000...MSR_MTRRfix4K_F8000:
-        if ( !mtrr )
+        if ( !d->arch.cpuid->basic.mtrr )
             goto gp_fault;
         index = msr - MSR_MTRRfix4K_C0000;
         *msr_content = fixed_range_base[index + 3];
         break;
     case MSR_IA32_MTRR_PHYSBASE(0)...MSR_IA32_MTRR_PHYSMASK(MTRR_VCNT-1):
-        if ( !mtrr )
+        if ( !d->arch.cpuid->basic.mtrr )
             goto gp_fault;
         index = msr - MSR_IA32_MTRR_PHYSBASE(0);
         *msr_content = var_range_base[index];
         break;
 
     case MSR_IA32_XSS:
-        ecx = 1;
-        hvm_cpuid(XSTATE_CPUID, &eax, NULL, &ecx, NULL);
-        if ( !(eax & cpufeat_mask(X86_FEATURE_XSAVES)) )
+        if ( !d->arch.cpuid->xstate.xsaves )
             goto gp_fault;
         *msr_content = v->arch.hvm_vcpu.msr_xss;
         break;
 
     case MSR_IA32_BNDCFGS:
-        ecx = 0;
-        hvm_cpuid(7, NULL, &ebx, &ecx, NULL);
-        if ( !(ebx & cpufeat_mask(X86_FEATURE_MPX)) ||
+        if ( !d->arch.cpuid->feat.mpx ||
             !hvm_get_guest_bndcfgs(v, msr_content) )
             goto gp_fault;
         break;
@@ -3791,21 +3767,12 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
                             bool_t may_defer)
 {
     struct vcpu *v = current;
-    bool mtrr = false;
+    struct domain *d = v->domain;
     int ret = X86EMUL_OKAY;
 
     HVMTRACE_3D(MSR_WRITE, msr,
                 (uint32_t)msr_content, (uint32_t)(msr_content >> 32));
 
-    if ( msr >= MSR_IA32_MTRR_PHYSBASE(0) && msr <= MSR_MTRRdefType )
-    {
-        unsigned int edx;
-
-        hvm_cpuid(1, NULL, NULL, NULL, &edx);
-        if ( edx & cpufeat_mask(X86_FEATURE_MTRR) )
-            mtrr = true;
-    }
-
     if ( may_defer && unlikely(monitored_msr(v->domain, msr)) )
     {
         ASSERT(v->arch.vm_event);
@@ -3821,7 +3788,7 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
 
     switch ( msr )
     {
-        unsigned int eax, ebx, ecx, index;
+        unsigned int index;
 
     case MSR_EFER:
         if ( hvm_set_efer(msr_content) )
@@ -3867,14 +3834,14 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
         goto gp_fault;
 
     case MSR_MTRRdefType:
-        if ( !mtrr )
+        if ( !d->arch.cpuid->basic.mtrr )
             goto gp_fault;
         if ( !mtrr_def_type_msr_set(v->domain, &v->arch.hvm_vcpu.mtrr,
                                     msr_content) )
            goto gp_fault;
         break;
     case MSR_MTRRfix64K_00000:
-        if ( !mtrr )
+        if ( !d->arch.cpuid->basic.mtrr )
             goto gp_fault;
         if ( !mtrr_fix_range_msr_set(v->domain, &v->arch.hvm_vcpu.mtrr, 0,
                                      msr_content) )
@@ -3882,7 +3849,7 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
         break;
     case MSR_MTRRfix16K_80000:
     case MSR_MTRRfix16K_A0000:
-        if ( !mtrr )
+        if ( !d->arch.cpuid->basic.mtrr )
             goto gp_fault;
         index = msr - MSR_MTRRfix16K_80000 + 1;
         if ( !mtrr_fix_range_msr_set(v->domain, &v->arch.hvm_vcpu.mtrr,
@@ -3890,7 +3857,7 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
             goto gp_fault;
         break;
     case MSR_MTRRfix4K_C0000...MSR_MTRRfix4K_F8000:
-        if ( !mtrr )
+        if ( !d->arch.cpuid->basic.mtrr )
             goto gp_fault;
         index = msr - MSR_MTRRfix4K_C0000 + 3;
         if ( !mtrr_fix_range_msr_set(v->domain, &v->arch.hvm_vcpu.mtrr,
@@ -3898,7 +3865,7 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
             goto gp_fault;
         break;
     case MSR_IA32_MTRR_PHYSBASE(0)...MSR_IA32_MTRR_PHYSMASK(MTRR_VCNT-1):
-        if ( !mtrr )
+        if ( !d->arch.cpuid->basic.mtrr )
             goto gp_fault;
         if ( !mtrr_var_range_msr_set(v->domain, &v->arch.hvm_vcpu.mtrr,
                                      msr, msr_content) )
@@ -3906,18 +3873,14 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
         break;
 
     case MSR_IA32_XSS:
-        ecx = 1;
-        hvm_cpuid(XSTATE_CPUID, &eax, NULL, &ecx, NULL);
         /* No XSS features currently supported for guests. */
-        if ( !(eax & cpufeat_mask(X86_FEATURE_XSAVES)) || msr_content != 0 )
+        if ( !d->arch.cpuid->xstate.xsaves || msr_content != 0 )
             goto gp_fault;
         v->arch.hvm_vcpu.msr_xss = msr_content;
         break;
 
     case MSR_IA32_BNDCFGS:
-        ecx = 0;
-        hvm_cpuid(7, NULL, &ebx, &ecx, NULL);
-        if ( !(ebx & cpufeat_mask(X86_FEATURE_MPX)) ||
+        if ( !d->arch.cpuid->feat.mpx ||
             !hvm_set_guest_bndcfgs(v, msr_content) )
             goto gp_fault;
         break;