From patchwork Wed Jul 19 11:57:52 2017
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 9851871
From: Andrew Cooper
To: Xen-devel
Date: Wed, 19 Jul 2017 12:57:52 +0100
Message-ID: <1500465477-23793-2-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1500465477-23793-1-git-send-email-andrew.cooper3@citrix.com>
References: <1500465477-23793-1-git-send-email-andrew.cooper3@citrix.com>
Cc: Andrew Cooper, Kevin Tian, Jun Nakajima, Jan Beulich
Subject: [Xen-devel] [PATCH 1/6] x86/vmx: Improvements to vmx_{dis,en}able_intercept_for_msr()

 * Shorten the names to vmx_{clear,set}_msr_intercept()
 * Use an enumeration for MSR_TYPE rather than a plain integer
 * Introduce VMX_MSR_RW, as most callers alter both the read and write
   intercept at the same time.

No functional change.

Signed-off-by: Andrew Cooper
Acked-by: Kevin Tian
---
CC: Jan Beulich
CC: Jun Nakajima
CC: Kevin Tian
---
 xen/arch/x86/hvm/vmx/vmcs.c        | 38 ++++++++++++++++++++------------------
 xen/arch/x86/hvm/vmx/vmx.c         | 34 +++++++++++++---------------------
 xen/include/asm-x86/hvm/vmx/vmcs.h | 15 ++++++++++-----
 3 files changed, 43 insertions(+), 44 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 8103b20..e36a908 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -802,7 +802,8 @@ static void vmx_set_host_env(struct vcpu *v)
               (unsigned long)&get_cpu_info()->guest_cpu_user_regs.error_code);
 }
 
-void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
+void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                             enum vmx_msr_intercept_type type)
 {
     unsigned long *msr_bitmap = v->arch.hvm_vmx.msr_bitmap;
     struct domain *d = v->domain;
@@ -821,17 +822,17 @@ void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
      */
     if ( msr <= 0x1fff )
     {
-        if ( type & MSR_TYPE_R )
+        if ( type & VMX_MSR_R )
             clear_bit(msr, msr_bitmap + 0x000/BYTES_PER_LONG); /* read-low */
-        if ( type & MSR_TYPE_W )
+        if ( type & VMX_MSR_W )
             clear_bit(msr, msr_bitmap + 0x800/BYTES_PER_LONG); /* write-low */
     }
     else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
     {
         msr &= 0x1fff;
-        if ( type & MSR_TYPE_R )
+        if ( type & VMX_MSR_R )
             clear_bit(msr, msr_bitmap + 0x400/BYTES_PER_LONG); /* read-high */
-        if ( type & MSR_TYPE_W )
+        if ( type & VMX_MSR_W )
             clear_bit(msr, msr_bitmap + 0xc00/BYTES_PER_LONG); /* write-high */
     }
     else
@@ -842,7 +843,8 @@
 }
 
-void vmx_enable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
+void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
+                           enum vmx_msr_intercept_type type)
 {
     unsigned long *msr_bitmap = v->arch.hvm_vmx.msr_bitmap;
 
@@ -857,17 +859,17 @@ void vmx_enable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
      */
     if ( msr <= 0x1fff )
    {
-        if ( type & MSR_TYPE_R )
+        if ( type & VMX_MSR_R )
             set_bit(msr, msr_bitmap + 0x000/BYTES_PER_LONG); /* read-low */
-        if ( type & MSR_TYPE_W )
+        if ( type & VMX_MSR_W )
             set_bit(msr, msr_bitmap + 0x800/BYTES_PER_LONG); /* write-low */
     }
     else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
     {
         msr &= 0x1fff;
-        if ( type & MSR_TYPE_R )
+        if ( type & VMX_MSR_R )
             set_bit(msr, msr_bitmap + 0x400/BYTES_PER_LONG); /* read-high */
-        if ( type & MSR_TYPE_W )
+        if ( type & VMX_MSR_W )
             set_bit(msr, msr_bitmap + 0xc00/BYTES_PER_LONG); /* write-high */
     }
     else
@@ -1104,17 +1106,17 @@ static int construct_vmcs(struct vcpu *v)
         v->arch.hvm_vmx.msr_bitmap = msr_bitmap;
         __vmwrite(MSR_BITMAP, virt_to_maddr(msr_bitmap));
 
-        vmx_disable_intercept_for_msr(v, MSR_FS_BASE, MSR_TYPE_R | MSR_TYPE_W);
-        vmx_disable_intercept_for_msr(v, MSR_GS_BASE, MSR_TYPE_R | MSR_TYPE_W);
-        vmx_disable_intercept_for_msr(v, MSR_SHADOW_GS_BASE, MSR_TYPE_R | MSR_TYPE_W);
-        vmx_disable_intercept_for_msr(v, MSR_IA32_SYSENTER_CS, MSR_TYPE_R | MSR_TYPE_W);
-        vmx_disable_intercept_for_msr(v, MSR_IA32_SYSENTER_ESP, MSR_TYPE_R | MSR_TYPE_W);
-        vmx_disable_intercept_for_msr(v, MSR_IA32_SYSENTER_EIP, MSR_TYPE_R | MSR_TYPE_W);
+        vmx_clear_msr_intercept(v, MSR_FS_BASE, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_GS_BASE, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_SHADOW_GS_BASE, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_CS, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_ESP, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_EIP, VMX_MSR_RW);
         if ( paging_mode_hap(d) && (!iommu_enabled || iommu_snoop) )
-            vmx_disable_intercept_for_msr(v, MSR_IA32_CR_PAT, MSR_TYPE_R | MSR_TYPE_W);
+            vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, VMX_MSR_RW);
         if ( (vmexit_ctl & VM_EXIT_CLEAR_BNDCFGS) &&
              (vmentry_ctl & VM_ENTRY_LOAD_BNDCFGS) )
-            vmx_disable_intercept_for_msr(v, MSR_IA32_BNDCFGS, MSR_TYPE_R | MSR_TYPE_W);
+            vmx_clear_msr_intercept(v, MSR_IA32_BNDCFGS, VMX_MSR_RW);
     }
 
     /* I/O access bitmap. */
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 69ce3aa..ff97e6a 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1374,8 +1374,7 @@ static void vmx_handle_cd(struct vcpu *v, unsigned long value)
 
             vmx_get_guest_pat(v, pat);
             vmx_set_guest_pat(v, uc_pat);
-            vmx_enable_intercept_for_msr(v, MSR_IA32_CR_PAT,
-                                         MSR_TYPE_R | MSR_TYPE_W);
+            vmx_set_msr_intercept(v, MSR_IA32_CR_PAT, VMX_MSR_RW);
 
             wbinvd();               /* flush possibly polluted cache */
             hvm_asid_flush_vcpu(v); /* invalidate memory type cached in TLB */
@@ -1386,8 +1385,7 @@ static void vmx_handle_cd(struct vcpu *v, unsigned long value)
             v->arch.hvm_vcpu.cache_mode = NORMAL_CACHE_MODE;
             vmx_set_guest_pat(v, *pat);
             if ( !iommu_enabled || iommu_snoop )
-                vmx_disable_intercept_for_msr(v, MSR_IA32_CR_PAT,
-                                              MSR_TYPE_R | MSR_TYPE_W);
+                vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, VMX_MSR_RW);
             hvm_asid_flush_vcpu(v); /* no need to flush cache */
         }
     }
@@ -2127,7 +2125,7 @@ static void vmx_enable_msr_interception(struct domain *d, uint32_t msr)
     struct vcpu *v;
 
     for_each_vcpu ( d, v )
-        vmx_enable_intercept_for_msr(v, msr, MSR_TYPE_W);
+        vmx_set_msr_intercept(v, msr, VMX_MSR_W);
 }
 
 static bool_t vmx_is_singlestep_supported(void)
@@ -3031,23 +3029,17 @@ void vmx_vlapic_msr_changed(struct vcpu *v)
         {
             for ( msr = MSR_IA32_APICBASE_MSR;
                   msr <= MSR_IA32_APICBASE_MSR + 0xff; msr++ )
-                vmx_disable_intercept_for_msr(v, msr, MSR_TYPE_R);
-
-            vmx_enable_intercept_for_msr(v, MSR_IA32_APICPPR_MSR,
-                                         MSR_TYPE_R);
-            vmx_enable_intercept_for_msr(v, MSR_IA32_APICTMICT_MSR,
-                                         MSR_TYPE_R);
-            vmx_enable_intercept_for_msr(v, MSR_IA32_APICTMCCT_MSR,
-                                         MSR_TYPE_R);
+                vmx_clear_msr_intercept(v, msr, VMX_MSR_R);
+
+            vmx_set_msr_intercept(v, MSR_IA32_APICPPR_MSR, VMX_MSR_R);
+            vmx_set_msr_intercept(v, MSR_IA32_APICTMICT_MSR, VMX_MSR_R);
+            vmx_set_msr_intercept(v, MSR_IA32_APICTMCCT_MSR, VMX_MSR_R);
         }
         if ( cpu_has_vmx_virtual_intr_delivery )
         {
-            vmx_disable_intercept_for_msr(v, MSR_IA32_APICTPR_MSR,
-                                          MSR_TYPE_W);
-            vmx_disable_intercept_for_msr(v, MSR_IA32_APICEOI_MSR,
-                                          MSR_TYPE_W);
-            vmx_disable_intercept_for_msr(v, MSR_IA32_APICSELF_MSR,
-                                          MSR_TYPE_W);
+            vmx_clear_msr_intercept(v, MSR_IA32_APICTPR_MSR, VMX_MSR_W);
+            vmx_clear_msr_intercept(v, MSR_IA32_APICEOI_MSR, VMX_MSR_W);
+            vmx_clear_msr_intercept(v, MSR_IA32_APICSELF_MSR, VMX_MSR_W);
         }
     }
     else
@@ -3058,7 +3050,7 @@ void vmx_vlapic_msr_changed(struct vcpu *v)
                  SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE) )
         for ( msr = MSR_IA32_APICBASE_MSR;
               msr <= MSR_IA32_APICBASE_MSR + 0xff; msr++ )
-            vmx_enable_intercept_for_msr(v, msr, MSR_TYPE_R | MSR_TYPE_W);
+            vmx_set_msr_intercept(v, msr, VMX_MSR_RW);
 
     vmx_update_secondary_exec_control(v);
     vmx_vmcs_exit(v);
@@ -3107,7 +3099,7 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
             for ( i = 0; (rc == 0) && (i < lbr->count); i++ )
                 if ( (rc = vmx_add_guest_msr(lbr->base + i)) == 0 )
                 {
-                    vmx_disable_intercept_for_msr(v, lbr->base + i, MSR_TYPE_R | MSR_TYPE_W);
+                    vmx_clear_msr_intercept(v, lbr->base + i, VMX_MSR_RW);
                     if ( lbr_tsx_fixup_needed )
                         v->arch.hvm_vmx.lbr_fixup_enabled |= FIXUP_LBR_TSX;
                     if ( bdw_erratum_bdf14_fixup_needed )
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index e3cdfdf..e318dc2 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -498,9 +498,6 @@ enum vmcs_field {
 
 #define VMCS_VPID_WIDTH 16
 
-#define MSR_TYPE_R 1
-#define MSR_TYPE_W 2
-
 #define VMX_GUEST_MSR 0
 #define VMX_HOST_MSR 1
 
@@ -521,8 +518,16 @@ enum vmx_insn_errno
     VMX_INSN_FAIL_INVALID = ~0,
 };
 
-void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type);
-void vmx_enable_intercept_for_msr(struct vcpu *v, u32 msr, int type);
+enum vmx_msr_intercept_type {
+    VMX_MSR_R = 1,
+    VMX_MSR_W = 2,
+    VMX_MSR_RW = VMX_MSR_R | VMX_MSR_W,
+};
+
+void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                             enum vmx_msr_intercept_type type);
+void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
+                           enum vmx_msr_intercept_type type);
 int vmx_read_guest_msr(u32 msr, u64 *val);
 int vmx_write_guest_msr(u32 msr, u64 val);
 struct vmx_msr_entry *vmx_find_msr(u32 msr, int type);
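
For readers unfamiliar with the VT-x MSR bitmap layout these helpers manipulate, the sketch below mirrors the low-MSR half of the logic in the patch. It is illustrative only and not Xen code: demo_clear_msr_intercept(), the local clear_bit() and the bitmap handling are simplified assumptions made for this example; only the byte offsets (read-low at 0x000, write-low at 0x800) and the vmx_msr_intercept_type enumeration come from the patch itself.

/*
 * Standalone illustration (not Xen code) of the MSR bitmap layout used by
 * vmx_{clear,set}_msr_intercept(): one 4KiB page with read-low bits at
 * byte offset 0x000, read-high at 0x400, write-low at 0x800 and
 * write-high at 0xc00.  A clear bit means "do not intercept".
 */
#include <limits.h>
#include <stdio.h>
#include <string.h>

enum vmx_msr_intercept_type {
    VMX_MSR_R  = 1,
    VMX_MSR_W  = 2,
    VMX_MSR_RW = VMX_MSR_R | VMX_MSR_W,
};

#define BYTES_PER_LONG (sizeof(unsigned long))
#define BITS_PER_LONG  (BYTES_PER_LONG * CHAR_BIT)

/* Minimal stand-in for Xen's clear_bit(). */
static void clear_bit(unsigned int bit, unsigned long *addr)
{
    addr[bit / BITS_PER_LONG] &= ~(1UL << (bit % BITS_PER_LONG));
}

/* Hypothetical, simplified equivalent for MSRs 0x00000000-0x00001fff only. */
static void demo_clear_msr_intercept(unsigned long *msr_bitmap,
                                     unsigned int msr,
                                     enum vmx_msr_intercept_type type)
{
    if ( msr > 0x1fff )
        return;                     /* high-MSR ranges omitted for brevity */

    if ( type & VMX_MSR_R )
        clear_bit(msr, msr_bitmap + 0x000 / BYTES_PER_LONG); /* read-low */
    if ( type & VMX_MSR_W )
        clear_bit(msr, msr_bitmap + 0x800 / BYTES_PER_LONG); /* write-low */
}

int main(void)
{
    unsigned long bitmap[0x1000 / sizeof(unsigned long)];

    memset(bitmap, 0xff, sizeof(bitmap));      /* start by intercepting all */
    demo_clear_msr_intercept(bitmap, 0x174 /* IA32_SYSENTER_CS */, VMX_MSR_RW);

    printf("read-low word:  %#lx\n", bitmap[0x174 / BITS_PER_LONG]);
    printf("write-low word: %#lx\n",
           bitmap[0x800 / BYTES_PER_LONG + 0x174 / BITS_PER_LONG]);

    return 0;
}

The VMX_MSR_RW enumerator captures the common case: a call site that previously passed MSR_TYPE_R | MSR_TYPE_W now passes a single value, which is what most of the converted callers above do.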