From patchwork Thu May 16 08:25:09 2019
X-Patchwork-Id: 10945901
From: Luwei Kang <luwei.kang@intel.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, hpa@zytor.com,
    x86@kernel.org, pbonzini@redhat.com, rkrcmar@redhat.com,
    Luwei Kang <luwei.kang@intel.com>
Subject: [PATCH v1 1/6] x86/fpu: Introduce new fpu state for Intel processor trace
Date: Thu, 16 May 2019 16:25:09 +0800
Message-Id: <1557995114-21629-2-git-send-email-luwei.kang@intel.com>
In-Reply-To: <1557995114-21629-1-git-send-email-luwei.kang@intel.com>
References: <1557995114-21629-1-git-send-email-luwei.kang@intel.com>

Introduce a new fpu state structure, pt_state, to hold the Intel
processor trace configuration. The upcoming use of XSAVES/XRSTORS to
switch the Intel PT configuration on VM-entry/exit will rely on this
structure.

Signed-off-by: Luwei Kang <luwei.kang@intel.com>
---
 arch/x86/include/asm/fpu/types.h | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/arch/x86/include/asm/fpu/types.h b/arch/x86/include/asm/fpu/types.h
index 2e32e17..8cbb42e 100644
--- a/arch/x86/include/asm/fpu/types.h
+++ b/arch/x86/include/asm/fpu/types.h
@@ -221,6 +221,19 @@ struct avx_512_hi16_state {
 } __packed;
 
 /*
+ * State component 8 is used for some 64-bit registers
+ * of Intel processor trace.
+ */
+struct pt_state {
+	u64 rtit_ctl;
+	u64 rtit_output_base;
+	u64 rtit_output_mask;
+	u64 rtit_status;
+	u64 rtit_cr3_match;
+	u64 rtit_addrx_ab[0];
+} __packed;
+
+/*
  * State component 9: 32-bit PKRU register.  The state is
  * 8 bytes long but only 4 bytes is used currently.
  */
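[For illustration only, not part of the patch: since rtit_addrx_ab[] is a
flexible array, any allocation of pt_state has to reserve room for the
ADDRn_A/ADDRn_B pairs reported by CPUID. A hypothetical sizing helper,
mirroring the arithmetic that patch 3 of this series later open-codes:

	/*
	 * Illustrative sketch: bytes needed for one pt_state covering
	 * 'n' address ranges (each range is an A/B pair of u64 MSRs).
	 */
	static inline size_t pt_state_size(unsigned int n)
	{
		return sizeof(struct pt_state) + 2 * n * sizeof(u64);
	}
]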
From patchwork Thu May 16 08:25:10 2019
X-Patchwork-Id: 10945903
From: Luwei Kang <luwei.kang@intel.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, hpa@zytor.com,
    x86@kernel.org, pbonzini@redhat.com, rkrcmar@redhat.com,
    Luwei Kang <luwei.kang@intel.com>
Subject: [PATCH v1 2/6] KVM: VMX: Reuse the pt_state structure for PT context
Date: Thu, 16 May 2019 16:25:10 +0800
Message-Id: <1557995114-21629-3-git-send-email-luwei.kang@intel.com>
In-Reply-To: <1557995114-21629-1-git-send-email-luwei.kang@intel.com>
References: <1557995114-21629-1-git-send-email-luwei.kang@intel.com>

Remove the pt_ctx structure and use pt_state to hold the PT
configuration instead, since the two structures stored the same data.
Add a *_ctx suffix to the fields to distinguish them from the upcoming
host and guest fpu pointers for PT state.
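[Illustrative note, not code from the patch: the separate addr_a[]/addr_b[]
arrays can be flattened into one rtit_addrx_ab[] array because the
MSR_IA32_RTIT_ADDRn_A/MSR_IA32_RTIT_ADDRn_B MSRs are numbered
consecutively, so a single index derived from the MSR number selects the
right slot, which is exactly what vmx_get_msr()/vmx_set_msr() do below:

	/* rtit_addrx_ab[] layout: { ADDR0_A, ADDR0_B, ADDR1_A, ADDR1_B, ... } */
	u32 index = msr_info->index - MSR_IA32_RTIT_ADDR0_A;	/* 0 .. 2*n-1 */
	msr_info->data = vmx->pt_desc.guest_ctx.rtit_addrx_ab[index];
]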
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
---
 arch/x86/kvm/vmx/nested.c |  2 +-
 arch/x86/kvm/vmx/vmx.c    | 96 +++++++++++++++++++++--------------------------
 arch/x86/kvm/vmx/vmx.h    | 16 +-------
 3 files changed, 46 insertions(+), 68 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index f4b1ae4..e8d5c61 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4201,7 +4201,7 @@ static int enter_vmx_operation(struct kvm_vcpu *vcpu)
 	vmx->nested.vmxon = true;
 
 	if (pt_mode == PT_MODE_HOST_GUEST) {
-		vmx->pt_desc.guest.ctl = 0;
+		vmx->pt_desc.guest_ctx.rtit_ctl = 0;
 		pt_update_intercept_for_msr(vmx);
 	}
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 0db7ded..4234e40e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -976,32 +976,28 @@ static unsigned long segment_base(u16 selector)
 }
 #endif
 
-static inline void pt_load_msr(struct pt_ctx *ctx, u32 addr_range)
+static inline void pt_load_msr(struct pt_state *ctx, u32 addr_range)
 {
 	u32 i;
 
-	wrmsrl(MSR_IA32_RTIT_STATUS, ctx->status);
-	wrmsrl(MSR_IA32_RTIT_OUTPUT_BASE, ctx->output_base);
-	wrmsrl(MSR_IA32_RTIT_OUTPUT_MASK, ctx->output_mask);
-	wrmsrl(MSR_IA32_RTIT_CR3_MATCH, ctx->cr3_match);
-	for (i = 0; i < addr_range; i++) {
-		wrmsrl(MSR_IA32_RTIT_ADDR0_A + i * 2, ctx->addr_a[i]);
-		wrmsrl(MSR_IA32_RTIT_ADDR0_B + i * 2, ctx->addr_b[i]);
-	}
+	wrmsrl(MSR_IA32_RTIT_OUTPUT_BASE, ctx->rtit_output_base);
+	wrmsrl(MSR_IA32_RTIT_OUTPUT_MASK, ctx->rtit_output_mask);
+	wrmsrl(MSR_IA32_RTIT_STATUS, ctx->rtit_status);
+	wrmsrl(MSR_IA32_RTIT_CR3_MATCH, ctx->rtit_cr3_match);
+	for (i = 0; i < addr_range * 2; i++)
+		wrmsrl(MSR_IA32_RTIT_ADDR0_A + i, ctx->rtit_addrx_ab[i]);
 }
 
-static inline void pt_save_msr(struct pt_ctx *ctx, u32 addr_range)
+static inline void pt_save_msr(struct pt_state *ctx, u32 addr_range)
 {
 	u32 i;
 
-	rdmsrl(MSR_IA32_RTIT_STATUS, ctx->status);
-	rdmsrl(MSR_IA32_RTIT_OUTPUT_BASE, ctx->output_base);
-	rdmsrl(MSR_IA32_RTIT_OUTPUT_MASK, ctx->output_mask);
-	rdmsrl(MSR_IA32_RTIT_CR3_MATCH, ctx->cr3_match);
-	for (i = 0; i < addr_range; i++) {
-		rdmsrl(MSR_IA32_RTIT_ADDR0_A + i * 2, ctx->addr_a[i]);
-		rdmsrl(MSR_IA32_RTIT_ADDR0_B + i * 2, ctx->addr_b[i]);
-	}
+	rdmsrl(MSR_IA32_RTIT_OUTPUT_BASE, ctx->rtit_output_base);
+	rdmsrl(MSR_IA32_RTIT_OUTPUT_MASK, ctx->rtit_output_mask);
+	rdmsrl(MSR_IA32_RTIT_STATUS, ctx->rtit_status);
+	rdmsrl(MSR_IA32_RTIT_CR3_MATCH, ctx->rtit_cr3_match);
+	for (i = 0; i < addr_range * 2; i++)
+		rdmsrl(MSR_IA32_RTIT_ADDR0_A + i, ctx->rtit_addrx_ab[i]);
 }
 
 static void pt_guest_enter(struct vcpu_vmx *vmx)
@@ -1013,11 +1009,11 @@ static void pt_guest_enter(struct vcpu_vmx *vmx)
 	 * GUEST_IA32_RTIT_CTL is already set in the VMCS.
 	 * Save host state before VM entry.
 	 */
-	rdmsrl(MSR_IA32_RTIT_CTL, vmx->pt_desc.host.ctl);
-	if (vmx->pt_desc.guest.ctl & RTIT_CTL_TRACEEN) {
+	rdmsrl(MSR_IA32_RTIT_CTL, vmx->pt_desc.host_ctx.rtit_ctl);
+	if (vmx->pt_desc.guest_ctx.rtit_ctl & RTIT_CTL_TRACEEN) {
 		wrmsrl(MSR_IA32_RTIT_CTL, 0);
-		pt_save_msr(&vmx->pt_desc.host, vmx->pt_desc.addr_range);
-		pt_load_msr(&vmx->pt_desc.guest, vmx->pt_desc.addr_range);
+		pt_save_msr(&vmx->pt_desc.host_ctx, vmx->pt_desc.addr_range);
+		pt_load_msr(&vmx->pt_desc.guest_ctx, vmx->pt_desc.addr_range);
 	}
 }
 
@@ -1026,13 +1022,13 @@ static void pt_guest_exit(struct vcpu_vmx *vmx)
 	if (pt_mode == PT_MODE_SYSTEM)
 		return;
 
-	if (vmx->pt_desc.guest.ctl & RTIT_CTL_TRACEEN) {
-		pt_save_msr(&vmx->pt_desc.guest, vmx->pt_desc.addr_range);
-		pt_load_msr(&vmx->pt_desc.host, vmx->pt_desc.addr_range);
+	if (vmx->pt_desc.guest_ctx.rtit_ctl & RTIT_CTL_TRACEEN) {
+		pt_save_msr(&vmx->pt_desc.guest_ctx, vmx->pt_desc.addr_range);
+		pt_load_msr(&vmx->pt_desc.host_ctx, vmx->pt_desc.addr_range);
 	}
 
 	/* Reload host state (IA32_RTIT_CTL will be cleared on VM exit). */
-	wrmsrl(MSR_IA32_RTIT_CTL, vmx->pt_desc.host.ctl);
+	wrmsrl(MSR_IA32_RTIT_CTL, vmx->pt_desc.host_ctx.rtit_ctl);
 }
 
 void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
@@ -1402,8 +1398,8 @@ static int vmx_rtit_ctl_check(struct kvm_vcpu *vcpu, u64 data)
 	 * Any attempt to modify IA32_RTIT_CTL while TraceEn is set will
 	 * result in a #GP unless the same write also clears TraceEn.
 	 */
-	if ((vmx->pt_desc.guest.ctl & RTIT_CTL_TRACEEN) &&
-		((vmx->pt_desc.guest.ctl ^ data) & ~RTIT_CTL_TRACEEN))
+	if ((vmx->pt_desc.guest_ctx.rtit_ctl & RTIT_CTL_TRACEEN) &&
+		((vmx->pt_desc.guest_ctx.rtit_ctl ^ data) & ~RTIT_CTL_TRACEEN))
 		return 1;
 
 	/*
@@ -1725,19 +1721,19 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_RTIT_CTL:
 		if (pt_mode != PT_MODE_HOST_GUEST)
 			return 1;
-		msr_info->data = vmx->pt_desc.guest.ctl;
+		msr_info->data = vmx->pt_desc.guest_ctx.rtit_ctl;
 		break;
 	case MSR_IA32_RTIT_STATUS:
 		if (pt_mode != PT_MODE_HOST_GUEST)
 			return 1;
-		msr_info->data = vmx->pt_desc.guest.status;
+		msr_info->data = vmx->pt_desc.guest_ctx.rtit_status;
 		break;
 	case MSR_IA32_RTIT_CR3_MATCH:
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
 			!intel_pt_validate_cap(vmx->pt_desc.caps,
 						PT_CAP_cr3_filtering))
 			return 1;
-		msr_info->data = vmx->pt_desc.guest.cr3_match;
+		msr_info->data = vmx->pt_desc.guest_ctx.rtit_cr3_match;
 		break;
 	case MSR_IA32_RTIT_OUTPUT_BASE:
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
 			(!intel_pt_validate_cap(vmx->pt_desc.caps,
@@ -1746,7 +1742,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			!intel_pt_validate_cap(vmx->pt_desc.caps,
 					PT_CAP_single_range_output)))
 			return 1;
-		msr_info->data = vmx->pt_desc.guest.output_base;
+		msr_info->data = vmx->pt_desc.guest_ctx.rtit_output_base;
 		break;
 	case MSR_IA32_RTIT_OUTPUT_MASK:
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
@@ -1755,7 +1751,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			!intel_pt_validate_cap(vmx->pt_desc.caps,
 					PT_CAP_single_range_output)))
 			return 1;
-		msr_info->data = vmx->pt_desc.guest.output_mask;
+		msr_info->data = vmx->pt_desc.guest_ctx.rtit_output_mask;
 		break;
 	case MSR_IA32_RTIT_ADDR0_A ... MSR_IA32_RTIT_ADDR3_B:
 		index = msr_info->index - MSR_IA32_RTIT_ADDR0_A;
@@ -1763,10 +1759,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		    (index >= 2 * intel_pt_validate_cap(vmx->pt_desc.caps,
 					PT_CAP_num_address_ranges)))
 			return 1;
-		if (index % 2)
-			msr_info->data = vmx->pt_desc.guest.addr_b[index / 2];
-		else
-			msr_info->data = vmx->pt_desc.guest.addr_a[index / 2];
+		msr_info->data = vmx->pt_desc.guest_ctx.rtit_addrx_ab[index];
 		break;
 	case MSR_TSC_AUX:
 		if (!msr_info->host_initiated &&
@@ -1953,56 +1946,53 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			vmx->nested.vmxon)
 			return 1;
 		vmcs_write64(GUEST_IA32_RTIT_CTL, data);
-		vmx->pt_desc.guest.ctl = data;
+		vmx->pt_desc.guest_ctx.rtit_ctl = data;
 		pt_update_intercept_for_msr(vmx);
 		break;
 	case MSR_IA32_RTIT_STATUS:
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
-			(vmx->pt_desc.guest.ctl & RTIT_CTL_TRACEEN) ||
+			(vmx->pt_desc.guest_ctx.rtit_ctl & RTIT_CTL_TRACEEN) ||
 			(data & MSR_IA32_RTIT_STATUS_MASK))
 			return 1;
-		vmx->pt_desc.guest.status = data;
+		vmx->pt_desc.guest_ctx.rtit_status = data;
 		break;
 	case MSR_IA32_RTIT_CR3_MATCH:
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
-			(vmx->pt_desc.guest.ctl & RTIT_CTL_TRACEEN) ||
+			(vmx->pt_desc.guest_ctx.rtit_ctl & RTIT_CTL_TRACEEN) ||
 			!intel_pt_validate_cap(vmx->pt_desc.caps,
 						PT_CAP_cr3_filtering))
 			return 1;
-		vmx->pt_desc.guest.cr3_match = data;
+		vmx->pt_desc.guest_ctx.rtit_cr3_match = data;
 		break;
 	case MSR_IA32_RTIT_OUTPUT_BASE:
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
-			(vmx->pt_desc.guest.ctl & RTIT_CTL_TRACEEN) ||
+			(vmx->pt_desc.guest_ctx.rtit_ctl & RTIT_CTL_TRACEEN) ||
 			(!intel_pt_validate_cap(vmx->pt_desc.caps,
 					PT_CAP_topa_output) &&
 			 !intel_pt_validate_cap(vmx->pt_desc.caps,
 					PT_CAP_single_range_output)) ||
 			(data & MSR_IA32_RTIT_OUTPUT_BASE_MASK))
 			return 1;
-		vmx->pt_desc.guest.output_base = data;
+		vmx->pt_desc.guest_ctx.rtit_output_base = data;
 		break;
 	case MSR_IA32_RTIT_OUTPUT_MASK:
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
-			(vmx->pt_desc.guest.ctl & RTIT_CTL_TRACEEN) ||
+			(vmx->pt_desc.guest_ctx.rtit_ctl & RTIT_CTL_TRACEEN) ||
 			(!intel_pt_validate_cap(vmx->pt_desc.caps,
 					PT_CAP_topa_output) &&
 			 !intel_pt_validate_cap(vmx->pt_desc.caps,
 					PT_CAP_single_range_output)))
 			return 1;
-		vmx->pt_desc.guest.output_mask = data;
+		vmx->pt_desc.guest_ctx.rtit_output_mask = data;
 		break;
 	case MSR_IA32_RTIT_ADDR0_A ... MSR_IA32_RTIT_ADDR3_B:
 		index = msr_info->index - MSR_IA32_RTIT_ADDR0_A;
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
-			(vmx->pt_desc.guest.ctl & RTIT_CTL_TRACEEN) ||
+			(vmx->pt_desc.guest_ctx.rtit_ctl & RTIT_CTL_TRACEEN) ||
 			(index >= 2 * intel_pt_validate_cap(vmx->pt_desc.caps,
 					PT_CAP_num_address_ranges)))
 			return 1;
-		if (index % 2)
-			vmx->pt_desc.guest.addr_b[index / 2] = data;
-		else
-			vmx->pt_desc.guest.addr_a[index / 2] = data;
+		vmx->pt_desc.guest_ctx.rtit_addrx_ab[index] = data;
 		break;
 	case MSR_TSC_AUX:
 		if (!msr_info->host_initiated &&
@@ -3591,7 +3581,7 @@ void vmx_update_msr_bitmap(struct kvm_vcpu *vcpu)
 void pt_update_intercept_for_msr(struct vcpu_vmx *vmx)
 {
 	unsigned long *msr_bitmap = vmx->vmcs01.msr_bitmap;
-	bool flag = !(vmx->pt_desc.guest.ctl & RTIT_CTL_TRACEEN);
+	bool flag = !(vmx->pt_desc.guest_ctx.rtit_ctl & RTIT_CTL_TRACEEN);
 	u32 i;
 
 	vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_RTIT_STATUS,
@@ -4105,7 +4095,7 @@ static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
 	if (pt_mode == PT_MODE_HOST_GUEST) {
 		memset(&vmx->pt_desc, 0, sizeof(vmx->pt_desc));
 		/* Bit[6~0] are forced to 1, writes are ignored. */
-		vmx->pt_desc.guest.output_mask = 0x7F;
+		vmx->pt_desc.guest_ctx.rtit_output_mask = 0x7F;
 		vmcs_write64(GUEST_IA32_RTIT_CTL, 0);
 	}
 }
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 63d37cc..11ad856 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -65,24 +65,12 @@ struct pi_desc {
 	u32 rsvd[6];
 } __aligned(64);
 
-#define RTIT_ADDR_RANGE		4
-
-struct pt_ctx {
-	u64 ctl;
-	u64 status;
-	u64 output_base;
-	u64 output_mask;
-	u64 cr3_match;
-	u64 addr_a[RTIT_ADDR_RANGE];
-	u64 addr_b[RTIT_ADDR_RANGE];
-};
-
 struct pt_desc {
 	u64 ctl_bitmask;
 	u32 addr_range;
 	u32 caps[PT_CPUID_REGS_NUM * PT_CPUID_LEAVES];
-	struct pt_ctx host;
-	struct pt_ctx guest;
+	struct pt_state host_ctx;
+	struct pt_state guest_ctx;
 };
 
 /*

From patchwork Thu May 16 08:25:11 2019
X-Patchwork-Id: 10945905
From: Luwei Kang <luwei.kang@intel.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, hpa@zytor.com,
    x86@kernel.org, pbonzini@redhat.com, rkrcmar@redhat.com,
    Luwei Kang <luwei.kang@intel.com>
Subject: [PATCH v1 3/6] KVM: VMX: Dynamically allocate Intel PT configuration state
Date: Thu, 16 May 2019 16:25:11 +0800
Message-Id: <1557995114-21629-4-git-send-email-luwei.kang@intel.com>
In-Reply-To: <1557995114-21629-1-git-send-email-luwei.kang@intel.com>
References: <1557995114-21629-1-git-send-email-luwei.kang@intel.com>

Turn the Intel PT configuration state into dynamically allocated
structures so that the state buffers only need to be allocated when
Intel PT works in HOST_GUEST mode.
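[The single allocation that pt_init() carves up below can be pictured like
this (illustrative diagram only; pt_state_sz is the size computed from
PT_CAP_num_address_ranges, with 'n' address ranges):

	+----------------------------+ <- vmx->pt_desc (one kzalloc)
	| struct pt_desc             |
	+----------------------------+ <- host_ctx  = (struct pt_state *)(pt_desc + 1)
	| struct pt_state + 2*n u64s |
	+----------------------------+ <- guest_ctx = (void *)host_ctx + pt_state_sz
	| struct pt_state + 2*n u64s |
	+----------------------------+
]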
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
---
 arch/x86/kvm/vmx/nested.c |   2 +-
 arch/x86/kvm/vmx/vmx.c    | 202 +++++++++++++++++++++++++++------------------
 arch/x86/kvm/vmx/vmx.h    |   6 +-
 3 files changed, 121 insertions(+), 89 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index e8d5c61..349be88 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4201,7 +4201,7 @@ static int enter_vmx_operation(struct kvm_vcpu *vcpu)
 	vmx->nested.vmxon = true;
 
 	if (pt_mode == PT_MODE_HOST_GUEST) {
-		vmx->pt_desc.guest_ctx.rtit_ctl = 0;
+		vmx->pt_desc->guest_ctx->rtit_ctl = 0;
 		pt_update_intercept_for_msr(vmx);
 	}
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 4234e40e..4595230 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1009,11 +1009,11 @@ static void pt_guest_enter(struct vcpu_vmx *vmx)
 	 * GUEST_IA32_RTIT_CTL is already set in the VMCS.
 	 * Save host state before VM entry.
 	 */
-	rdmsrl(MSR_IA32_RTIT_CTL, vmx->pt_desc.host_ctx.rtit_ctl);
-	if (vmx->pt_desc.guest_ctx.rtit_ctl & RTIT_CTL_TRACEEN) {
+	rdmsrl(MSR_IA32_RTIT_CTL, vmx->pt_desc->host_ctx->rtit_ctl);
+	if (vmx->pt_desc->guest_ctx->rtit_ctl & RTIT_CTL_TRACEEN) {
 		wrmsrl(MSR_IA32_RTIT_CTL, 0);
-		pt_save_msr(&vmx->pt_desc.host_ctx, vmx->pt_desc.addr_range);
-		pt_load_msr(&vmx->pt_desc.guest_ctx, vmx->pt_desc.addr_range);
+		pt_save_msr(vmx->pt_desc->host_ctx, vmx->pt_desc->addr_range);
+		pt_load_msr(vmx->pt_desc->guest_ctx, vmx->pt_desc->addr_range);
 	}
 }
 
@@ -1022,13 +1022,35 @@ static void pt_guest_exit(struct vcpu_vmx *vmx)
 	if (pt_mode == PT_MODE_SYSTEM)
 		return;
 
-	if (vmx->pt_desc.guest_ctx.rtit_ctl & RTIT_CTL_TRACEEN) {
-		pt_save_msr(&vmx->pt_desc.guest_ctx, vmx->pt_desc.addr_range);
-		pt_load_msr(&vmx->pt_desc.host_ctx, vmx->pt_desc.addr_range);
+	if (vmx->pt_desc->guest_ctx->rtit_ctl & RTIT_CTL_TRACEEN) {
+		pt_save_msr(vmx->pt_desc->guest_ctx, vmx->pt_desc->addr_range);
+		pt_load_msr(vmx->pt_desc->host_ctx, vmx->pt_desc->addr_range);
 	}
 
 	/* Reload host state (IA32_RTIT_CTL will be cleared on VM exit). */
-	wrmsrl(MSR_IA32_RTIT_CTL, vmx->pt_desc.host_ctx.rtit_ctl);
+	wrmsrl(MSR_IA32_RTIT_CTL, vmx->pt_desc->host_ctx->rtit_ctl);
+}
+
+static int pt_init(struct vcpu_vmx *vmx)
+{
+	u32 pt_state_sz = sizeof(struct pt_state) + sizeof(u64) *
+		intel_pt_validate_hw_cap(PT_CAP_num_address_ranges) * 2;
+
+	vmx->pt_desc = kzalloc(sizeof(struct pt_desc) + pt_state_sz * 2,
+				GFP_KERNEL_ACCOUNT);
+	if (!vmx->pt_desc)
+		return -ENOMEM;
+
+	vmx->pt_desc->host_ctx = (struct pt_state *)(vmx->pt_desc + 1);
+	vmx->pt_desc->guest_ctx = (void *)vmx->pt_desc->host_ctx + pt_state_sz;
+
+	return 0;
+}
+
+static void pt_uninit(struct vcpu_vmx *vmx)
+{
+	if (pt_mode == PT_MODE_HOST_GUEST)
+		kfree(vmx->pt_desc);
 }
 
 void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
@@ -1391,15 +1413,16 @@ static int vmx_rtit_ctl_check(struct kvm_vcpu *vcpu, u64 data)
 	 * Any MSR write that attempts to change bits marked reserved will
 	 * cause a #GP fault.
 	 */
-	if (data & vmx->pt_desc.ctl_bitmask)
+	if (data & vmx->pt_desc->ctl_bitmask)
 		return 1;
 
 	/*
 	 * Any attempt to modify IA32_RTIT_CTL while TraceEn is set will
 	 * result in a #GP unless the same write also clears TraceEn.
 	 */
-	if ((vmx->pt_desc.guest_ctx.rtit_ctl & RTIT_CTL_TRACEEN) &&
-		((vmx->pt_desc.guest_ctx.rtit_ctl ^ data) & ~RTIT_CTL_TRACEEN))
+	if ((vmx->pt_desc->guest_ctx->rtit_ctl & RTIT_CTL_TRACEEN) &&
+		((vmx->pt_desc->guest_ctx->rtit_ctl ^ data) &
+		~RTIT_CTL_TRACEEN))
 		return 1;
 
 	/*
@@ -1409,7 +1432,7 @@ static int vmx_rtit_ctl_check(struct kvm_vcpu *vcpu, u64 data)
 	 */
 	if ((data & RTIT_CTL_TRACEEN) && !(data & RTIT_CTL_TOPA) &&
 		!(data & RTIT_CTL_FABRIC_EN) &&
-		!intel_pt_validate_cap(vmx->pt_desc.caps,
+		!intel_pt_validate_cap(vmx->pt_desc->caps,
 				PT_CAP_single_range_output))
 		return 1;
 
@@ -1417,19 +1440,19 @@ static int vmx_rtit_ctl_check(struct kvm_vcpu *vcpu, u64 data)
 	 * MTCFreq, CycThresh and PSBFreq encodings check, any MSR write that
 	 * utilize encodings marked reserved will cause a #GP fault.
 	 */
-	value = intel_pt_validate_cap(vmx->pt_desc.caps, PT_CAP_mtc_periods);
-	if (intel_pt_validate_cap(vmx->pt_desc.caps, PT_CAP_mtc) &&
+	value = intel_pt_validate_cap(vmx->pt_desc->caps, PT_CAP_mtc_periods);
+	if (intel_pt_validate_cap(vmx->pt_desc->caps, PT_CAP_mtc) &&
 		!test_bit((data & RTIT_CTL_MTC_RANGE) >>
 				RTIT_CTL_MTC_RANGE_OFFSET, &value))
 		return 1;
-	value = intel_pt_validate_cap(vmx->pt_desc.caps,
+	value = intel_pt_validate_cap(vmx->pt_desc->caps,
 						PT_CAP_cycle_thresholds);
-	if (intel_pt_validate_cap(vmx->pt_desc.caps, PT_CAP_psb_cyc) &&
+	if (intel_pt_validate_cap(vmx->pt_desc->caps, PT_CAP_psb_cyc) &&
 		!test_bit((data & RTIT_CTL_CYC_THRESH) >>
 				RTIT_CTL_CYC_THRESH_OFFSET, &value))
 		return 1;
-	value = intel_pt_validate_cap(vmx->pt_desc.caps, PT_CAP_psb_periods);
-	if (intel_pt_validate_cap(vmx->pt_desc.caps, PT_CAP_psb_cyc) &&
+	value = intel_pt_validate_cap(vmx->pt_desc->caps, PT_CAP_psb_periods);
+	if (intel_pt_validate_cap(vmx->pt_desc->caps, PT_CAP_psb_cyc) &&
 		!test_bit((data & RTIT_CTL_PSB_FREQ) >>
 				RTIT_CTL_PSB_FREQ_OFFSET, &value))
 		return 1;
@@ -1439,16 +1462,16 @@ static int vmx_rtit_ctl_check(struct kvm_vcpu *vcpu, u64 data)
 	 * cause a #GP fault.
 	 */
 	value = (data & RTIT_CTL_ADDR0) >> RTIT_CTL_ADDR0_OFFSET;
-	if ((value && (vmx->pt_desc.addr_range < 1)) || (value > 2))
+	if ((value && (vmx->pt_desc->addr_range < 1)) || (value > 2))
 		return 1;
 	value = (data & RTIT_CTL_ADDR1) >> RTIT_CTL_ADDR1_OFFSET;
-	if ((value && (vmx->pt_desc.addr_range < 2)) || (value > 2))
+	if ((value && (vmx->pt_desc->addr_range < 2)) || (value > 2))
 		return 1;
 	value = (data & RTIT_CTL_ADDR2) >> RTIT_CTL_ADDR2_OFFSET;
-	if ((value && (vmx->pt_desc.addr_range < 3)) || (value > 2))
+	if ((value && (vmx->pt_desc->addr_range < 3)) || (value > 2))
 		return 1;
 	value = (data & RTIT_CTL_ADDR3) >> RTIT_CTL_ADDR3_OFFSET;
-	if ((value && (vmx->pt_desc.addr_range < 4)) || (value > 2))
+	if ((value && (vmx->pt_desc->addr_range < 4)) || (value > 2))
 		return 1;
 
 	return 0;
@@ -1721,45 +1744,46 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_RTIT_CTL:
 		if (pt_mode != PT_MODE_HOST_GUEST)
 			return 1;
-		msr_info->data = vmx->pt_desc.guest_ctx.rtit_ctl;
+		msr_info->data = vmx->pt_desc->guest_ctx->rtit_ctl;
 		break;
 	case MSR_IA32_RTIT_STATUS:
 		if (pt_mode != PT_MODE_HOST_GUEST)
 			return 1;
-		msr_info->data = vmx->pt_desc.guest_ctx.rtit_status;
+		msr_info->data = vmx->pt_desc->guest_ctx->rtit_status;
 		break;
 	case MSR_IA32_RTIT_CR3_MATCH:
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
-			!intel_pt_validate_cap(vmx->pt_desc.caps,
+			!intel_pt_validate_cap(vmx->pt_desc->caps,
 						PT_CAP_cr3_filtering))
 			return 1;
-		msr_info->data = vmx->pt_desc.guest_ctx.rtit_cr3_match;
+		msr_info->data = vmx->pt_desc->guest_ctx->rtit_cr3_match;
 		break;
 	case MSR_IA32_RTIT_OUTPUT_BASE:
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
-			(!intel_pt_validate_cap(vmx->pt_desc.caps,
+			(!intel_pt_validate_cap(vmx->pt_desc->caps,
 					PT_CAP_topa_output) &&
-			 !intel_pt_validate_cap(vmx->pt_desc.caps,
+			 !intel_pt_validate_cap(vmx->pt_desc->caps,
 					PT_CAP_single_range_output)))
 			return 1;
-		msr_info->data = vmx->pt_desc.guest_ctx.rtit_output_base;
+		msr_info->data = vmx->pt_desc->guest_ctx->rtit_output_base;
 		break;
 	case MSR_IA32_RTIT_OUTPUT_MASK:
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
-			(!intel_pt_validate_cap(vmx->pt_desc.caps,
+			(!intel_pt_validate_cap(vmx->pt_desc->caps,
 					PT_CAP_topa_output) &&
-			 !intel_pt_validate_cap(vmx->pt_desc.caps,
+			 !intel_pt_validate_cap(vmx->pt_desc->caps,
 					PT_CAP_single_range_output)))
 			return 1;
-		msr_info->data = vmx->pt_desc.guest_ctx.rtit_output_mask;
+		msr_info->data =
+			vmx->pt_desc->guest_ctx->rtit_output_mask | 0x7f;
 		break;
 	case MSR_IA32_RTIT_ADDR0_A ... MSR_IA32_RTIT_ADDR3_B:
 		index = msr_info->index - MSR_IA32_RTIT_ADDR0_A;
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
-			(index >= 2 * intel_pt_validate_cap(vmx->pt_desc.caps,
+			(index >= 2 * intel_pt_validate_cap(vmx->pt_desc->caps,
 					PT_CAP_num_address_ranges)))
 			return 1;
-		msr_info->data = vmx->pt_desc.guest_ctx.rtit_addrx_ab[index];
+		msr_info->data = vmx->pt_desc->guest_ctx->rtit_addrx_ab[index];
 		break;
 	case MSR_TSC_AUX:
 		if (!msr_info->host_initiated &&
@@ -1946,53 +1970,58 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			vmx->nested.vmxon)
 			return 1;
 		vmcs_write64(GUEST_IA32_RTIT_CTL, data);
-		vmx->pt_desc.guest_ctx.rtit_ctl = data;
+		vmx->pt_desc->guest_ctx->rtit_ctl = data;
 		pt_update_intercept_for_msr(vmx);
 		break;
 	case MSR_IA32_RTIT_STATUS:
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
-			(vmx->pt_desc.guest_ctx.rtit_ctl & RTIT_CTL_TRACEEN) ||
+			(vmx->pt_desc->guest_ctx->rtit_ctl &
+				RTIT_CTL_TRACEEN) ||
 			(data & MSR_IA32_RTIT_STATUS_MASK))
 			return 1;
-		vmx->pt_desc.guest_ctx.rtit_status = data;
+		vmx->pt_desc->guest_ctx->rtit_status = data;
 		break;
 	case MSR_IA32_RTIT_CR3_MATCH:
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
-			(vmx->pt_desc.guest_ctx.rtit_ctl & RTIT_CTL_TRACEEN) ||
-			!intel_pt_validate_cap(vmx->pt_desc.caps,
+			(vmx->pt_desc->guest_ctx->rtit_ctl &
+				RTIT_CTL_TRACEEN) ||
+			!intel_pt_validate_cap(vmx->pt_desc->caps,
 						PT_CAP_cr3_filtering))
 			return 1;
-		vmx->pt_desc.guest_ctx.rtit_cr3_match = data;
+		vmx->pt_desc->guest_ctx->rtit_cr3_match = data;
 		break;
 	case MSR_IA32_RTIT_OUTPUT_BASE:
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
-			(vmx->pt_desc.guest_ctx.rtit_ctl & RTIT_CTL_TRACEEN) ||
-			(!intel_pt_validate_cap(vmx->pt_desc.caps,
+			(vmx->pt_desc->guest_ctx->rtit_ctl &
+				RTIT_CTL_TRACEEN) ||
+			(!intel_pt_validate_cap(vmx->pt_desc->caps,
 					PT_CAP_topa_output) &&
-			 !intel_pt_validate_cap(vmx->pt_desc.caps,
+			 !intel_pt_validate_cap(vmx->pt_desc->caps,
 					PT_CAP_single_range_output)) ||
 			(data & MSR_IA32_RTIT_OUTPUT_BASE_MASK))
 			return 1;
-		vmx->pt_desc.guest_ctx.rtit_output_base = data;
+		vmx->pt_desc->guest_ctx->rtit_output_base = data;
 		break;
 	case MSR_IA32_RTIT_OUTPUT_MASK:
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
-			(vmx->pt_desc.guest_ctx.rtit_ctl & RTIT_CTL_TRACEEN) ||
-			(!intel_pt_validate_cap(vmx->pt_desc.caps,
+			(vmx->pt_desc->guest_ctx->rtit_ctl &
+				RTIT_CTL_TRACEEN) ||
+			(!intel_pt_validate_cap(vmx->pt_desc->caps,
 					PT_CAP_topa_output) &&
-			 !intel_pt_validate_cap(vmx->pt_desc.caps,
+			 !intel_pt_validate_cap(vmx->pt_desc->caps,
 					PT_CAP_single_range_output)))
 			return 1;
-		vmx->pt_desc.guest_ctx.rtit_output_mask = data;
+		vmx->pt_desc->guest_ctx->rtit_output_mask = data;
 		break;
 	case MSR_IA32_RTIT_ADDR0_A ... MSR_IA32_RTIT_ADDR3_B:
 		index = msr_info->index - MSR_IA32_RTIT_ADDR0_A;
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
-			(vmx->pt_desc.guest_ctx.rtit_ctl & RTIT_CTL_TRACEEN) ||
-			(index >= 2 * intel_pt_validate_cap(vmx->pt_desc.caps,
+			(vmx->pt_desc->guest_ctx->rtit_ctl &
+				RTIT_CTL_TRACEEN) ||
+			(index >= 2 * intel_pt_validate_cap(vmx->pt_desc->caps,
 					PT_CAP_num_address_ranges)))
 			return 1;
-		vmx->pt_desc.guest_ctx.rtit_addrx_ab[index] = data;
+		vmx->pt_desc->guest_ctx->rtit_addrx_ab[index] = data;
 		break;
 	case MSR_TSC_AUX:
 		if (!msr_info->host_initiated &&
@@ -3581,7 +3610,7 @@ void vmx_update_msr_bitmap(struct kvm_vcpu *vcpu)
 void pt_update_intercept_for_msr(struct vcpu_vmx *vmx)
 {
 	unsigned long *msr_bitmap = vmx->vmcs01.msr_bitmap;
-	bool flag = !(vmx->pt_desc.guest_ctx.rtit_ctl & RTIT_CTL_TRACEEN);
+	bool flag = !(vmx->pt_desc->guest_ctx->rtit_ctl & RTIT_CTL_TRACEEN);
 	u32 i;
 
 	vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_RTIT_STATUS,
@@ -3592,12 +3621,9 @@ void pt_update_intercept_for_msr(struct vcpu_vmx *vmx)
 			MSR_TYPE_RW, flag);
 	vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_RTIT_CR3_MATCH,
 			MSR_TYPE_RW, flag);
-	for (i = 0; i < vmx->pt_desc.addr_range; i++) {
-		vmx_set_intercept_for_msr(msr_bitmap,
-			MSR_IA32_RTIT_ADDR0_A + i * 2, MSR_TYPE_RW, flag);
+	for (i = 0; i < vmx->pt_desc->addr_range * 2; i++)
 		vmx_set_intercept_for_msr(msr_bitmap,
-			MSR_IA32_RTIT_ADDR0_B + i * 2, MSR_TYPE_RW, flag);
-	}
+			MSR_IA32_RTIT_ADDR0_A + i, MSR_TYPE_RW, flag);
 }
 
 static bool vmx_get_enable_apicv(struct kvm_vcpu *vcpu)
@@ -4092,12 +4118,8 @@ static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
 	if (cpu_has_vmx_encls_vmexit())
 		vmcs_write64(ENCLS_EXITING_BITMAP, -1ull);
 
-	if (pt_mode == PT_MODE_HOST_GUEST) {
-		memset(&vmx->pt_desc, 0, sizeof(vmx->pt_desc));
-		/* Bit[6~0] are forced to 1, writes are ignored. */
-		vmx->pt_desc.guest_ctx.rtit_output_mask = 0x7F;
+	if (pt_mode == PT_MODE_HOST_GUEST)
 		vmcs_write64(GUEST_IA32_RTIT_CTL, 0);
-	}
 }
 
 static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
@@ -6544,6 +6566,8 @@ static void vmx_free_vcpu(struct kvm_vcpu *vcpu)
 	if (enable_pml)
 		vmx_destroy_pml_buffer(vmx);
+	if (pt_mode == PT_MODE_HOST_GUEST)
+		pt_uninit(vmx);
 	free_vpid(vmx->vpid);
 	nested_vmx_free_vcpu(vcpu);
 	free_loaded_vmcs(vmx->loaded_vmcs);
@@ -6592,12 +6616,18 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 		goto uninit_vcpu;
 	}
 
+	if (pt_mode == PT_MODE_HOST_GUEST) {
+		err = pt_init(vmx);
+		if (err)
+			goto free_pml;
+	}
+
 	vmx->guest_msrs = kmalloc(PAGE_SIZE, GFP_KERNEL_ACCOUNT);
 	BUILD_BUG_ON(ARRAY_SIZE(vmx_msr_index) * sizeof(vmx->guest_msrs[0])
 		     > PAGE_SIZE);
 
 	if (!vmx->guest_msrs)
-		goto free_pml;
+		goto free_pt;
 
 	err = alloc_loaded_vmcs(&vmx->vmcs01);
 	if (err < 0)
@@ -6659,6 +6689,8 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 	free_loaded_vmcs(vmx->loaded_vmcs);
 free_msrs:
 	kfree(vmx->guest_msrs);
+free_pt:
+	pt_uninit(vmx);
 free_pml:
 	vmx_destroy_pml_buffer(vmx);
 uninit_vcpu:
@@ -6866,63 +6898,63 @@ static void update_intel_pt_cfg(struct kvm_vcpu *vcpu)
 		best = kvm_find_cpuid_entry(vcpu, 0x14, i);
 		if (!best)
 			return;
-		vmx->pt_desc.caps[CPUID_EAX + i*PT_CPUID_REGS_NUM] = best->eax;
-		vmx->pt_desc.caps[CPUID_EBX + i*PT_CPUID_REGS_NUM] = best->ebx;
-		vmx->pt_desc.caps[CPUID_ECX + i*PT_CPUID_REGS_NUM] = best->ecx;
-		vmx->pt_desc.caps[CPUID_EDX + i*PT_CPUID_REGS_NUM] = best->edx;
+		vmx->pt_desc->caps[CPUID_EAX + i*PT_CPUID_REGS_NUM] = best->eax;
+		vmx->pt_desc->caps[CPUID_EBX + i*PT_CPUID_REGS_NUM] = best->ebx;
+		vmx->pt_desc->caps[CPUID_ECX + i*PT_CPUID_REGS_NUM] = best->ecx;
+		vmx->pt_desc->caps[CPUID_EDX + i*PT_CPUID_REGS_NUM] = best->edx;
 	}
 
 	/* Get the number of configurable Address Ranges for filtering */
-	vmx->pt_desc.addr_range = intel_pt_validate_cap(vmx->pt_desc.caps,
+	vmx->pt_desc->addr_range = intel_pt_validate_cap(vmx->pt_desc->caps,
 						PT_CAP_num_address_ranges);
 
 	/* Initialize and clear the no dependency bits */
-	vmx->pt_desc.ctl_bitmask = ~(RTIT_CTL_TRACEEN | RTIT_CTL_OS |
+	vmx->pt_desc->ctl_bitmask = ~(RTIT_CTL_TRACEEN | RTIT_CTL_OS |
 			RTIT_CTL_USR | RTIT_CTL_TSC_EN | RTIT_CTL_DISRETC);
 
 	/*
 	 * If CPUID.(EAX=14H,ECX=0):EBX[0]=1 CR3Filter can be set otherwise
 	 * will inject an #GP
 	 */
-	if (intel_pt_validate_cap(vmx->pt_desc.caps, PT_CAP_cr3_filtering))
-		vmx->pt_desc.ctl_bitmask &= ~RTIT_CTL_CR3EN;
+	if (intel_pt_validate_cap(vmx->pt_desc->caps, PT_CAP_cr3_filtering))
+		vmx->pt_desc->ctl_bitmask &= ~RTIT_CTL_CR3EN;
 
 	/*
 	 * If CPUID.(EAX=14H,ECX=0):EBX[1]=1 CYCEn, CycThresh and
 	 * PSBFreq can be set
 	 */
-	if (intel_pt_validate_cap(vmx->pt_desc.caps, PT_CAP_psb_cyc))
-		vmx->pt_desc.ctl_bitmask &= ~(RTIT_CTL_CYCLEACC |
+	if (intel_pt_validate_cap(vmx->pt_desc->caps, PT_CAP_psb_cyc))
+		vmx->pt_desc->ctl_bitmask &= ~(RTIT_CTL_CYCLEACC |
 				RTIT_CTL_CYC_THRESH | RTIT_CTL_PSB_FREQ);
 
 	/*
 	 * If CPUID.(EAX=14H,ECX=0):EBX[3]=1 MTCEn BranchEn and
 	 * MTCFreq can be set
 	 */
-	if (intel_pt_validate_cap(vmx->pt_desc.caps, PT_CAP_mtc))
-		vmx->pt_desc.ctl_bitmask &= ~(RTIT_CTL_MTC_EN |
+	if (intel_pt_validate_cap(vmx->pt_desc->caps, PT_CAP_mtc))
+		vmx->pt_desc->ctl_bitmask &= ~(RTIT_CTL_MTC_EN |
				RTIT_CTL_BRANCH_EN | RTIT_CTL_MTC_RANGE);
 
 	/* If CPUID.(EAX=14H,ECX=0):EBX[4]=1 FUPonPTW and PTWEn can be set */
-	if (intel_pt_validate_cap(vmx->pt_desc.caps, PT_CAP_ptwrite))
-		vmx->pt_desc.ctl_bitmask &= ~(RTIT_CTL_FUP_ON_PTW |
+	if (intel_pt_validate_cap(vmx->pt_desc->caps, PT_CAP_ptwrite))
+		vmx->pt_desc->ctl_bitmask &= ~(RTIT_CTL_FUP_ON_PTW |
						RTIT_CTL_PTW_EN);
 
 	/* If CPUID.(EAX=14H,ECX=0):EBX[5]=1 PwrEvEn can be set */
-	if (intel_pt_validate_cap(vmx->pt_desc.caps, PT_CAP_power_event_trace))
-		vmx->pt_desc.ctl_bitmask &= ~RTIT_CTL_PWR_EVT_EN;
+	if (intel_pt_validate_cap(vmx->pt_desc->caps, PT_CAP_power_event_trace))
+		vmx->pt_desc->ctl_bitmask &= ~RTIT_CTL_PWR_EVT_EN;
 
 	/* If CPUID.(EAX=14H,ECX=0):ECX[0]=1 ToPA can be set */
-	if (intel_pt_validate_cap(vmx->pt_desc.caps, PT_CAP_topa_output))
-		vmx->pt_desc.ctl_bitmask &= ~RTIT_CTL_TOPA;
+	if (intel_pt_validate_cap(vmx->pt_desc->caps, PT_CAP_topa_output))
+		vmx->pt_desc->ctl_bitmask &= ~RTIT_CTL_TOPA;
 	/* If CPUID.(EAX=14H,ECX=0):ECX[3]=1 FabricEn can be set */
-	if (intel_pt_validate_cap(vmx->pt_desc.caps, PT_CAP_output_subsys))
-		vmx->pt_desc.ctl_bitmask &= ~RTIT_CTL_FABRIC_EN;
+	if (intel_pt_validate_cap(vmx->pt_desc->caps, PT_CAP_output_subsys))
+		vmx->pt_desc->ctl_bitmask &= ~RTIT_CTL_FABRIC_EN;
 
 	/* unmask address range configure area */
-	for (i = 0; i < vmx->pt_desc.addr_range; i++)
-		vmx->pt_desc.ctl_bitmask &= ~(0xfULL << (32 + i * 4));
+	for (i = 0; i < vmx->pt_desc->addr_range; i++)
+		vmx->pt_desc->ctl_bitmask &= ~(0xfULL << (32 + i * 4));
 }
 
 static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 11ad856..283f69d 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -69,8 +69,8 @@ struct pt_desc {
 	u64 ctl_bitmask;
 	u32 addr_range;
 	u32 caps[PT_CPUID_REGS_NUM * PT_CPUID_LEAVES];
-	struct pt_state host_ctx;
-	struct pt_state guest_ctx;
+	struct pt_state *host_ctx;
+	struct pt_state *guest_ctx;
 };
 
 /*
@@ -259,7 +259,7 @@ struct vcpu_vmx {
 	u64 msr_ia32_feature_control_valid_bits;
 	u64 ept_pointer;
 
-	struct pt_desc pt_desc;
+	struct pt_desc *pt_desc;
 };
 
 enum ept_pointers_status {

From patchwork Thu May 16 08:25:12 2019
X-Patchwork-Id: 10945911
From: Luwei Kang <luwei.kang@intel.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, hpa@zytor.com,
    x86@kernel.org, pbonzini@redhat.com, rkrcmar@redhat.com,
    Luwei Kang <luwei.kang@intel.com>
Subject: [PATCH v1 4/6] KVM: VMX: Allocate XSAVE area for Intel PT configuration
Date: Thu, 16 May 2019 16:25:12 +0800
Message-Id: <1557995114-21629-5-git-send-email-luwei.kang@intel.com>
In-Reply-To: <1557995114-21629-1-git-send-email-luwei.kang@intel.com>
References: <1557995114-21629-1-git-send-email-luwei.kang@intel.com>

Allocate an XSAVE area for the host and guest Intel PT configuration
when Intel PT works in HOST_GUEST mode. The Intel PT configuration
state can then be saved by the XSAVES instruction and restored by
XRSTORS.

Signed-off-by: Luwei Kang <luwei.kang@intel.com>
---
 arch/x86/kvm/vmx/vmx.c | 25 ++++++++++++++++++++++++-
 arch/x86/kvm/vmx/vmx.h |  3 +++
 2 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 4595230..4691665 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1033,6 +1033,7 @@ static void pt_guest_exit(struct vcpu_vmx *vmx)
 
 static int pt_init(struct vcpu_vmx *vmx)
 {
+	unsigned int eax, ebx, ecx, edx;
 	u32 pt_state_sz = sizeof(struct pt_state) + sizeof(u64) *
 		intel_pt_validate_hw_cap(PT_CAP_num_address_ranges) * 2;
 
@@ -1044,13 +1045,35 @@ static int pt_init(struct vcpu_vmx *vmx)
 	vmx->pt_desc->host_ctx = (struct pt_state *)(vmx->pt_desc + 1);
 	vmx->pt_desc->guest_ctx = (void *)vmx->pt_desc->host_ctx + pt_state_sz;
 
+	cpuid_count(XSTATE_CPUID, 1, &eax, &ebx, &ecx, &edx);
+	if (ecx & XFEATURE_MASK_PT) {
+		vmx->pt_desc->host_xs = kmem_cache_zalloc(x86_fpu_cache,
+							GFP_KERNEL_ACCOUNT);
+		vmx->pt_desc->guest_xs = kmem_cache_zalloc(x86_fpu_cache,
+							GFP_KERNEL_ACCOUNT);
+		if (!vmx->pt_desc->host_xs || !vmx->pt_desc->guest_xs) {
+			if (vmx->pt_desc->host_xs)
+				kmem_cache_free(x86_fpu_cache,
+						vmx->pt_desc->host_xs);
+			if (vmx->pt_desc->guest_xs)
+				kmem_cache_free(x86_fpu_cache,
+						vmx->pt_desc->guest_xs);
+		} else
+			vmx->pt_desc->pt_xsave = true;
+	}
+
 	return 0;
 }
 
 static void pt_uninit(struct vcpu_vmx *vmx)
 {
-	if (pt_mode == PT_MODE_HOST_GUEST)
+	if (pt_mode == PT_MODE_HOST_GUEST) {
+		if (vmx->pt_desc->pt_xsave) {
+			kmem_cache_free(x86_fpu_cache, vmx->pt_desc->host_xs);
+			kmem_cache_free(x86_fpu_cache, vmx->pt_desc->guest_xs);
+		}
 		kfree(vmx->pt_desc);
+	}
 }
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 283f69d..e103991 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -69,8 +69,11 @@ struct pt_desc {
 	u64 ctl_bitmask;
 	u32 addr_range;
 	u32 caps[PT_CPUID_REGS_NUM * PT_CPUID_LEAVES];
+	bool pt_xsave;
 	struct pt_state *host_ctx;
 	struct pt_state *guest_ctx;
+	struct fpu *host_xs;
+	struct fpu *guest_xs;
 };
 
 /*
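[For reference, the gate used in pt_init() above can be read as a
standalone predicate (illustrative sketch; the patch open-codes the same
check inline):

	/*
	 * Sketch: CPUID.(EAX=0DH,ECX=1):ECX enumerates the supervisor state
	 * components supported in IA32_XSS; bit 8 (XFEATURE_MASK_PT) is
	 * Intel PT. Without it, XSAVES cannot manage the PT MSRs and the
	 * code falls back to manual rdmsrl()/wrmsrl() switching.
	 */
	static bool pt_supported_by_xss(void)
	{
		unsigned int eax, ebx, ecx, edx;

		cpuid_count(XSTATE_CPUID, 1, &eax, &ebx, &ecx, &edx);
		return ecx & XFEATURE_MASK_PT;
	}
]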
From patchwork Thu May 16 08:25:13 2019
X-Patchwork-Id: 10945909
From: Luwei Kang <luwei.kang@intel.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, hpa@zytor.com,
    x86@kernel.org, pbonzini@redhat.com, rkrcmar@redhat.com,
    Luwei Kang <luwei.kang@intel.com>
Subject: [PATCH v1 5/6] KVM: VMX: Intel PT configuration context switch using XSAVES/XRSTORS
Date: Thu, 16 May 2019 16:25:13 +0800
Message-Id: <1557995114-21629-6-git-send-email-luwei.kang@intel.com>
In-Reply-To: <1557995114-21629-1-git-send-email-luwei.kang@intel.com>
References: <1557995114-21629-1-git-send-email-luwei.kang@intel.com>

Add support for using XSAVES/XRSTORS to switch the Intel processor
trace context. Because the native driver does not set XSS[bit 8] to
enable PT state in the xsave area, set that bit only around the
XSAVES/XRSTORS instruction execution and restore the original value
afterwards.

The "initialized" flag needs to be cleared when PT changes from enabled
to disabled: the guest may modify PT MSRs while PT is disabled, in which
case they are only saved in the variables, and those values must be
reloaded into the hardware manually when PT is enabled again.

Signed-off-by: Luwei Kang <luwei.kang@intel.com>
---
 arch/x86/kvm/vmx/vmx.c | 80 ++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 65 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 4691665..d323e6b 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1002,33 +1002,83 @@ static inline void pt_save_msr(struct pt_state *ctx, u32 addr_range)
 
 static void pt_guest_enter(struct vcpu_vmx *vmx)
 {
+	struct pt_desc *desc;
+	int err;
+
 	if (pt_mode == PT_MODE_SYSTEM)
 		return;
 
-	/*
-	 * GUEST_IA32_RTIT_CTL is already set in the VMCS.
-	 * Save host state before VM entry.
-	 */
-	rdmsrl(MSR_IA32_RTIT_CTL, vmx->pt_desc->host_ctx->rtit_ctl);
-	if (vmx->pt_desc->guest_ctx->rtit_ctl & RTIT_CTL_TRACEEN) {
-		wrmsrl(MSR_IA32_RTIT_CTL, 0);
-		pt_save_msr(vmx->pt_desc->host_ctx, vmx->pt_desc->addr_range);
-		pt_load_msr(vmx->pt_desc->guest_ctx, vmx->pt_desc->addr_range);
+	desc = vmx->pt_desc;
+
+	rdmsrl(MSR_IA32_RTIT_CTL, desc->host_ctx->rtit_ctl);
+
+	if (desc->guest_ctx->rtit_ctl & RTIT_CTL_TRACEEN) {
+		if (likely(desc->pt_xsave)) {
+			wrmsrl(MSR_IA32_XSS, host_xss | XFEATURE_MASK_PT);
+			/*
+			 * The XSAVES instruction clears TraceEn after saving
+			 * the value of RTIT_CTL and before saving any other
+			 * PT state.
+			 */
+			XSTATE_XSAVE(&desc->host_xs->state.xsave,
+					XFEATURE_MASK_PT, 0, err);
+			/*
+			 * The guest PT state still needs to be loaded
+			 * manually if it has not been populated in the
+			 * xsave area yet.
+			 */
+			if (desc->guest_xs->initialized)
+				XSTATE_XRESTORE(&desc->guest_xs->state.xsave,
+						XFEATURE_MASK_PT, 0);
+			else
+				pt_load_msr(desc->guest_ctx, desc->addr_range);
+
+			wrmsrl(MSR_IA32_XSS, host_xss);
+		} else {
+			if (desc->host_ctx->rtit_ctl & RTIT_CTL_TRACEEN)
+				wrmsrl(MSR_IA32_RTIT_CTL, 0);
+
+			pt_save_msr(desc->host_ctx, desc->addr_range);
+			pt_load_msr(desc->guest_ctx, desc->addr_range);
+		}
 	}
 }
 
 static void pt_guest_exit(struct vcpu_vmx *vmx)
 {
+	struct pt_desc *desc;
+	int err;
+
 	if (pt_mode == PT_MODE_SYSTEM)
 		return;
 
-	if (vmx->pt_desc->guest_ctx->rtit_ctl & RTIT_CTL_TRACEEN) {
-		pt_save_msr(vmx->pt_desc->guest_ctx, vmx->pt_desc->addr_range);
-		pt_load_msr(vmx->pt_desc->host_ctx, vmx->pt_desc->addr_range);
-	}
+	desc = vmx->pt_desc;
+
+	if (desc->guest_ctx->rtit_ctl & RTIT_CTL_TRACEEN) {
+		if (likely(desc->pt_xsave)) {
+			wrmsrl(MSR_IA32_XSS, host_xss | XFEATURE_MASK_PT);
+			/*
+			 * Save guest state. TraceEn is 0 before and after
+			 * the XSAVES instruction because RTIT_CTL is cleared
+			 * on VM-exit (VM-exit control bit 25).
+			 */
+			XSTATE_XSAVE(&desc->guest_xs->state.xsave,
+					XFEATURE_MASK_PT, 0, err);
+			desc->guest_xs->initialized = 1;
+			/*
+			 * Restore host PT state; tracing may be enabled
+			 * again by this instruction if host PT was enabled
+			 * before VM-entry.
+			 */
+			XSTATE_XRESTORE(&desc->host_xs->state.xsave,
+					XFEATURE_MASK_PT, 0);
+			wrmsrl(MSR_IA32_XSS, host_xss);
+		} else {
+			pt_save_msr(desc->guest_ctx, desc->addr_range);
+			pt_load_msr(desc->host_ctx, desc->addr_range);
 
-	/* Reload host state (IA32_RTIT_CTL will be cleared on VM exit). */
-	wrmsrl(MSR_IA32_RTIT_CTL, vmx->pt_desc->host_ctx->rtit_ctl);
+			wrmsrl(MSR_IA32_RTIT_CTL, desc->host_ctx->rtit_ctl);
+		}
+	} else if (desc->host_ctx->rtit_ctl & RTIT_CTL_TRACEEN)
+		wrmsrl(MSR_IA32_RTIT_CTL, desc->host_ctx->rtit_ctl);
 }
 
 static int pt_init(struct vcpu_vmx *vmx)

From patchwork Thu May 16 08:25:14 2019
X-Patchwork-Id: 10945907
From: Luwei Kang <luwei.kang@intel.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, hpa@zytor.com,
    x86@kernel.org, pbonzini@redhat.com, rkrcmar@redhat.com,
    Luwei Kang <luwei.kang@intel.com>
Subject: [PATCH v1 6/6] KVM: VMX: Get PT state from xsave area to variables
Date: Thu, 16 May 2019 16:25:14 +0800
Message-Id: <1557995114-21629-7-git-send-email-luwei.kang@intel.com>
In-Reply-To: <1557995114-21629-1-git-send-email-luwei.kang@intel.com>
References: <1557995114-21629-1-git-send-email-luwei.kang@intel.com>

Copy the Intel PT state from the xsave area back to the variables when
PT changes from enabled to disabled. While Intel PT is enabled, the PT
state is saved/restored to/from the xsave area by the XSAVES/XRSTORS
instructions; a KVM guest may read the PT MSRs while PT is disabled,
but the up-to-date values then live in the xsave area, not in the
variables.
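[Illustrative arithmetic for the copy that the new pt_state_get() below
performs, with 'n' configurable address ranges: everything in struct
pt_state after rtit_ctl, plus the 2*n address-range registers:

	/* Sketch of the size computed by pt_state_get(). */
	size_t sz = sizeof(struct pt_state) - sizeof(u64)	/* skip rtit_ctl   */
		  + 2 * n * sizeof(u64);			/* ADDRn_A/B pairs */
]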
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
---
 arch/x86/kvm/vmx/vmx.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index d323e6b..d3e2569 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1000,6 +1000,16 @@ static inline void pt_save_msr(struct pt_state *ctx, u32 addr_range)
 		rdmsrl(MSR_IA32_RTIT_ADDR0_A + i, ctx->rtit_addrx_ab[i]);
 }
 
+static void pt_state_get(struct pt_state *ctx, struct fpu *fpu, u32 addr_range)
+{
+	char *buff = fpu->state.xsave.extended_state_area;
+
+	/* Skip the rtit_ctl register. */
+	memcpy(&ctx->rtit_output_base, buff + sizeof(u64),
+		sizeof(struct pt_state) - sizeof(u64) +
+		sizeof(u64) * addr_range * 2);
+}
+
 static void pt_guest_enter(struct vcpu_vmx *vmx)
 {
 	struct pt_desc *desc;
@@ -1040,6 +1050,9 @@ static void pt_guest_enter(struct vcpu_vmx *vmx)
 			pt_save_msr(desc->host_ctx, desc->addr_range);
 			pt_load_msr(desc->guest_ctx, desc->addr_range);
 		}
+	} else if (desc->pt_xsave && desc->guest_xs->initialized) {
+		pt_state_get(desc->guest_ctx, desc->guest_xs, desc->addr_range);
+		desc->guest_xs->initialized = 0;
 	}
 }
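[Taken together, the VM-entry half of the XSAVES-based switch from patch 5
reduces to the following sequence (an illustrative, simplified sketch;
error handling, the initialized check and the non-XSAVES fallback path
are omitted):

	wrmsrl(MSR_IA32_XSS, host_xss | XFEATURE_MASK_PT);	/* expose PT state to XSAVES */
	XSTATE_XSAVE(&desc->host_xs->state.xsave,
			XFEATURE_MASK_PT, 0, err);		/* save host PT state */
	XSTATE_XRESTORE(&desc->guest_xs->state.xsave,
			XFEATURE_MASK_PT, 0);			/* load guest PT state */
	wrmsrl(MSR_IA32_XSS, host_xss);				/* hide PT state again */

VM-exit runs the mirror image: XSAVES into guest_xs, then XRSTORS from
host_xs.]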