| Message ID | 20250307212053.2948340-10-pbonzini@redhat.com (mailing list archive) |
|---|---|
| State | New |
| Series | KVM: TDX: TD vcpu enter/exit |
On 3/8/2025 5:20 AM, Paolo Bonzini wrote:
> From: Adrian Hunter <adrian.hunter@intel.com>
>
> Save the IA32_DEBUGCTL MSR before entering a TDX VCPU and restore it
> afterwards. The TDX Module preserves bits 1, 12, and 14, so if no
> other bits are set, no restore is done.

Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>

> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> Message-ID: <20250129095902.16391-12-adrian.hunter@intel.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/vmx/tdx.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> index 5625b0801ce8..25972e12504b 100644
> --- a/arch/x86/kvm/vmx/tdx.c
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -683,6 +683,8 @@ void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
>  	else
>  		vt->msr_host_kernel_gs_base = read_msr(MSR_KERNEL_GS_BASE);
>
> +	vt->host_debugctlmsr = get_debugctlmsr();
> +
>  	vt->guest_state_loaded = true;
>  }
>
> @@ -826,11 +828,15 @@ static void tdx_load_host_xsave_state(struct kvm_vcpu *vcpu)
>  	if (kvm_host.xss != (kvm_tdx->xfam & kvm_caps.supported_xss))
>  		wrmsrl(MSR_IA32_XSS, kvm_host.xss);
>  }
> -EXPORT_SYMBOL_GPL(kvm_load_host_xsave_state);

This needs to be cleaned up in patch 05.

> +
> +#define TDX_DEBUGCTL_PRESERVED (DEBUGCTLMSR_BTF | \
> +				DEBUGCTLMSR_FREEZE_PERFMON_ON_PMI | \
> +				DEBUGCTLMSR_FREEZE_IN_SMM)
>
>  fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
>  {
>  	struct vcpu_tdx *tdx = to_tdx(vcpu);
> +	struct vcpu_vt *vt = to_vt(vcpu);
>
>  	/*
>  	 * force_immediate_exit requires vCPU entering for events injection with
> @@ -846,6 +852,9 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
>
>  	tdx_vcpu_enter_exit(vcpu);
>
> +	if (vt->host_debugctlmsr & ~TDX_DEBUGCTL_PRESERVED)
> +		update_debugctlmsr(vt->host_debugctlmsr);
> +
>  	tdx_load_host_xsave_state(vcpu);
>  	tdx->guest_entered = true;
```diff
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 5625b0801ce8..25972e12504b 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -683,6 +683,8 @@ void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
 	else
 		vt->msr_host_kernel_gs_base = read_msr(MSR_KERNEL_GS_BASE);
 
+	vt->host_debugctlmsr = get_debugctlmsr();
+
 	vt->guest_state_loaded = true;
 }
 
@@ -826,11 +828,15 @@ static void tdx_load_host_xsave_state(struct kvm_vcpu *vcpu)
 	if (kvm_host.xss != (kvm_tdx->xfam & kvm_caps.supported_xss))
 		wrmsrl(MSR_IA32_XSS, kvm_host.xss);
 }
-EXPORT_SYMBOL_GPL(kvm_load_host_xsave_state);
+
+#define TDX_DEBUGCTL_PRESERVED (DEBUGCTLMSR_BTF | \
+				DEBUGCTLMSR_FREEZE_PERFMON_ON_PMI | \
+				DEBUGCTLMSR_FREEZE_IN_SMM)
 
 fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
 {
 	struct vcpu_tdx *tdx = to_tdx(vcpu);
+	struct vcpu_vt *vt = to_vt(vcpu);
 
 	/*
 	 * force_immediate_exit requires vCPU entering for events injection with
@@ -846,6 +852,9 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
 
 	tdx_vcpu_enter_exit(vcpu);
 
+	if (vt->host_debugctlmsr & ~TDX_DEBUGCTL_PRESERVED)
+		update_debugctlmsr(vt->host_debugctlmsr);
+
 	tdx_load_host_xsave_state(vcpu);
 	tdx->guest_entered = true;
```
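Not from the thread: below is a minimal user-space sketch of the preserved-bit check this patch adds to tdx_vcpu_run(). The DEBUGCTLMSR_* bit positions (1, 12, 14) are assumed to match arch/x86/include/asm/msr-index.h, and restore_host_debugctl()/printf are hypothetical stand-ins for the real update_debugctlmsr() WRMSR path.

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed bit positions, mirroring arch/x86/include/asm/msr-index.h. */
#define DEBUGCTLMSR_BTF                   (1ULL << 1)
#define DEBUGCTLMSR_FREEZE_PERFMON_ON_PMI (1ULL << 12)
#define DEBUGCTLMSR_FREEZE_IN_SMM         (1ULL << 14)

/* Bits the TDX Module preserves across TD entry/exit (bits 1, 12, 14). */
#define TDX_DEBUGCTL_PRESERVED (DEBUGCTLMSR_BTF | \
				DEBUGCTLMSR_FREEZE_PERFMON_ON_PMI | \
				DEBUGCTLMSR_FREEZE_IN_SMM)

/*
 * Mirror of the check added to tdx_vcpu_run(): write DEBUGCTL back only if
 * the saved host value has any bit the TDX Module does not preserve.
 */
static void restore_host_debugctl(uint64_t host_debugctl)
{
	if (host_debugctl & ~TDX_DEBUGCTL_PRESERVED)
		printf("restore DEBUGCTL to %#llx\n",
		       (unsigned long long)host_debugctl);
	else
		printf("%#llx has only preserved bits, skip the restore\n",
		       (unsigned long long)host_debugctl);
}

int main(void)
{
	/* BTF and FREEZE_IN_SMM are both preserved: no restore needed. */
	restore_host_debugctl(DEBUGCTLMSR_BTF | DEBUGCTLMSR_FREEZE_IN_SMM);
	/* Bit 0 (LBR) is not in the preserved set: restore is needed. */
	restore_host_debugctl((1ULL << 0) | DEBUGCTLMSR_FREEZE_IN_SMM);
	return 0;
}
```

The point of masking with ~TDX_DEBUGCTL_PRESERVED is that the MSR write after exit is skipped entirely when the host value consists only of bits the TDX Module already keeps intact, which is the common case described in the commit message.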