Message ID | 1574101067-5638-6-git-send-email-pbonzini@redhat.com (mailing list archive)
---|---
State | New, archived
Series | KVM: vmx: implement MSR_IA32_TSX_CTRL for guests
On Mon, Nov 18, 2019 at 07:17:47PM +0100, Paolo Bonzini wrote:
> If X86_FEATURE_RTM is disabled, the guest should not be able to access
> MSR_IA32_TSX_CTRL.  We can therefore use it in KVM to force all
> transactions from the guest to abort.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

So, without this patch guest OSes will incorrectly report "Not
affected" at /sys/devices/system/cpu/vulnerabilities/tsx_async_abort
if RTM is disabled in the VM configuration.

Is there anything host userspace can do to detect this situation
and issue a warning in that case?

Is there anything the guest kernel can do to detect this and not
report a false negative at /sys/.../tsx_async_abort?
On 21/11/19 03:22, Eduardo Habkost wrote:
> On Mon, Nov 18, 2019 at 07:17:47PM +0100, Paolo Bonzini wrote:
>> If X86_FEATURE_RTM is disabled, the guest should not be able to access
>> MSR_IA32_TSX_CTRL.  We can therefore use it in KVM to force all
>> transactions from the guest to abort.
>>
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>
> So, without this patch guest OSes will incorrectly report "Not
> affected" at /sys/devices/system/cpu/vulnerabilities/tsx_async_abort
> if RTM is disabled in the VM configuration.
>
> Is there anything host userspace can do to detect this situation
> and issue a warning in that case?
>
> Is there anything the guest kernel can do to detect this and not
> report a false negative at /sys/.../tsx_async_abort?

Unfortunately not.  The hypervisor needs to know about TAA in order to
mitigate it on behalf of the guest.  At least this doesn't require an
updated userspace and VM configuration!

Paolo
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ed25fe7d5234..8cba65eec0d3 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -639,6 +639,23 @@ struct shared_msr_entry *find_msr_entry(struct vcpu_vmx *vmx, u32 msr)
 	return NULL;
 }
 
+static int vmx_set_guest_msr(struct vcpu_vmx *vmx, struct shared_msr_entry *msr, u64 data)
+{
+	int ret = 0;
+
+	u64 old_msr_data = msr->data;
+	msr->data = data;
+	if (msr - vmx->guest_msrs < vmx->save_nmsrs) {
+		preempt_disable();
+		ret = kvm_set_shared_msr(msr->index, msr->data,
+					 msr->mask);
+		preempt_enable();
+		if (ret)
+			msr->data = old_msr_data;
+	}
+	return ret;
+}
+
 void loaded_vmcs_init(struct loaded_vmcs *loaded_vmcs)
 {
 	vmcs_clear(loaded_vmcs->vmcs);
@@ -2174,20 +2191,10 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	default:
 	find_shared_msr:
 		msr = find_msr_entry(vmx, msr_index);
-		if (msr) {
-			u64 old_msr_data = msr->data;
-			msr->data = data;
-			if (msr - vmx->guest_msrs < vmx->save_nmsrs) {
-				preempt_disable();
-				ret = kvm_set_shared_msr(msr->index, msr->data,
-							 msr->mask);
-				preempt_enable();
-				if (ret)
-					msr->data = old_msr_data;
-			}
-			break;
-		}
-		ret = kvm_set_msr_common(vcpu, msr_info);
+		if (msr)
+			ret = vmx_set_guest_msr(vmx, msr, data);
+		else
+			ret = kvm_set_msr_common(vcpu, msr_info);
 	}
 
 	return ret;
@@ -7138,6 +7145,15 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
 	if (boot_cpu_has(X86_FEATURE_INTEL_PT) &&
 			guest_cpuid_has(vcpu, X86_FEATURE_INTEL_PT))
 		update_intel_pt_cfg(vcpu);
+
+	if (boot_cpu_has(X86_FEATURE_RTM)) {
+		struct shared_msr_entry *msr;
+		msr = find_msr_entry(vmx, MSR_IA32_TSX_CTRL);
+		if (msr) {
+			bool enabled = guest_cpuid_has(vcpu, X86_FEATURE_RTM);
+			vmx_set_guest_msr(vmx, msr, enabled ? 0 : TSX_CTRL_RTM_DISABLE);
+		}
+	}
 }
 
 static void vmx_set_supported_cpuid(u32 func, struct kvm_cpuid_entry2 *entry)
If X86_FEATURE_RTM is disabled, the guest should not be able to access
MSR_IA32_TSX_CTRL.  We can therefore use it in KVM to force all
transactions from the guest to abort.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/vmx.c | 44 ++++++++++++++++++++++++++++++--------------
 1 file changed, 30 insertions(+), 14 deletions(-)