Message ID | 20230511040857.6094-10-weijiang.yang@intel.com (mailing list archive) |
---|---|
State | New, archived |
Headers | show |
Series | Enable CET Virtualization | expand |
On Thu, May 11, 2023, Yang Weijiang wrote:
> From: Sean Christopherson <sean.j.christopherson@intel.com>
> 
> Load the guest's FPU state if userspace is accessing MSRs whose values are
> managed by XSAVES. Two MSR access helpers, i.e., kvm_{get,set}_xsave_msr(),
> are introduced by a later patch to facilitate access to this kind of MSRs.
> 
> If new feature MSRs supported in XSS are passed through to the guest they
> are saved and restored by {XSAVES|XRSTORS} to/from guest's FPU state at
> vm-entry/exit.
> 
> Because the modified code is also used for the KVM_GET_MSRS device ioctl(),
> explicitly check @vcpu is non-null before attempting to load guest state.
> The XSS supporting MSRs cannot be retrieved via the device ioctl() without
> loading guest FPU state (which doesn't exist).
> 
> Note that guest_cpuid_has() is not queried as host userspace is allowed
> to access MSRs that have not been exposed to the guest, e.g. it might do
> KVM_SET_MSRS prior to KVM_SET_CPUID2.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Co-developed-by: Yang Weijiang <weijiang.yang@intel.com>
> Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
> ---
>  arch/x86/kvm/x86.c | 29 ++++++++++++++++++++++++++++-
>  1 file changed, 28 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index d2975ca96ac5..7788646bbf1f 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -130,6 +130,9 @@ static int __set_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2);
>  static void __get_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2);
>  
>  static DEFINE_MUTEX(vendor_module_lock);
> +static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu);
> +static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu);
> +
>  struct kvm_x86_ops kvm_x86_ops __read_mostly;
>  
>  #define KVM_X86_OP(func) \
> @@ -4336,6 +4339,21 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  }
>  EXPORT_SYMBOL_GPL(kvm_get_msr_common);
> 
> +static const u32 xsave_msrs[] = {

Can you change this to "xstate_msrs"?

> +	MSR_IA32_U_CET, MSR_IA32_PL3_SSP,
> +};
> +
> +static bool is_xsaves_msr(u32 index)

And then is_xstate_msr(). The intent is to check if an MSR is managed as part of
the xstate, not if the MSR is somehow related to XSAVE itself.
On 6/16/2023 7:50 AM, Sean Christopherson wrote:
> On Thu, May 11, 2023, Yang Weijiang wrote:
>> From: Sean Christopherson <sean.j.christopherson@intel.com>
>>
>> Load the guest's FPU state if userspace is accessing MSRs whose values are
>> managed by XSAVES. Two MSR access helpers, i.e., kvm_{get,set}_xsave_msr(),
>> are introduced by a later patch to facilitate access to this kind of MSRs.
>>
>> [...]
>>
>>  #define KVM_X86_OP(func) \
>> @@ -4336,6 +4339,21 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>>  }
>>  EXPORT_SYMBOL_GPL(kvm_get_msr_common);
>>
>> +static const u32 xsave_msrs[] = {
> Can you change this to "xstate_msrs"?

OK, will change it in next version.

>> +	MSR_IA32_U_CET, MSR_IA32_PL3_SSP,
>> +};
>> +
>> +static bool is_xsaves_msr(u32 index)
> And then is_xstate_msr(). The intent is to check if an MSR is managed as part of
> the xstate, not if the MSR is somehow related to XSAVE itself.

Makes sense, will change it. Thanks!
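A minimal userspace sketch of the helper with the renaming agreed above (xstate_msrs / is_xstate_msr). The MSR index values and the local typedef/ARRAY_SIZE definitions are stand-ins for the kernel's own headers (msr-index.h, kernel.h), included here only so the snippet is self-contained:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uint32_t u32;

/* Architectural CET MSR indices (normally from arch/x86/include/asm/msr-index.h). */
#define MSR_IA32_U_CET		0x6a0
#define MSR_IA32_PL3_SSP	0x6a7

#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

/* MSRs whose contents live in the guest's xstate buffer via XSAVES/XRSTORS. */
static const u32 xstate_msrs[] = {
	MSR_IA32_U_CET, MSR_IA32_PL3_SSP,
};

/* True if @index is managed as part of the xstate, per the review feedback. */
static bool is_xstate_msr(u32 index)
{
	size_t i;

	for (i = 0; i < ARRAY_SIZE(xstate_msrs); i++) {
		if (index == xstate_msrs[i])
			return true;
	}
	return false;
}
```

The point of the rename is that the predicate classifies MSRs by where their state is managed (the xstate buffer), not by any relation to the XSAVE instruction family itself.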
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d2975ca96ac5..7788646bbf1f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -130,6 +130,9 @@ static int __set_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2);
 static void __get_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2);
 
 static DEFINE_MUTEX(vendor_module_lock);
+static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu);
+static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu);
+
 struct kvm_x86_ops kvm_x86_ops __read_mostly;
 
 #define KVM_X86_OP(func) \
@@ -4336,6 +4339,21 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 }
 EXPORT_SYMBOL_GPL(kvm_get_msr_common);
 
+static const u32 xsave_msrs[] = {
+	MSR_IA32_U_CET, MSR_IA32_PL3_SSP,
+};
+
+static bool is_xsaves_msr(u32 index)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(xsave_msrs); i++) {
+		if (index == xsave_msrs[i])
+			return true;
+	}
+	return false;
+}
+
 /*
  * Read or write a bunch of msrs. All parameters are kernel addresses.
  *
@@ -4346,11 +4364,20 @@ static int __msr_io(struct kvm_vcpu *vcpu, struct kvm_msrs *msrs,
 		    int (*do_msr)(struct kvm_vcpu *vcpu,
 				  unsigned index, u64 *data))
 {
+	bool fpu_loaded = false;
 	int i;
 
-	for (i = 0; i < msrs->nmsrs; ++i)
+	for (i = 0; i < msrs->nmsrs; ++i) {
+		if (vcpu && !fpu_loaded && kvm_caps.supported_xss &&
+		    is_xsaves_msr(entries[i].index)) {
+			kvm_load_guest_fpu(vcpu);
+			fpu_loaded = true;
+		}
 		if (do_msr(vcpu, entries[i].index, &entries[i].data))
 			break;
+	}
+	if (fpu_loaded)
+		kvm_put_guest_fpu(vcpu);
 	return i;
 }
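The control flow the patch adds to __msr_io() — load the guest FPU lazily on the first xstate-managed MSR in the batch, and put it exactly once after the loop — can be modeled outside the kernel. Everything below except the load-once/put-once pattern itself is a hypothetical mock (the counters and the stand-in load/put functions replace kvm_load_guest_fpu()/kvm_put_guest_fpu()):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t u32;

#define MSR_IA32_U_CET	0x6a0	/* architectural index of IA32_U_CET */

/* Counters so the pattern is observable; not part of the kernel code. */
static int load_count, put_count;

/* Mock stand-ins for kvm_load_guest_fpu() / kvm_put_guest_fpu(). */
static void load_guest_fpu(void) { load_count++; }
static void put_guest_fpu(void)  { put_count++; }

static bool is_xstate_msr(u32 index)
{
	return index == MSR_IA32_U_CET;	/* abbreviated list for the sketch */
}

/*
 * Model of the __msr_io() loop: the FPU is loaded at most once, only if
 * some MSR in the batch is xstate-managed, and put once after the loop.
 */
static int msr_io(const u32 *indices, int nmsrs)
{
	bool fpu_loaded = false;
	int i;

	for (i = 0; i < nmsrs; i++) {
		if (!fpu_loaded && is_xstate_msr(indices[i])) {
			load_guest_fpu();
			fpu_loaded = true;
		}
		/* do_msr(vcpu, indices[i], &data) would run here */
	}
	if (fpu_loaded)
		put_guest_fpu();
	return i;
}
```

A batch containing several xstate-managed MSRs still triggers exactly one load and one put, which is the reason for the fpu_loaded flag rather than a per-MSR load/put pair.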