Message ID | 20230914063325.85503-17-weijiang.yang@intel.com (mailing list archive)
---|---
State | New, archived
Series | Enable CET Virtualization
On Thu, Sep 14, 2023 at 02:33:16AM -0400, Yang Weijiang wrote:
>Add CET MSRs to the list of MSRs reported to userspace if the feature,
>i.e. IBT or SHSTK, associated with the MSRs is supported by KVM.
>
>SSP can only be read via RDSSP. Writing even requires destructive and
>potentially faulting operations such as SAVEPREVSSP/RSTORSSP or
>SETSSBSY/CLRSSBSY. Let the host use a pseudo-MSR that is just a wrapper
>for the GUEST_SSP field of the VMCS.
>
>Suggested-by: Chao Gao <chao.gao@intel.com>
>Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
>---
> arch/x86/include/uapi/asm/kvm_para.h |  1 +
> arch/x86/kvm/vmx/vmx.c               |  2 ++
> arch/x86/kvm/x86.c                   | 18 ++++++++++++++++++
> 3 files changed, 21 insertions(+)
>
>diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
>index 6e64b27b2c1e..9864bbcf2470 100644
>--- a/arch/x86/include/uapi/asm/kvm_para.h
>+++ b/arch/x86/include/uapi/asm/kvm_para.h
>@@ -58,6 +58,7 @@
> #define MSR_KVM_ASYNC_PF_INT	0x4b564d06
> #define MSR_KVM_ASYNC_PF_ACK	0x4b564d07
> #define MSR_KVM_MIGRATION_CONTROL	0x4b564d08
>+#define MSR_KVM_SSP	0x4b564d09
>
> struct kvm_steal_time {
> 	__u64 steal;
>diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>index 72e3943f3693..9409753f45b0 100644
>--- a/arch/x86/kvm/vmx/vmx.c
>+++ b/arch/x86/kvm/vmx/vmx.c
>@@ -7009,6 +7009,8 @@ static bool vmx_has_emulated_msr(struct kvm *kvm, u32 index)
> 	case MSR_AMD64_TSC_RATIO:
> 		/* This is AMD only. */
> 		return false;
>+	case MSR_KVM_SSP:
>+		return kvm_cpu_cap_has(X86_FEATURE_SHSTK);

For other MSRs in emulated_msrs_all[], KVM doesn't check the associated
CPUID feature bits. Why bother doing this for MSR_KVM_SSP?
On 10/8/2023 2:19 PM, Chao Gao wrote:
> On Thu, Sep 14, 2023 at 02:33:16AM -0400, Yang Weijiang wrote:
>> Add CET MSRs to the list of MSRs reported to userspace if the feature,
>> i.e. IBT or SHSTK, associated with the MSRs is supported by KVM.
>>
>> SSP can only be read via RDSSP. Writing even requires destructive and
>> potentially faulting operations such as SAVEPREVSSP/RSTORSSP or
>> SETSSBSY/CLRSSBSY. Let the host use a pseudo-MSR that is just a wrapper
>> for the GUEST_SSP field of the VMCS.
>>
>> Suggested-by: Chao Gao <chao.gao@intel.com>
>> Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
>> ---
>>  arch/x86/include/uapi/asm/kvm_para.h |  1 +
>>  arch/x86/kvm/vmx/vmx.c               |  2 ++
>>  arch/x86/kvm/x86.c                   | 18 ++++++++++++++++++
>>  3 files changed, 21 insertions(+)
>>
>> diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
>> index 6e64b27b2c1e..9864bbcf2470 100644
>> --- a/arch/x86/include/uapi/asm/kvm_para.h
>> +++ b/arch/x86/include/uapi/asm/kvm_para.h
>> @@ -58,6 +58,7 @@
>>  #define MSR_KVM_ASYNC_PF_INT	0x4b564d06
>>  #define MSR_KVM_ASYNC_PF_ACK	0x4b564d07
>>  #define MSR_KVM_MIGRATION_CONTROL	0x4b564d08
>> +#define MSR_KVM_SSP	0x4b564d09
>>
>>  struct kvm_steal_time {
>>  	__u64 steal;
>> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>> index 72e3943f3693..9409753f45b0 100644
>> --- a/arch/x86/kvm/vmx/vmx.c
>> +++ b/arch/x86/kvm/vmx/vmx.c
>> @@ -7009,6 +7009,8 @@ static bool vmx_has_emulated_msr(struct kvm *kvm, u32 index)
>>  	case MSR_AMD64_TSC_RATIO:
>>  		/* This is AMD only. */
>>  		return false;
>> +	case MSR_KVM_SSP:
>> +		return kvm_cpu_cap_has(X86_FEATURE_SHSTK);
> For other MSRs in emulated_msrs_all[], KVM doesn't check the associated
> CPUID feature bits. Why bother doing this for MSR_KVM_SSP?

As you can see, MSR_KVM_SSP is not a purely emulated MSR; it is linked to a
VMCS field (GUEST_SSP). IMO the check is necessary - in other words, there is
no need to expose it when SHSTK is not supported by KVM.
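For context, this is roughly what the "pseudo-MSR as a wrapper for GUEST_SSP"
amounts to on the VMX side. It is a minimal, hypothetical sketch rather than
code from the series: the handler names vmx_get_ssp_reg()/vmx_set_ssp_reg()
are made up, and GUEST_SSP is assumed to be defined by the CET enabling
patches; the vCPU's VMCS is assumed to be loaded when these run.

	/*
	 * Illustrative only: the "MSR" has no architectural MSR index, it is
	 * just a window onto the GUEST_SSP VMCS field, which is why it only
	 * makes sense to report it when KVM supports shadow stacks.
	 */
	static int vmx_get_ssp_reg(struct kvm_vcpu *vcpu, u64 *data)
	{
		if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK))
			return 1;		/* no shadow stack, no SSP state */

		*data = vmcs_readl(GUEST_SSP);	/* read the guest SSP from the VMCS */
		return 0;
	}

	static int vmx_set_ssp_reg(struct kvm_vcpu *vcpu, u64 data)
	{
		/* SSP is assumed to require 4-byte alignment */
		if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK) || !IS_ALIGNED(data, 4))
			return 1;

		vmcs_writel(GUEST_SSP, data);	/* write straight into GUEST_SSP */
		return 0;
	}

Since the value lives only in the VMCS, there is nothing meaningful to expose
when SHSTK is absent, which is the rationale behind tying the
vmx_has_emulated_msr() check to X86_FEATURE_SHSTK.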
On Thu, 2023-09-14 at 02:33 -0400, Yang Weijiang wrote:
> Add CET MSRs to the list of MSRs reported to userspace if the feature,
> i.e. IBT or SHSTK, associated with the MSRs is supported by KVM.
>
> SSP can only be read via RDSSP. Writing even requires destructive and
> potentially faulting operations such as SAVEPREVSSP/RSTORSSP or
> SETSSBSY/CLRSSBSY. Let the host use a pseudo-MSR that is just a wrapper
> for the GUEST_SSP field of the VMCS.

A fake MSR just feels wrong for the future generations of readers of this
code. This is not an MSR no matter how we look at it, and KVM has never
supported such fake MSRs - this is the first one.

I'd say it's better to define a new ioctl for this register, or, if you are
feeling adventurous, you can try to add support for
KVM_GET_ONE_REG/KVM_SET_ONE_REG, which is what at least arm uses for this
purpose.

Also, I think it would be better to split this patch into two - the first
patch adds the new ioctl, and the second patch adds the normal CET MSRs to
the list of MSRs to be saved.

>
> Suggested-by: Chao Gao <chao.gao@intel.com>
> Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
> ---
>  arch/x86/include/uapi/asm/kvm_para.h |  1 +
>  arch/x86/kvm/vmx/vmx.c               |  2 ++
>  arch/x86/kvm/x86.c                   | 18 ++++++++++++++++++
>  3 files changed, 21 insertions(+)
>
> diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
> index 6e64b27b2c1e..9864bbcf2470 100644
> --- a/arch/x86/include/uapi/asm/kvm_para.h
> +++ b/arch/x86/include/uapi/asm/kvm_para.h
> @@ -58,6 +58,7 @@
>  #define MSR_KVM_ASYNC_PF_INT	0x4b564d06
>  #define MSR_KVM_ASYNC_PF_ACK	0x4b564d07
>  #define MSR_KVM_MIGRATION_CONTROL	0x4b564d08
> +#define MSR_KVM_SSP	0x4b564d09

Another reason for not doing this - someone will think that this is a KVM PV MSR.

>
>  struct kvm_steal_time {
>  	__u64 steal;
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 72e3943f3693..9409753f45b0 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7009,6 +7009,8 @@ static bool vmx_has_emulated_msr(struct kvm *kvm, u32 index)
>  	case MSR_AMD64_TSC_RATIO:
>  		/* This is AMD only. */
>  		return false;
> +	case MSR_KVM_SSP:
> +		return kvm_cpu_cap_has(X86_FEATURE_SHSTK);
>  	default:
>  		return true;
>  	}
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index dda9c7141ea1..73b45351c0fc 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -1476,6 +1476,9 @@ static const u32 msrs_to_save_base[] = {
>
>  	MSR_IA32_XFD, MSR_IA32_XFD_ERR,
>  	MSR_IA32_XSS,
> +	MSR_IA32_U_CET, MSR_IA32_S_CET,
> +	MSR_IA32_PL0_SSP, MSR_IA32_PL1_SSP, MSR_IA32_PL2_SSP,
> +	MSR_IA32_PL3_SSP, MSR_IA32_INT_SSP_TAB,
>  };
>
>  static const u32 msrs_to_save_pmu[] = {
> @@ -1576,6 +1579,7 @@ static const u32 emulated_msrs_all[] = {
>
>  	MSR_K7_HWCR,
>  	MSR_KVM_POLL_CONTROL,
> +	MSR_KVM_SSP,
>  };
>
>  static u32 emulated_msrs[ARRAY_SIZE(emulated_msrs_all)];
> @@ -7241,6 +7245,20 @@ static void kvm_probe_msr_to_save(u32 msr_index)
>  		if (!kvm_caps.supported_xss)
>  			return;
>  		break;
> +	case MSR_IA32_U_CET:
> +	case MSR_IA32_S_CET:
> +		if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK) &&
> +		    !kvm_cpu_cap_has(X86_FEATURE_IBT))
> +			return;
> +		break;
> +	case MSR_IA32_INT_SSP_TAB:
> +		if (!kvm_cpu_cap_has(X86_FEATURE_LM))
> +			return;
> +		fallthrough;
> +	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
> +		if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK))
> +			return;
> +		break;
>  	default:
>  		break;
>  	}

Best regards,
	Maxim Levitsky
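To illustrate the KVM_GET_ONE_REG alternative Maxim mentions, here is a rough
userspace sketch. Note that x86 does not implement KVM_GET_ONE_REG at this
point, and KVM_REG_X86_SSP is a made-up register ID used purely for
illustration; only the ioctl, struct kvm_one_reg, and the KVM_REG_X86 /
KVM_REG_SIZE_U64 prefixes are existing UAPI.

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Hypothetical register ID - the index 0 is invented for this sketch. */
	#define KVM_REG_X86_SSP	(KVM_REG_X86 | KVM_REG_SIZE_U64 | 0)

	static int read_guest_ssp(int vcpu_fd, uint64_t *ssp)
	{
		struct kvm_one_reg reg = {
			.id   = KVM_REG_X86_SSP,	/* hypothetical ID */
			.addr = (uintptr_t)ssp,		/* KVM writes the value here */
		};

		/* Returns 0 on success, -1 with errno set on failure. */
		return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
	}

The appeal of this route is that the guest SSP is then exposed as a register,
matching what it actually is, rather than squatting on an MSR index in the
KVM paravirt range.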
diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
index 6e64b27b2c1e..9864bbcf2470 100644
--- a/arch/x86/include/uapi/asm/kvm_para.h
+++ b/arch/x86/include/uapi/asm/kvm_para.h
@@ -58,6 +58,7 @@
 #define MSR_KVM_ASYNC_PF_INT	0x4b564d06
 #define MSR_KVM_ASYNC_PF_ACK	0x4b564d07
 #define MSR_KVM_MIGRATION_CONTROL	0x4b564d08
+#define MSR_KVM_SSP	0x4b564d09
 
 struct kvm_steal_time {
 	__u64 steal;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 72e3943f3693..9409753f45b0 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7009,6 +7009,8 @@ static bool vmx_has_emulated_msr(struct kvm *kvm, u32 index)
 	case MSR_AMD64_TSC_RATIO:
 		/* This is AMD only. */
 		return false;
+	case MSR_KVM_SSP:
+		return kvm_cpu_cap_has(X86_FEATURE_SHSTK);
 	default:
 		return true;
 	}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index dda9c7141ea1..73b45351c0fc 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1476,6 +1476,9 @@ static const u32 msrs_to_save_base[] = {
 
 	MSR_IA32_XFD, MSR_IA32_XFD_ERR,
 	MSR_IA32_XSS,
+	MSR_IA32_U_CET, MSR_IA32_S_CET,
+	MSR_IA32_PL0_SSP, MSR_IA32_PL1_SSP, MSR_IA32_PL2_SSP,
+	MSR_IA32_PL3_SSP, MSR_IA32_INT_SSP_TAB,
 };
 
 static const u32 msrs_to_save_pmu[] = {
@@ -1576,6 +1579,7 @@ static const u32 emulated_msrs_all[] = {
 
 	MSR_K7_HWCR,
 	MSR_KVM_POLL_CONTROL,
+	MSR_KVM_SSP,
 };
 
 static u32 emulated_msrs[ARRAY_SIZE(emulated_msrs_all)];
@@ -7241,6 +7245,20 @@ static void kvm_probe_msr_to_save(u32 msr_index)
 		if (!kvm_caps.supported_xss)
 			return;
 		break;
+	case MSR_IA32_U_CET:
+	case MSR_IA32_S_CET:
+		if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK) &&
+		    !kvm_cpu_cap_has(X86_FEATURE_IBT))
+			return;
+		break;
+	case MSR_IA32_INT_SSP_TAB:
+		if (!kvm_cpu_cap_has(X86_FEATURE_LM))
+			return;
+		fallthrough;
+	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
+		if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK))
+			return;
+		break;
 	default:
 		break;
 	}
Add CET MSRs to the list of MSRs reported to userspace if the feature,
i.e. IBT or SHSTK, associated with the MSRs is supported by KVM.

SSP can only be read via RDSSP. Writing even requires destructive and
potentially faulting operations such as SAVEPREVSSP/RSTORSSP or
SETSSBSY/CLRSSBSY. Let the host use a pseudo-MSR that is just a wrapper
for the GUEST_SSP field of the VMCS.

Suggested-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/include/uapi/asm/kvm_para.h |  1 +
 arch/x86/kvm/vmx/vmx.c               |  2 ++
 arch/x86/kvm/x86.c                   | 18 ++++++++++++++++++
 3 files changed, 21 insertions(+)
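In practical terms, what the patch buys userspace is that the CET MSRs (and
MSR_KVM_SSP, 0x4b564d09) become visible through KVM_GET_MSR_INDEX_LIST when
SHSTK/IBT are supported. A small sketch of how a VMM might probe for that,
with error handling trimmed for brevity:

	#include <stdint.h>
	#include <stdlib.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* kvm_fd is the /dev/kvm system fd; returns 1 if msr is reported. */
	static int msr_is_reported(int kvm_fd, uint32_t msr)
	{
		struct kvm_msr_list probe = { .nmsrs = 0 }, *list;
		uint32_t i;
		int found = 0;

		/* First call fails with E2BIG but fills in the required count. */
		ioctl(kvm_fd, KVM_GET_MSR_INDEX_LIST, &probe);

		list = calloc(1, sizeof(*list) + probe.nmsrs * sizeof(uint32_t));
		list->nmsrs = probe.nmsrs;
		ioctl(kvm_fd, KVM_GET_MSR_INDEX_LIST, list);

		for (i = 0; i < list->nmsrs; i++) {
			if (list->indices[i] == msr) {
				found = 1;
				break;
			}
		}

		free(list);
		return found;
	}

With the patch applied (and SHSTK supported by KVM), msr_is_reported(kvm_fd,
0x4b564d09) would return 1; without it, userspace has no way to save or
restore the guest SSP across migration.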