Message ID: 59DB45E00200007800183C29@prv-mh.provo.novell.com (mailing list archive)
State: New, archived
On 09/10/17 08:48, Jan Beulich wrote:
> Halfway recent Linux kernels probe MISC_FEATURES_ENABLES on all CPUs,
> leading to ugly recovered #GP fault messages with debug builds on older
> systems. We can do better, so introduce synthetic feature flags for
> both this and PLATFORM_INFO to avoid the rdmsr_safe() altogether.
>
> Note that the r/o nature of PLATFORM_INFO is now also being enforced.
>
> The rdmsr_safe() uses for MISC_ENABLE are left in place as benign - it
> exists for all 64-bit capable Intel CPUs (see e.g. early_init_intel()).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Force PLATFORM_INFO writes to fail.
>
> --- a/xen/arch/x86/cpu/intel.c
> +++ b/xen/arch/x86/cpu/intel.c
> @@ -21,10 +21,19 @@ static bool __init probe_intel_cpuid_fau
>  {
>  	uint64_t x;
>
> -	if (rdmsr_safe(MSR_INTEL_PLATFORM_INFO, x) ||
> -	    !(x & MSR_PLATFORM_INFO_CPUID_FAULTING))
> +	if (rdmsr_safe(MSR_INTEL_PLATFORM_INFO, x))
>  		return 0;
>
> +	setup_force_cpu_cap(X86_FEATURE_MSR_PLATFORM_INFO);
> +
> +	if (!(x & MSR_PLATFORM_INFO_CPUID_FAULTING)) {
> +		if (!rdmsr_safe(MSR_INTEL_MISC_FEATURES_ENABLES, x))
> +			setup_force_cpu_cap(X86_FEATURE_MSR_MISC_FEATURES);
> +		return 0;
> +	}
> +
> +	setup_force_cpu_cap(X86_FEATURE_MSR_MISC_FEATURES);

Why this odd arrangement with the double setup_force_cpu_cap()?  As
neither of these MSRs is architectural, would it not be better to probe
both of them, rather than assuming MISC_FEATURES_ENABLES is available if
faulting is reported?

Otherwise, everything else looks ok.

~Andrew
>>> On 09.10.17 at 11:49, <andrew.cooper3@citrix.com> wrote:
> On 09/10/17 08:48, Jan Beulich wrote:
>> Halfway recent Linux kernels probe MISC_FEATURES_ENABLES on all CPUs,
>> leading to ugly recovered #GP fault messages with debug builds on older
>> systems. We can do better, so introduce synthetic feature flags for
>> both this and PLATFORM_INFO to avoid the rdmsr_safe() altogether.
>>
>> Note that the r/o nature of PLATFORM_INFO is now also being enforced.
>>
>> The rdmsr_safe() uses for MISC_ENABLE are left in place as benign - it
>> exists for all 64-bit capable Intel CPUs (see e.g. early_init_intel()).
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> v2: Force PLATFORM_INFO writes to fail.
>>
>> --- a/xen/arch/x86/cpu/intel.c
>> +++ b/xen/arch/x86/cpu/intel.c
>> @@ -21,10 +21,19 @@ static bool __init probe_intel_cpuid_fau
>>  {
>>  	uint64_t x;
>>
>> -	if (rdmsr_safe(MSR_INTEL_PLATFORM_INFO, x) ||
>> -	    !(x & MSR_PLATFORM_INFO_CPUID_FAULTING))
>> +	if (rdmsr_safe(MSR_INTEL_PLATFORM_INFO, x))
>>  		return 0;
>>
>> +	setup_force_cpu_cap(X86_FEATURE_MSR_PLATFORM_INFO);
>> +
>> +	if (!(x & MSR_PLATFORM_INFO_CPUID_FAULTING)) {
>> +		if (!rdmsr_safe(MSR_INTEL_MISC_FEATURES_ENABLES, x))
>> +			setup_force_cpu_cap(X86_FEATURE_MSR_MISC_FEATURES);
>> +		return 0;
>> +	}
>> +
>> +	setup_force_cpu_cap(X86_FEATURE_MSR_MISC_FEATURES);
>
> Why this odd arrangement with the double setup_force_cpu_cap()?  As
> neither of these MSRs is architectural, would it not be better to probe
> both of them, rather than assuming MISC_FEATURES_ENABLES is available if
> faulting is reported?

Since faulting being available means it can be enabled, and since
enabling works by writing MISC_FEATURES_ENABLES, I don't see
why we need to probe that second register in that case.
set_cpuid_faulting() makes this very assumption already.

Jan
On 09/10/17 11:07, Jan Beulich wrote:
>>>> On 09.10.17 at 11:49, <andrew.cooper3@citrix.com> wrote:
>> On 09/10/17 08:48, Jan Beulich wrote:
>>> Halfway recent Linux kernels probe MISC_FEATURES_ENABLES on all CPUs,
>>> leading to ugly recovered #GP fault messages with debug builds on older
>>> systems. We can do better, so introduce synthetic feature flags for
>>> both this and PLATFORM_INFO to avoid the rdmsr_safe() altogether.
>>>
>>> Note that the r/o nature of PLATFORM_INFO is now also being enforced.
>>>
>>> The rdmsr_safe() uses for MISC_ENABLE are left in place as benign - it
>>> exists for all 64-bit capable Intel CPUs (see e.g. early_init_intel()).
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> ---
>>> v2: Force PLATFORM_INFO writes to fail.
>>>
>>> --- a/xen/arch/x86/cpu/intel.c
>>> +++ b/xen/arch/x86/cpu/intel.c
>>> @@ -21,10 +21,19 @@ static bool __init probe_intel_cpuid_fau
>>>  {
>>>  	uint64_t x;
>>>
>>> -	if (rdmsr_safe(MSR_INTEL_PLATFORM_INFO, x) ||
>>> -	    !(x & MSR_PLATFORM_INFO_CPUID_FAULTING))
>>> +	if (rdmsr_safe(MSR_INTEL_PLATFORM_INFO, x))
>>>  		return 0;
>>>
>>> +	setup_force_cpu_cap(X86_FEATURE_MSR_PLATFORM_INFO);
>>> +
>>> +	if (!(x & MSR_PLATFORM_INFO_CPUID_FAULTING)) {
>>> +		if (!rdmsr_safe(MSR_INTEL_MISC_FEATURES_ENABLES, x))
>>> +			setup_force_cpu_cap(X86_FEATURE_MSR_MISC_FEATURES);
>>> +		return 0;
>>> +	}
>>> +
>>> +	setup_force_cpu_cap(X86_FEATURE_MSR_MISC_FEATURES);
>> Why this odd arrangement with the double setup_force_cpu_cap()?  As
>> neither of these MSRs is architectural, would it not be better to probe
>> both of them, rather than assuming MISC_FEATURES_ENABLES is available if
>> faulting is reported?
> Since faulting being available means it can be enabled, and since
> enabling works by writing MISC_FEATURES_ENABLES, I don't see
> why we need to probe that second register in that case.
> set_cpuid_faulting() makes this very assumption already.

Ok.

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -21,10 +21,19 @@ static bool __init probe_intel_cpuid_fau
 {
 	uint64_t x;
 
-	if (rdmsr_safe(MSR_INTEL_PLATFORM_INFO, x) ||
-	    !(x & MSR_PLATFORM_INFO_CPUID_FAULTING))
+	if (rdmsr_safe(MSR_INTEL_PLATFORM_INFO, x))
 		return 0;
 
+	setup_force_cpu_cap(X86_FEATURE_MSR_PLATFORM_INFO);
+
+	if (!(x & MSR_PLATFORM_INFO_CPUID_FAULTING)) {
+		if (!rdmsr_safe(MSR_INTEL_MISC_FEATURES_ENABLES, x))
+			setup_force_cpu_cap(X86_FEATURE_MSR_MISC_FEATURES);
+		return 0;
+	}
+
+	setup_force_cpu_cap(X86_FEATURE_MSR_MISC_FEATURES);
+
 	expected_levelling_cap |= LCAP_faulting;
 	levelling_caps |= LCAP_faulting;
 	setup_force_cpu_cap(X86_FEATURE_CPUID_FAULTING);
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2626,8 +2626,7 @@ static int read_msr(unsigned int reg, ui
         return X86EMUL_OKAY;
 
     case MSR_INTEL_PLATFORM_INFO:
-        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
-             rdmsr_safe(MSR_INTEL_PLATFORM_INFO, *val) )
+        if ( !boot_cpu_has(X86_FEATURE_MSR_PLATFORM_INFO) )
             break;
         *val = 0;
         if ( this_cpu(cpuid_faulting_enabled) )
@@ -2635,8 +2634,7 @@ static int read_msr(unsigned int reg, ui
         return X86EMUL_OKAY;
 
     case MSR_INTEL_MISC_FEATURES_ENABLES:
-        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
-             rdmsr_safe(MSR_INTEL_MISC_FEATURES_ENABLES, *val) )
+        if ( !boot_cpu_has(X86_FEATURE_MSR_MISC_FEATURES) )
             break;
         *val = 0;
         if ( curr->arch.cpuid_faulting )
@@ -2834,15 +2832,12 @@ static int write_msr(unsigned int reg, u
         return X86EMUL_OKAY;
 
     case MSR_INTEL_PLATFORM_INFO:
-        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
-             val || rdmsr_safe(MSR_INTEL_PLATFORM_INFO, val) )
-            break;
-        return X86EMUL_OKAY;
+        /* The MSR is read-only. */
+        break;
 
     case MSR_INTEL_MISC_FEATURES_ENABLES:
-        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
-             (val & ~MSR_MISC_FEATURES_CPUID_FAULTING) ||
-             rdmsr_safe(MSR_INTEL_MISC_FEATURES_ENABLES, temp) )
+        if ( !boot_cpu_has(X86_FEATURE_MSR_MISC_FEATURES) ||
+             (val & ~MSR_MISC_FEATURES_CPUID_FAULTING) )
             break;
         if ( (val & MSR_MISC_FEATURES_CPUID_FAULTING) &&
              !this_cpu(cpuid_faulting_enabled) )
--- a/xen/include/asm-x86/cpufeatures.h
+++ b/xen/include/asm-x86/cpufeatures.h
@@ -22,3 +22,5 @@ XEN_CPUFEATURE(APERFMPERF, (FSCAPIN
 XEN_CPUFEATURE(MFENCE_RDTSC,      (FSCAPINTS+0)*32+ 9) /* MFENCE synchronizes RDTSC */
 XEN_CPUFEATURE(XEN_SMEP,          (FSCAPINTS+0)*32+10) /* SMEP gets used by Xen itself */
 XEN_CPUFEATURE(XEN_SMAP,          (FSCAPINTS+0)*32+11) /* SMAP gets used by Xen itself */
+XEN_CPUFEATURE(MSR_PLATFORM_INFO, (FSCAPINTS+0)*32+12) /* PLATFORM_INFO MSR present */
+XEN_CPUFEATURE(MSR_MISC_FEATURES, (FSCAPINTS+0)*32+13) /* MISC_FEATURES_ENABLES MSR present */
Halfway recent Linux kernels probe MISC_FEATURES_ENABLES on all CPUs,
leading to ugly recovered #GP fault messages with debug builds on older
systems. We can do better, so introduce synthetic feature flags for
both this and PLATFORM_INFO to avoid the rdmsr_safe() altogether.

Note that the r/o nature of PLATFORM_INFO is now also being enforced.

The rdmsr_safe() uses for MISC_ENABLE are left in place as benign - it
exists for all 64-bit capable Intel CPUs (see e.g. early_init_intel()).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Force PLATFORM_INFO writes to fail.