Message ID: 20200320194305.3532606-1-wei.huang2@amd.com
State: New, archived
Series: [V2,1/1] x86/mce/amd: Add PPIN support for AMD MCE
Acked-by: Tony Luck <tony.luck@intel.com>
Still true for this new version.
-Tony
On Fri, Mar 20, 2020 at 02:43:05PM -0500, Wei Huang wrote:
> Newer AMD CPUs support a feature called protected processor identification
> number (PPIN). This feature can be detected via CPUID_Fn80000008_EBX[23].
> However CPUID alone is not enough to read the processor serial number.
> MSR_AMD_PPIN_CTL also needs to be configured properly. If for any reason
> X86_FEATURE_AMD_PPIN[PPIN_EN] can not be turned on, such as disabled in
> BIOS, we have to clear the CPU capability bit of X86_FEATURE_AMD_PPIN.
>
> When the X86_FEATURE_AMD_PPIN capability is available, MCE can read the
> serial number to keep track the source of MCE errors.
>
> Co-developed-by: Smita Koralahalli Channabasappa <smita.koralahallichannabasappa@amd.com>
> Signed-off-by: Smita Koralahalli Channabasappa <smita.koralahallichannabasappa@amd.com>
> Signed-off-by: Wei Huang <wei.huang2@amd.com>
> Acked-by: Tony Luck <tony.luck@intel.com>
> Cc: Borislav Petkov <bp@suse.de>
> Cc: Yazen Ghannam <yazen.ghannam@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: linux-edac <linux-edac@vger.kernel.org>
> Cc: x86-ml <x86@kernel.org>
> ---
>  arch/x86/include/asm/cpufeatures.h |  1 +
>  arch/x86/kernel/cpu/amd.c          | 26 ++++++++++++++++++++++++++
>  arch/x86/kernel/cpu/mce/core.c     |  2 ++
>  3 files changed, 29 insertions(+)
>
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index f3327cb56edf..4b263ffb793b 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -299,6 +299,7 @@
>  #define X86_FEATURE_AMD_IBRS		(13*32+14) /* "" Indirect Branch Restricted Speculation */
>  #define X86_FEATURE_AMD_STIBP		(13*32+15) /* "" Single Thread Indirect Branch Predictors */
>  #define X86_FEATURE_AMD_STIBP_ALWAYS_ON	(13*32+17) /* "" Single Thread Indirect Branch Predictors always-on preferred */
> +#define X86_FEATURE_AMD_PPIN		(13*32+23) /* Protected Processor Inventory Number */
>  #define X86_FEATURE_AMD_SSBD		(13*32+24) /* "" Speculative Store Bypass Disable */
>  #define X86_FEATURE_VIRT_SSBD		(13*32+25) /* Virtualized Speculative Store Bypass Disable */
>  #define X86_FEATURE_AMD_SSB_NO		(13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
> diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
> index 1f875fbe1384..9176db4be69b 100644
> --- a/arch/x86/kernel/cpu/amd.c
> +++ b/arch/x86/kernel/cpu/amd.c
> @@ -394,6 +394,31 @@ static void amd_detect_cmp(struct cpuinfo_x86 *c)
>  	per_cpu(cpu_llc_id, cpu) = c->phys_proc_id;
>  }
>
> +static void amd_detect_ppin(struct cpuinfo_x86 *c)
> +{
> +	unsigned long long val;
> +
> +	if (cpu_has(c, X86_FEATURE_AMD_PPIN)) {

Flip check: if (!cpu_has... return;

> +		/* Turn off now until MSR is properly configured */
> +		clear_cpu_cap(c, X86_FEATURE_AMD_PPIN);

What for? You can make the final decision once, at the end, instead of
toggling that bit here.

> +
> +		if (rdmsrl_safe(MSR_AMD_PPIN_CTL, &val))
> +			return;
> +
> +		if ((val & 3UL) == 1UL)
> +			return;
> +
> +		if (!(val & 2UL)) {
> +			wrmsrl_safe(MSR_AMD_PPIN_CTL, val | 2UL);
> +			rdmsrl_safe(MSR_AMD_PPIN_CTL, &val);
> +		}

Those need comments explaining what they do, like in the Intel variant.

> +
> +		/* MSR_AMD_PPIN_CTL[PPIN_EN] bit is 1, turn feature back on */
> +		if (val & 2UL)
> +			set_cpu_cap(c, X86_FEATURE_AMD_PPIN);

No, keep the feature bit set and clear it only when you determine it
should be cleared, instead of clearing and setting it again.

Thx.
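[For reference, a minimal sketch of how amd_detect_ppin() might be
restructured to address the review comments above: flipped check, a
single clear_cpu_cap() at the end, and explanatory comments modeled on
the Intel variant. It assumes MSR_AMD_PPIN_CTL uses the same bit layout
as Intel's MSR_PPIN_CTL (bit 0 lock, bit 1 PPIN_EN). This is a sketch,
not necessarily the final committed version.]

	static void amd_detect_ppin(struct cpuinfo_x86 *c)
	{
		unsigned long long val;

		if (!cpu_has(c, X86_FEATURE_AMD_PPIN))
			return;

		/* MSR not present or not readable: no PPIN */
		if (rdmsrl_safe(MSR_AMD_PPIN_CTL, &val))
			goto clear_ppin;

		/* PPIN is locked in disabled mode */
		if ((val & 3UL) == 1UL)
			goto clear_ppin;

		/* PPIN is disabled but not locked: try to enable it */
		if (!(val & 2UL)) {
			wrmsrl_safe(MSR_AMD_PPIN_CTL, val | 2UL);
			rdmsrl_safe(MSR_AMD_PPIN_CTL, &val);
		}

		/* The enable bit stuck: keep the feature bit set */
		if (val & 2UL)
			return;

	clear_ppin:
		clear_cpu_cap(c, X86_FEATURE_AMD_PPIN);
	}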
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index f3327cb56edf..4b263ffb793b 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -299,6 +299,7 @@
 #define X86_FEATURE_AMD_IBRS		(13*32+14) /* "" Indirect Branch Restricted Speculation */
 #define X86_FEATURE_AMD_STIBP		(13*32+15) /* "" Single Thread Indirect Branch Predictors */
 #define X86_FEATURE_AMD_STIBP_ALWAYS_ON	(13*32+17) /* "" Single Thread Indirect Branch Predictors always-on preferred */
+#define X86_FEATURE_AMD_PPIN		(13*32+23) /* Protected Processor Inventory Number */
 #define X86_FEATURE_AMD_SSBD		(13*32+24) /* "" Speculative Store Bypass Disable */
 #define X86_FEATURE_VIRT_SSBD		(13*32+25) /* Virtualized Speculative Store Bypass Disable */
 #define X86_FEATURE_AMD_SSB_NO		(13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 1f875fbe1384..9176db4be69b 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -394,6 +394,31 @@ static void amd_detect_cmp(struct cpuinfo_x86 *c)
 	per_cpu(cpu_llc_id, cpu) = c->phys_proc_id;
 }
 
+static void amd_detect_ppin(struct cpuinfo_x86 *c)
+{
+	unsigned long long val;
+
+	if (cpu_has(c, X86_FEATURE_AMD_PPIN)) {
+		/* Turn off now until MSR is properly configured */
+		clear_cpu_cap(c, X86_FEATURE_AMD_PPIN);
+
+		if (rdmsrl_safe(MSR_AMD_PPIN_CTL, &val))
+			return;
+
+		if ((val & 3UL) == 1UL)
+			return;
+
+		if (!(val & 2UL)) {
+			wrmsrl_safe(MSR_AMD_PPIN_CTL, val | 2UL);
+			rdmsrl_safe(MSR_AMD_PPIN_CTL, &val);
+		}
+
+		/* MSR_AMD_PPIN_CTL[PPIN_EN] bit is 1, turn feature back on */
+		if (val & 2UL)
+			set_cpu_cap(c, X86_FEATURE_AMD_PPIN);
+	}
+}
+
 u16 amd_get_nb_id(int cpu)
 {
 	return per_cpu(cpu_llc_id, cpu);
@@ -941,6 +966,7 @@ static void init_amd(struct cpuinfo_x86 *c)
 	amd_detect_cmp(c);
 	amd_get_topology(c);
 	srat_detect_node(c);
+	amd_detect_ppin(c);
 
 	init_amd_cacheinfo(c);
 
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 2c4f949611e4..57347e899575 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -142,6 +142,8 @@ void mce_setup(struct mce *m)
 
 	if (this_cpu_has(X86_FEATURE_INTEL_PPIN))
 		rdmsrl(MSR_PPIN, m->ppin);
+	else if (this_cpu_has(X86_FEATURE_AMD_PPIN))
+		rdmsrl(MSR_AMD_PPIN, m->ppin);
 
 	m->microcode = boot_cpu_data.microcode;
 }
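[To decode the raw constants in amd_detect_ppin() above: MSR_AMD_PPIN_CTL
appears to follow the same two-bit layout as Intel's MSR_PPIN_CTL, with
bit 0 acting as a lock and bit 1 as the enable (PPIN_EN). The mask names
and helper below are hypothetical, introduced only for illustration; the
patch itself uses the raw values.]

	#define PPIN_CTL_LOCKED		BIT_ULL(0)	/* hypothetical name: once set, PPIN_EN can no longer be changed */
	#define PPIN_CTL_ENABLED	BIT_ULL(1)	/* hypothetical name: PPIN_EN, MSR_AMD_PPIN is readable when set */

	/* Hypothetical helper: decode the states the patch tests for */
	static bool ppin_usable(u64 val)
	{
		/*
		 * (val & 3UL) == 1UL in the patch: locked with the enable
		 * bit clear, so PPIN can never be turned on.
		 */
		if ((val & (PPIN_CTL_LOCKED | PPIN_CTL_ENABLED)) == PPIN_CTL_LOCKED)
			return false;

		/*
		 * val | 2UL in the patch attempts to set the enable bit
		 * when it is clear and the lock is not set; the re-read
		 * confirms whether the write took effect.
		 */
		return val & PPIN_CTL_ENABLED;
	}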