
[v9,7/8] x86/cpu: Support AMD Automatic IBRS

Message ID 20230124163319.2277355-8-kim.phillips@amd.com
State New, archived
Series x86/cpu, kvm: Support AMD Automatic IBRS

Commit Message

Kim Phillips Jan. 24, 2023, 4:33 p.m. UTC
The AMD Zen4 core supports a new feature called Automatic IBRS.

It is a "set-and-forget" feature that means that, like Intel's Enhanced IBRS,
h/w manages its IBRS mitigation resources automatically across CPL transitions.

The feature is advertised by CPUID_Fn80000021_EAX bit 8 and is enabled by
setting MSR C000_0080 (EFER) bit 21.
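
For reference, here is a minimal user-space detection sketch (illustrative,
not part of the patch; it assumes GCC/Clang's <cpuid.h> helpers):

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* CPUID_Fn80000021_EAX bit 8 advertises Automatic IBRS */
	if (__get_cpuid_count(0x80000021, 0, &eax, &ebx, &ecx, &edx) &&
	    (eax & (1u << 8)))
		puts("Automatic IBRS advertised");
	else
		puts("Automatic IBRS not advertised");

	return 0;
}

On a running system, the EFER bit itself can likewise be inspected with
msr-tools (assuming the msr module is available):

  # modprobe msr
  # rdmsr -f 21:21 0xc0000080
  1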

Enable Automatic IBRS by default if the CPU feature is present.  It typically
provides better performance than the incumbent generic retpolines mitigation.

Reuse the SPECTRE_V2_EIBRS spectre_v2_mitigation enum.  AMD Automatic IBRS and
Intel Enhanced IBRS have similar enablement.  Add NO_EIBRS_PBRSB to
cpu_vuln_whitelist, since AMD Automatic IBRS isn't affected by PBRSB-eIBRS.

The kernel command line option spectre_v2=eibrs is used to select AMD Automatic
IBRS, if available.
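
For example, after booting with spectre_v2=eibrs on an Automatic IBRS-capable
part, the selected mitigation is reported via sysfs (illustrative output; the
trailing details vary with which other mitigations are in effect):

  # cat /sys/devices/system/cpu/vulnerabilities/spectre_v2
  Mitigation: Enhanced / Automatic IBRS, ...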

Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
---
 Documentation/admin-guide/hw-vuln/spectre.rst |  6 +++---
 .../admin-guide/kernel-parameters.txt         |  6 +++---
 arch/x86/include/asm/cpufeatures.h            |  1 +
 arch/x86/include/asm/msr-index.h              |  2 ++
 arch/x86/kernel/cpu/bugs.c                    | 20 +++++++++++--------
 arch/x86/kernel/cpu/common.c                  | 19 ++++++++++--------
 6 files changed, 32 insertions(+), 22 deletions(-)

Comments

Josh Poimboeuf Feb. 24, 2023, 6:52 p.m. UTC | #1
On Tue, Jan 24, 2023 at 10:33:18AM -0600, Kim Phillips wrote:
> @@ -1495,8 +1495,12 @@ static void __init spectre_v2_select_mitigation(void)
>  		pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
>  
>  	if (spectre_v2_in_ibrs_mode(mode)) {
> -		x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
> -		update_spec_ctrl(x86_spec_ctrl_base);
> +		if (boot_cpu_has(X86_FEATURE_AUTOIBRS)) {
> +			msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);

Doesn't this only enable it on the boot CPU?
Borislav Petkov Feb. 24, 2023, 9:08 p.m. UTC | #2
On Fri, Feb 24, 2023 at 10:52:57AM -0800, Josh Poimboeuf wrote:
> Doesn't this only enable it on the boot CPU?

Whoops, you might be right.

Lemme fix it.

Thx!
Josh Poimboeuf Feb. 24, 2023, 9:35 p.m. UTC | #3
On Fri, Feb 24, 2023 at 10:08:32PM +0100, Borislav Petkov wrote:
> On Fri, Feb 24, 2023 at 10:52:57AM -0800, Josh Poimboeuf wrote:
> > Doesn't this only enable it on the boot CPU?
> 
> Whoops, you might be right.
> 
> Lemme fix it.
> 
> Thx!

BTW, I wasn't copied on the patch set, despite having dedicated years of
my life to that file ;-)

Can we add bugs.c and friends to MAINTAINERS?

---8<---

From: Josh Poimboeuf <jpoimboe@kernel.org>
Subject: [PATCH] MAINTAINERS: Add x86 hardware vulnerabilities section

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 MAINTAINERS | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index eb6f650c6c0b..338dc7469f80 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -22553,6 +22553,16 @@ S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/asm
 F:	arch/x86/entry/
 
+X86 HARDWARE VULNERABILITIES
+M:	Thomas Gleixner <tglx@linutronix.de>
+M:	Borislav Petkov <bp@alien8.de>
+M:	Peter Zijlstra <peterz@infradead.org>
+M:	Josh Poimboeuf <jpoimboe@kernel.org>
+S:	Maintained
+F:	Documentation/admin-guide/hw-vuln/
+F:	arch/x86/include/asm/nospec-branch.h
+F:	arch/x86/kernel/cpu/bugs.c
+
 X86 MCE INFRASTRUCTURE
 M:	Tony Luck <tony.luck@intel.com>
 M:	Borislav Petkov <bp@alien8.de>
Borislav Petkov Feb. 24, 2023, 9:59 p.m. UTC | #4
On Fri, Feb 24, 2023 at 01:35:22PM -0800, Josh Poimboeuf wrote:
> BTW, I wasn't copied on the patch set, despite having dedicated years of
> my life to that file ;-)

... and yet, even after all that pain, you're still willing to
self-inflict moar. :-P

> Can we add bugs.c and friends to MAINTAINERS?

Sure, might as well.

Acked-by: Borislav Petkov (AMD) <bp@alien8.de>

I'll queue it after the MW is over.

Thx.
Luck, Tony Feb. 24, 2023, 10:03 p.m. UTC | #5
>> Can we add bugs.c and friends to MAINTAINERS?
>
> Sure, might as well.
>
> Acked-by: Borislav Petkov (AMD) <bp@alien8.de>
>
> I'll queue it after the MW is over.

Should also include Pawan as another unfortunate soul sucked
into keeping that file up to date with the latest wreckage. If not
as "M", at least as "R":

R: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>

-Tony
Borislav Petkov Feb. 24, 2023, 10:12 p.m. UTC | #6
On Fri, Feb 24, 2023 at 10:03:16PM +0000, Luck, Tony wrote:
> Should also include Pawan as another unfortunate soul sucked
> into keeping that file up to date with the latest wreckage. If not
> as "M", at least as "R":
> 
> R: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>

We probably should hear from him before you offer his soul into the
purgatory of hardware speculation.

:-P
Borislav Petkov Feb. 24, 2023, 10:51 p.m. UTC | #7
On Fri, Feb 24, 2023 at 10:08:32PM +0100, Borislav Petkov wrote:
> On Fri, Feb 24, 2023 at 10:52:57AM -0800, Josh Poimboeuf wrote:
> > Doesn't this only enable it on the boot CPU?
> 
> Whoops, you might be right.

Actually, we stick that MSR - EFER - into the trampoline header, and then
each AP gets it written in arch/x86/realmode/rm/trampoline_64.S

But this is only from code staring - I'll confirm this tomorrow.

And if so, we should at least put comments in that trampoline code so
that people do not remove the MSR writes.

Or, actually, we should simply write it again because it is the init
path and not really a hot path but it should damn well make sure that
that bit gets set.

Thx.
Borislav Petkov Feb. 24, 2023, 11:23 p.m. UTC | #8
On Fri, Feb 24, 2023 at 11:51:17PM +0100, Borislav Petkov wrote:
> Or, actually, we should simply write it again because it is the init
> path and not really a hot path but it should damn well make sure that
> that bit gets set.

Yeah, we have this fancy msr_set_bit() interface which saves us the MSR
write when not needed. And it also tells us that. :-)
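
(For reference, msr_set_bit()'s contract is roughly the following, per
arch/x86/lib/msr.c:)

/*
 * msr_set_bit(msr, bit) returns:
 *   < 0 : the MSR access itself failed
 *     0 : the bit was already set; the WRMSR was skipped
 *     1 : the bit was clear; the MSR was written
 */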

So we can do:

diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 380753b14cab..2aa089aa23db 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -996,6 +996,12 @@ static void init_amd(struct cpuinfo_x86 *c)
 		msr_set_bit(MSR_K7_HWCR, MSR_K7_HWCR_IRPERF_EN_BIT);
 
 	check_null_seg_clears_base(c);
+
+	if (cpu_has(c, X86_FEATURE_AUTOIBRS)) {
+		int ret = msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);
+
+		pr_info("%s: CPU%d, ret: %d\n", __func__, smp_processor_id(), ret);
+	}
 }
 
 #ifdef CONFIG_X86_32

---

and the output looks like this:

[    3.046607] x86: Booting SMP configuration:
[    3.046609] .... node  #0, CPUs:          #1
[    2.874768] init_amd: CPU1, ret: 0
[    3.046873]    #2
[    2.874768] init_amd: CPU2, ret: 0
[    3.049155]    #3
[    2.874768] init_amd: CPU3, ret: 0
[    3.050834]    #4
[    2.874768] init_amd: CPU4, ret: 0
...

which says that the bit was already set - which confirms the
trampoline setting thing.

And doing the write again serves as a guard when in the future we decide
to not set EFER anymore - I doubt it - but we can't allow ourselves to
not set the autoibrs bit so one more RDMSR on init doesn't matter.

Proper patch tomorrow.

Thx.
Pawan Gupta Feb. 24, 2023, 11:30 p.m. UTC | #9
On Fri, Feb 24, 2023 at 11:12:29PM +0100, Borislav Petkov wrote:
> On Fri, Feb 24, 2023 at 10:03:16PM +0000, Luck, Tony wrote:
> > Should also include Pawan as another unfortunate soul sucked
> > into keeping that file up to date with the latest wreckage. If not
> > as "M", at least as "R":
> > 
> > R: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> 
> We probably should hear from him before you offer his soul into the
> purgatory of hardware speculation.

I will be happy to review what I can.

Soulfully yours,
Pawan
Josh Poimboeuf Feb. 25, 2023, 12:09 a.m. UTC | #10
On Fri, Feb 24, 2023 at 11:51:17PM +0100, Borislav Petkov wrote:
> On Fri, Feb 24, 2023 at 10:08:32PM +0100, Borislav Petkov wrote:
> > On Fri, Feb 24, 2023 at 10:52:57AM -0800, Josh Poimboeuf wrote:
> > > Doesn't this only enable it on the boot CPU?
> > 
> > Whoops, you might be right.
> 
> Actually, we stick that MSR - EFER - into the trampoline header, and then
> each AP gets it written in arch/x86/realmode/rm/trampoline_64.S
> 
> But this is only from code staring - I'll confirm this tomorrow.

Ah, I had to stare at that for a bit to figure out how it works.
setup_real_mode() reads MSR_EFER from the boot CPU and stores it in
trampoline_header->efer.  Then the other CPUs read that stored value in
startup_32() and write it into their MSR.

> And if so, we should at least put comments in that trampoline code so
> that people do not remove the MSR writes.
> 
> Or, actually, we should simply write it again because it is the init
> path and not really a hot path but it should damn well make sure that
> that bit gets set.

Yeah, I think that would be good.  Otherwise it's rather magical.  That
EFER MSR is a surprising place to put that bit.
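
(A simplified sketch of that flow, paraphrasing arch/x86/realmode/init.c and
the trampoline rather than quoting either verbatim:)

/* Boot CPU, setup_real_mode(): snapshot EFER for the APs */
u64 efer;

rdmsrl(MSR_EFER, efer);
trampoline_header->efer = efer & ~EFER_LMA;	/* APs enter before long mode is active */

/*
 * Each AP then runs startup_32 in arch/x86/realmode/rm/trampoline_64.S,
 * which loads trampoline_header->efer and WRMSRs it into its own EFER -
 * so any bit the boot CPU had set, including EFER_AUTOIBRS, propagates
 * to every AP brought up through the trampoline.
 */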

Patch

diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
index c4dcdb3d0d45..3fe6511c5405 100644
--- a/Documentation/admin-guide/hw-vuln/spectre.rst
+++ b/Documentation/admin-guide/hw-vuln/spectre.rst
@@ -610,9 +610,9 @@  kernel command line.
                 retpoline,generic       Retpolines
                 retpoline,lfence        LFENCE; indirect branch
                 retpoline,amd           alias for retpoline,lfence
-                eibrs                   enhanced IBRS
-                eibrs,retpoline         enhanced IBRS + Retpolines
-                eibrs,lfence            enhanced IBRS + LFENCE
+                eibrs                   Enhanced/Auto IBRS
+                eibrs,retpoline         Enhanced/Auto IBRS + Retpolines
+                eibrs,lfence            Enhanced/Auto IBRS + LFENCE
                 ibrs                    use IBRS to protect kernel
 
 		Not specifying this option is equivalent to
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 0ee891133d76..1d2f92edb5a1 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5729,9 +5729,9 @@ 
 			retpoline,generic - Retpolines
 			retpoline,lfence  - LFENCE; indirect branch
 			retpoline,amd     - alias for retpoline,lfence
-			eibrs		  - enhanced IBRS
-			eibrs,retpoline   - enhanced IBRS + Retpolines
-			eibrs,lfence      - enhanced IBRS + LFENCE
+			eibrs		  - Enhanced/Auto IBRS
+			eibrs,retpoline   - Enhanced/Auto IBRS + Retpolines
+			eibrs,lfence      - Enhanced/Auto IBRS + LFENCE
 			ibrs		  - use IBRS to protect kernel
 
 			Not specifying this option is equivalent to
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 8ef89d595771..fdb8e09234ba 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -434,6 +434,7 @@ 
 #define X86_FEATURE_NO_NESTED_DATA_BP	(20*32+ 0) /* "" No Nested Data Breakpoints */
 #define X86_FEATURE_LFENCE_RDTSC	(20*32+ 2) /* "" LFENCE always serializing / synchronizes RDTSC */
 #define X86_FEATURE_NULL_SEL_CLR_BASE	(20*32+ 6) /* "" Null Selector Clears Base */
+#define X86_FEATURE_AUTOIBRS		(20*32+ 8) /* "" Automatic IBRS */
 #define X86_FEATURE_NO_SMM_CTL_MSR	(20*32+ 9) /* "" SMM_CTL MSR is not present */
 
 /*
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 4e0a7ad17083..ad35355ee43e 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -25,6 +25,7 @@ 
 #define _EFER_SVME		12 /* Enable virtualization */
 #define _EFER_LMSLE		13 /* Long Mode Segment Limit Enable */
 #define _EFER_FFXSR		14 /* Enable Fast FXSAVE/FXRSTOR */
+#define _EFER_AUTOIBRS		21 /* Enable Automatic IBRS */
 
 #define EFER_SCE		(1<<_EFER_SCE)
 #define EFER_LME		(1<<_EFER_LME)
@@ -33,6 +34,7 @@ 
 #define EFER_SVME		(1<<_EFER_SVME)
 #define EFER_LMSLE		(1<<_EFER_LMSLE)
 #define EFER_FFXSR		(1<<_EFER_FFXSR)
+#define EFER_AUTOIBRS		(1<<_EFER_AUTOIBRS)
 
 /* Intel MSRs. Some also available on other CPUs */
 
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 4a0add86c182..cf81848b72f4 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1238,9 +1238,9 @@  static const char * const spectre_v2_strings[] = {
 	[SPECTRE_V2_NONE]			= "Vulnerable",
 	[SPECTRE_V2_RETPOLINE]			= "Mitigation: Retpolines",
 	[SPECTRE_V2_LFENCE]			= "Mitigation: LFENCE",
-	[SPECTRE_V2_EIBRS]			= "Mitigation: Enhanced IBRS",
-	[SPECTRE_V2_EIBRS_LFENCE]		= "Mitigation: Enhanced IBRS + LFENCE",
-	[SPECTRE_V2_EIBRS_RETPOLINE]		= "Mitigation: Enhanced IBRS + Retpolines",
+	[SPECTRE_V2_EIBRS]			= "Mitigation: Enhanced / Automatic IBRS",
+	[SPECTRE_V2_EIBRS_LFENCE]		= "Mitigation: Enhanced / Automatic IBRS + LFENCE",
+	[SPECTRE_V2_EIBRS_RETPOLINE]		= "Mitigation: Enhanced / Automatic IBRS + Retpolines",
 	[SPECTRE_V2_IBRS]			= "Mitigation: IBRS",
 };
 
@@ -1309,7 +1309,7 @@  static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
 	     cmd == SPECTRE_V2_CMD_EIBRS_LFENCE ||
 	     cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) &&
 	    !boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
-		pr_err("%s selected but CPU doesn't have eIBRS. Switching to AUTO select\n",
+		pr_err("%s selected but CPU doesn't have Enhanced or Automatic IBRS. Switching to AUTO select\n",
 		       mitigation_options[i].option);
 		return SPECTRE_V2_CMD_AUTO;
 	}
@@ -1495,8 +1495,12 @@  static void __init spectre_v2_select_mitigation(void)
 		pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
 
 	if (spectre_v2_in_ibrs_mode(mode)) {
-		x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
-		update_spec_ctrl(x86_spec_ctrl_base);
+		if (boot_cpu_has(X86_FEATURE_AUTOIBRS)) {
+			msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);
+		} else {
+			x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
+			update_spec_ctrl(x86_spec_ctrl_base);
+		}
 	}
 
 	switch (mode) {
@@ -1580,8 +1584,8 @@  static void __init spectre_v2_select_mitigation(void)
 	/*
 	 * Retpoline protects the kernel, but doesn't protect firmware.  IBRS
 	 * and Enhanced IBRS protect firmware too, so enable IBRS around
-	 * firmware calls only when IBRS / Enhanced IBRS aren't otherwise
-	 * enabled.
+	 * firmware calls only when IBRS / Enhanced / Automatic IBRS aren't
+	 * otherwise enabled.
 	 *
 	 * Use "mode" to check Enhanced IBRS instead of boot_cpu_has(), because
 	 * the user might select retpoline on the kernel command line and if
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 162352d42ce0..8ce67a8a61a6 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1229,8 +1229,8 @@  static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
 	VULNWL_AMD(0x12,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
 
 	/* FAMILY_ANY must be last, otherwise 0x0f - 0x12 matches won't work */
-	VULNWL_AMD(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
-	VULNWL_HYGON(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
+	VULNWL_AMD(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_EIBRS_PBRSB),
+	VULNWL_HYGON(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_EIBRS_PBRSB),
 
 	/* Zhaoxin Family 7 */
 	VULNWL(CENTAUR,	7, X86_MODEL_ANY,	NO_SPECTRE_V2 | NO_SWAPGS | NO_MMIO),
@@ -1341,8 +1341,16 @@  static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	   !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
 
-	if (ia32_cap & ARCH_CAP_IBRS_ALL)
+	/*
+	 * AMD's AutoIBRS is equivalent to Intel's eIBRS - use the Intel feature
+	 * flag and protect from vendor-specific bugs via the whitelist.
+	 */
+	if ((ia32_cap & ARCH_CAP_IBRS_ALL) || cpu_has(c, X86_FEATURE_AUTOIBRS)) {
 		setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
+		if (!cpu_matches(cpu_vuln_whitelist, NO_EIBRS_PBRSB) &&
+		    !(ia32_cap & ARCH_CAP_PBRSB_NO))
+			setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
+	}
 
 	if (!cpu_matches(cpu_vuln_whitelist, NO_MDS) &&
 	    !(ia32_cap & ARCH_CAP_MDS_NO)) {
@@ -1404,11 +1412,6 @@  static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 			setup_force_cpu_bug(X86_BUG_RETBLEED);
 	}
 
-	if (cpu_has(c, X86_FEATURE_IBRS_ENHANCED) &&
-	    !cpu_matches(cpu_vuln_whitelist, NO_EIBRS_PBRSB) &&
-	    !(ia32_cap & ARCH_CAP_PBRSB_NO))
-		setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
-
 	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
 		return;