[v5] ARM: Implement SLS mitigation

Message ID 20210223023542.2287529-1-jiancai@google.com (mailing list archive)
State: New, archived
Series: [v5] ARM: Implement SLS mitigation

Commit Message

Jian Cai Feb. 23, 2021, 2:35 a.m. UTC
This patch adds CONFIG_HARDEN_SLS_ALL, which can be used to turn on
-mharden-sls=all. This mitigates the straight-line speculation
vulnerability, i.e. speculative execution of the instruction following
some unconditional jumps. Note that -mharden-sls= accepts the options
below; this config turns on the strongest one.

all: enable all mitigations against Straight Line Speculation that are implemented.
none: disable all mitigations against Straight Line Speculation.
retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
blr: enable the mitigation against Straight Line Speculation for BLR instructions.
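
As an illustration (a sketch of the scheme described in the LLVM
reviews linked below, not literal compiler output), on AArch64 the
mitigation places a speculation barrier behind every unconditional
change of control flow:

        ret                             // retbr: every RET/BR is now
        dsb     sy                      // followed by a barrier that
        isb                             // halts straight-line speculation

        bl      __llvm_slsblr_thunk_x0  // blr: "blr x0" becomes a call
        ...                             // to a per-register thunk, since
                                        // BLR must still set the link
__llvm_slsblr_thunk_x0:                 // register
        br      x0                      // this BR is in turn covered
        dsb     sy                      // by the retbr barriers
        isb

The thunks are what the .text.__llvm_slsblr_thunk_* input sections
below collect into .text.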

Links:
https://reviews.llvm.org/D93221
https://reviews.llvm.org/D81404
https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2

Suggested-by: Manoj Gupta <manojgupta@google.com>
Suggested-by: Nick Desaulniers <ndesaulniers@google.com>
Suggested-by: Nathan Chancellor <nathan@kernel.org>
Suggested-by: David Laight <David.Laight@aculab.com>
Suggested-by: Will Deacon <will@kernel.org>
Suggested-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Jian Cai <jiancai@google.com>
---
Changes v4->v5:
  Removed "default n" and made the description target indepdent in
  Kconfig.hardening. Please ignore my last email, it did not include the
  changes.

 arch/arm/Makefile                  | 4 ++++
 arch/arm/include/asm/vmlinux.lds.h | 4 ++++
 arch/arm/kernel/vmlinux.lds.S      | 1 +
 arch/arm64/Makefile                | 4 ++++
 arch/arm64/kernel/vmlinux.lds.S    | 5 +++++
 security/Kconfig.hardening         | 8 ++++++++
 6 files changed, 26 insertions(+)

Comments

Linus Walleij March 3, 2021, 3:04 p.m. UTC | #1
On Tue, Feb 23, 2021 at 3:36 AM Jian Cai <jiancai@google.com> wrote:

> This patch adds CONFIG_HARDEN_SLS_ALL, which can be used to turn on
> -mharden-sls=all. This mitigates the straight-line speculation
> vulnerability, i.e. speculative execution of the instruction following
> some unconditional jumps. Note that -mharden-sls= accepts the options
> below; this config turns on the strongest one.
>
> all: enable all mitigations against Straight Line Speculation that are implemented.
> none: disable all mitigations against Straight Line Speculation.
> retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> blr: enable the mitigation against Straight Line Speculation for BLR instructions.

I heard about compiler protection for this, so nice to see it happening!

Would you happen to know if there is any plan to do the same for GCC?
I know you folks at Google like LLVM, but if you know, let us know.

> +config HARDEN_SLS_ALL
> +       bool "enable SLS vulnerability hardening"

I would go in and also edit arch/arm/mm/Kconfig: under
config HARDEN_BRANCH_PREDICTOR, add
select HARDEN_SLS_ALL

Because if the user wants hardening for branch prediction
in general then the user certainly wants this as well, if
available. The help text for that option literally says:

 "This config option will take CPU-specific actions to harden
   the branch predictor against aliasing attacks and may rely on
   specific instruction sequences or control bits being set by
   the system firmware."

Notice this only turns on for CPUs with CPU_SPECTRE
defined, which makes sense. Also, it is default y, which fulfils
Will's request that it be turned on by default where
applicable. Notably, it will not be turned on for pre-v7 silicon,
which would be unhelpful as those cores don't suffer from
these bugs.
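
Concretely, something like this (an untested sketch from memory of
that Kconfig, so the exact lines may be off):

config HARDEN_BRANCH_PREDICTOR
        bool "Harden the branch predictor against aliasing attacks" if EXPERT
        depends on CPU_SPECTRE
        select HARDEN_SLS_ALL   # the proposed addition
        default y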

Reading Kristof's compiler patch here:
https://reviews.llvm.org/rG195f44278c4361a4a32377a98a1e3a15203d3647

I take it that for affected CPUs we should also patch all assembly
in the kernel containing a RET, BR or BLR with a
DSB SY followed by an ISB?

I suppose we would also need to look for any mov PC, <>
code...

I guess we can invent an "SB" macro to mimic what AArch64 is
doing so the code is easy to read. (Thinking aloud.)
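
Something like this perhaps (an untested sketch of a hypothetical
arm32 macro, mirroring the barrier sequence above):

        /* hypothetical addition to arch/arm/include/asm/assembler.h */
        .macro  SB
        dsb     sy      @ stop speculation at the barrier
        isb             @ and flush anything already fetched
        .endm

Then the hand-written assembly return paths could get an SB dropped
in after each branch.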

Yours,
Linus Walleij
Jian Cai March 4, 2021, 11:22 p.m. UTC | #2
On Wed, Mar 3, 2021 at 7:04 AM Linus Walleij <linus.walleij@linaro.org> wrote:
>
> On Tue, Feb 23, 2021 at 3:36 AM Jian Cai <jiancai@google.com> wrote:
>
> > This patch adds CONFIG_HARDEN_SLS_ALL, which can be used to turn on
> > -mharden-sls=all. This mitigates the straight-line speculation
> > vulnerability, i.e. speculative execution of the instruction following
> > some unconditional jumps. Note that -mharden-sls= accepts the options
> > below; this config turns on the strongest one.
> >
> > all: enable all mitigations against Straight Line Speculation that are implemented.
> > none: disable all mitigations against Straight Line Speculation.
> > retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> > blr: enable the mitigation against Straight Line Speculation for BLR instructions.
>
> I heard about compiler protection for this, so nice to see it happening!
>
> Would you happen to know if there is any plan to do the same for GCC?
> I know you folks at Google like LLVM, but if you know, let us know.

I think gcc also has these options.
https://gcc.gnu.org/onlinedocs/gcc/AArch64-Options.html
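
(If I read those docs right, GCC's spelling of the flag also takes a
comma-separated list, e.g. this hypothetical invocation:

        aarch64-linux-gnu-gcc -mharden-sls=retbr,blr -c foo.c

while =all and =none match Clang's meaning.)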

>
> > +config HARDEN_SLS_ALL
> > +       bool "enable SLS vulnerability hardening"
>
> I would go in and also edit arch/arm/mm/Kconfig: under
> config HARDEN_BRANCH_PREDICTOR, add
> select HARDEN_SLS_ALL
>
> Because if the user wants hardening for branch prediction
> in general then the user certainly wants this as well, if
> available. The help text for that option literally says:
>
>  "This config option will take CPU-specific actions to harden
>    the branch predictor against aliasing attacks and may rely on
>    specific instruction sequences or control bits being set by
>    the system firmware."
>
> Notice this only turns on for CPUs with CPU_SPECTRE
> defined, which makes sense. Also, it is default y, which fulfils
> Will's request that it be turned on by default where
> applicable. Notably, it will not be turned on for pre-v7 silicon,
> which would be unhelpful as those cores don't suffer from
> these bugs.

Thanks for the suggestion. I will update the patch.

>
> Reading Kristof's compiler patch here:
> https://reviews.llvm.org/rG195f44278c4361a4a32377a98a1e3a15203d3647
>
> I take it that for affected CPUs we should also patch all assembly
> in the kernel containing a RET, BR or BLR with a
> DSB SY followed by an ISB?
>
> I suppose we would also need to look for any mov PC, <>
> code...
>
> I guess we can invent an "SB" macro to mimic what AArch64 is
> doing so the code is easy to read. (Thinking aloud.)
>
> Yours,
> Linus Walleij
Linus Walleij March 6, 2021, 12:25 p.m. UTC | #3
On Fri, Mar 5, 2021 at 12:23 AM Jian Cai <jiancai@google.com> wrote:
> On Wed, Mar 3, 2021 at 7:04 AM Linus Walleij <linus.walleij@linaro.org> wrote:
> >
> > On Tue, Feb 23, 2021 at 3:36 AM Jian Cai <jiancai@google.com> wrote:
> >
> > > This patch adds CONFIG_HARDEN_SLS_ALL, which can be used to turn on
> > > -mharden-sls=all. This mitigates the straight-line speculation
> > > vulnerability, i.e. speculative execution of the instruction following
> > > some unconditional jumps. Note that -mharden-sls= accepts the options
> > > below; this config turns on the strongest one.
> > >
> > > all: enable all mitigations against Straight Line Speculation that are implemented.
> > > none: disable all mitigations against Straight Line Speculation.
> > > retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> > > blr: enable the mitigation against Straight Line Speculation for BLR instructions.
> >
> > I heard about compiler protection for this, so nice to see it happening!
> >
> > Would you happen to know if there is any plan to do the same for GCC?
> > I know you folks at Google like LLVM, but if you know, let us know.
>
> I think gcc also has these options.
> https://gcc.gnu.org/onlinedocs/gcc/AArch64-Options.html

And how does that work with this part of your patch:

+#define SLS_TEXT                                       \
+       ALIGN_FUNCTION();                              \
+       *(.text.__llvm_slsblr_thunk_*)

This does not look compiler agnostic?

Yours,
Linus Walleij
Jian Cai March 10, 2021, 4:43 a.m. UTC | #4
On Sat, Mar 6, 2021 at 4:25 AM Linus Walleij <linus.walleij@linaro.org> wrote:
>
> On Fri, Mar 5, 2021 at 12:23 AM Jian Cai <jiancai@google.com> wrote:
> > On Wed, Mar 3, 2021 at 7:04 AM Linus Walleij <linus.walleij@linaro.org> wrote:
> > >
> > > On Tue, Feb 23, 2021 at 3:36 AM Jian Cai <jiancai@google.com> wrote:
> > >
> > > > This patch adds CONFIG_HARDEN_SLS_ALL, which can be used to turn on
> > > > -mharden-sls=all. This mitigates the straight-line speculation
> > > > vulnerability, i.e. speculative execution of the instruction following
> > > > some unconditional jumps. Note that -mharden-sls= accepts the options
> > > > below; this config turns on the strongest one.
> > > >
> > > > all: enable all mitigations against Straight Line Speculation that are implemented.
> > > > none: disable all mitigations against Straight Line Speculation.
> > > > retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> > > > blr: enable the mitigation against Straight Line Speculation for BLR instructions.
> > >
> > > I heard about compiler protection for this, so nice to see it happening!
> > >
> > > Would you happen to know if there is any plan to do the same for GCC?
> > > I know you folks at Google like LLVM, but if you know, let us know.
> >
> > I think gcc also has these options.
> > https://gcc.gnu.org/onlinedocs/gcc/AArch64-Options.html
>
> And how does that work with this part of your patch:
>
> +#define SLS_TEXT                                       \
> +       ALIGN_FUNCTION();                              \
> +       *(.text.__llvm_slsblr_thunk_*)
>
> This does not look compiler agnostic?
>

You are right, GCC does generate different orphan section names. I
will address it in the next version of the patch. Also, it seems only
arm64 gcc supports -mharden-sls=* at the moment; arm32 gcc does not
support it yet. I don't know if there is any plan to implement it for
32-bit gcc, but should we patch the arm32 linker script preemptively,
assuming the sections will follow the same naming pattern as Clang's,
so the kernel would not fail to boot once the flag is implemented?

Thanks,
Jian

> Yours,
> Linus Walleij
Linus Walleij March 22, 2021, 11:45 a.m. UTC | #5
On Wed, Mar 10, 2021 at 5:43 AM Jian Cai <jiancai@google.com> wrote:
> On Sat, Mar 6, 2021 at 4:25 AM Linus Walleij <linus.walleij@linaro.org> wrote:
> > On Fri, Mar 5, 2021 at 12:23 AM Jian Cai <jiancai@google.com> wrote:
> > > On Wed, Mar 3, 2021 at 7:04 AM Linus Walleij <linus.walleij@linaro.org> wrote:

> > > I think gcc also has these options.
> > > https://gcc.gnu.org/onlinedocs/gcc/AArch64-Options.html
> >
> > And how does that work with this part of your patch:
> >
> > +#define SLS_TEXT                                       \
> > +       ALIGN_FUNCTION();                              \
> > +       *(.text.__llvm_slsblr_thunk_*)
> >
> > This does not look compiler agnostic?
>
> You are right, GCC does generate different orphan section names. I
> will address it in the next version of the patch. Also, it seems only
> arm64 gcc supports -mharden-sls=* at the moment; arm32 gcc does not
> support it yet. I don't know if there is any plan to implement it for
> 32-bit gcc, but should we patch the arm32 linker script preemptively,
> assuming the sections will follow the same naming pattern as Clang's,
> so the kernel would not fail to boot once the flag is implemented?

I think the best thing is to have something like this:
Implement a macro such as this in
include/linux/compiler-clang.h

#define SLS_TEXT_SECTION *(.text.__llvm_slsblr_thunk_*)

then the corresponding macro in include/linux/compiler-gcc.h,
but here also add a

#define SLS_TEXT_SECTION #error "no compiler support"

if the compiler version does not have this.
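
So roughly (illustrative only; I have not checked what section name
GCC will actually emit, and the #error is just to make the gap loud):

/* include/linux/compiler-clang.h */
#define SLS_TEXT_SECTION *(.text.__llvm_slsblr_thunk_*)

/* include/linux/compiler-gcc.h */
#ifdef CONFIG_HARDEN_SLS_ALL
#error "SLS thunk section name not known for this GCC"
#endif
#define SLS_TEXT_SECTION

and then both vmlinux.lds.h files consume SLS_TEXT_SECTION instead of
hardcoding the Clang name.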

I don't know the exact best approach, sadly; as the patch
looks now it seems a bit fragile. I wonder if you get linker
warnings when this section is unused?

Yours,
Linus Walleij
Jian Cai March 23, 2021, 10:39 p.m. UTC | #6
Thanks for the suggestion. I've sent an inquiry to the author of
-mharden-sls* in GCC and hopefully that will shed some more light. We
do get warnings for orphan sections when using lld. The other linkers
do not seem to emit such warnings, although the boot failure also does
not seem to happen with them.

On Mon, Mar 22, 2021 at 4:45 AM Linus Walleij <linus.walleij@linaro.org> wrote:
>
> On Wed, Mar 10, 2021 at 5:43 AM Jian Cai <jiancai@google.com> wrote:
> > On Sat, Mar 6, 2021 at 4:25 AM Linus Walleij <linus.walleij@linaro.org> wrote:
> > > On Fri, Mar 5, 2021 at 12:23 AM Jian Cai <jiancai@google.com> wrote:
> > > > On Wed, Mar 3, 2021 at 7:04 AM Linus Walleij <linus.walleij@linaro.org> wrote:
>
> > > > I think gcc also has these options.
> > > > https://gcc.gnu.org/onlinedocs/gcc/AArch64-Options.html
> > >
> > > And how does that work with this part of your patch:
> > >
> > > +#define SLS_TEXT                                       \
> > > +       ALIGN_FUNCTION();                              \
> > > +       *(.text.__llvm_slsblr_thunk_*)
> > >
> > > This does not look compiler agnostic?
> >
> > You are right, GCC does generate different orphan section names. I
> > will address it in the next version of the patch. Also, it seems only
> > arm64 gcc supports -mharden-sls=* at the moment; arm32 gcc does not
> > support it yet. I don't know if there is any plan to implement it for
> > 32-bit gcc, but should we patch the arm32 linker script preemptively,
> > assuming the sections will follow the same naming pattern as Clang's,
> > so the kernel would not fail to boot once the flag is implemented?
>
> I think the best thing is to have something like this:
> Implement a macro such as this in
> include/linux/compiler-clang.h
>
> #define SLS_TEXT_SECTION *(.text.__llvm_slsblr_thunk_*)
>
> then the corresponding macro in include/linux/compiler-gcc.h,
> but here also add a
>
> #define SLS_TEXT_SECTION #error "no compiler support"
>
> if the compiler version does not have this.
>
> I don't know the exact best approach, sadly; as the patch
> looks now it seems a bit fragile. I wonder if you get linker
> warnings when this section is unused?
>
> Yours,
> Linus Walleij

Patch

diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index 4aaec9599e8a..11d89ef32da9 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -48,6 +48,10 @@  CHECKFLAGS	+= -D__ARMEL__
 KBUILD_LDFLAGS	+= -EL
 endif
 
+ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
+KBUILD_CFLAGS  += -mharden-sls=all
+endif
+
 #
 # The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and
 # later may result in code being generated that handles signed short and signed
diff --git a/arch/arm/include/asm/vmlinux.lds.h b/arch/arm/include/asm/vmlinux.lds.h
index 4a91428c324d..c7f9717511ca 100644
--- a/arch/arm/include/asm/vmlinux.lds.h
+++ b/arch/arm/include/asm/vmlinux.lds.h
@@ -145,3 +145,7 @@ 
 		__edtcm_data = .;					\
 	}								\
 	. = __dtcm_start + SIZEOF(.data_dtcm);
+
+#define SLS_TEXT							\
+		ALIGN_FUNCTION();					\
+		*(.text.__llvm_slsblr_thunk_*)
diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index f7f4620d59c3..e71f2bc97bae 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -63,6 +63,7 @@  SECTIONS
 	.text : {			/* Real text segment		*/
 		_stext = .;		/* Text and read-only data	*/
 		ARM_TEXT
+		SLS_TEXT
 	}
 
 #ifdef CONFIG_DEBUG_ALIGN_RODATA
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 90309208bb28..ca7299b356a9 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -34,6 +34,10 @@  $(warning LSE atomics not supported by binutils)
   endif
 endif
 
+ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
+KBUILD_CFLAGS  += -mharden-sls=all
+endif
+
 cc_has_k_constraint := $(call try-run,echo				\
 	'int main(void) {						\
 		asm volatile("and w0, w0, %w0" :: "K" (4294967295));	\
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 4c0b0c89ad59..f8912e42ffcd 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -93,6 +93,10 @@  jiffies = jiffies_64;
 #define TRAMP_TEXT
 #endif
 
+#define SLS_TEXT					\
+	ALIGN_FUNCTION();				\
+	*(.text.__llvm_slsblr_thunk_*)
+
 /*
  * The size of the PE/COFF section that covers the kernel image, which
  * runs from _stext to _edata, must be a round multiple of the PE/COFF
@@ -144,6 +148,7 @@  SECTIONS
 			HIBERNATE_TEXT
 			TRAMP_TEXT
 			*(.fixup)
+			SLS_TEXT
 			*(.gnu.warning)
 		. = ALIGN(16);
 		*(.got)			/* Global offset table		*/
diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
index 269967c4fc1b..db76ad732c14 100644
--- a/security/Kconfig.hardening
+++ b/security/Kconfig.hardening
@@ -121,6 +121,14 @@  choice
 
 endchoice
 
+config HARDEN_SLS_ALL
+	bool "enable SLS vulnerability hardening"
+	depends on $(cc-option,-mharden-sls=all)
+	help
+	  Enables straight-line speculation vulnerability hardening. This inserts
+	  speculation barrier instruction sequences after certain unconditional jumps
+	  to prevent speculative execution past those barriers.
+
 config GCC_PLUGIN_STRUCTLEAK_VERBOSE
 	bool "Report forcefully initialized variables"
 	depends on GCC_PLUGIN_STRUCTLEAK
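
With the option enabled in a Clang build, a quick way to confirm the
mitigation took effect is to check that the thunks made it into the
final image (a sketch; the symbol names are compiler-specific):

        ./scripts/config -e HARDEN_SLS_ALL
        make olddefconfig && make -j"$(nproc)"
        nm vmlinux | grep __llvm_slsblr_thunk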