
[v4] ARM: Implement SLS mitigation

Message ID 20210219230841.875875-1-jiancai@google.com (mailing list archive)
State: New, archived
Series: [v4] ARM: Implement SLS mitigation

Commit Message

Jian Cai Feb. 19, 2021, 11:08 p.m. UTC
This patch adds CONFIG_HARDEN_SLS_ALL, which can be used to turn on
-mharden-sls=all. That flag mitigates the straight-line speculation
vulnerability, i.e. speculative execution of the instruction following
some unconditional jumps. Note that -mharden-sls= takes the options
below; this config turns on the strongest one.

all: enable all mitigations against Straight Line Speculation that are implemented.
none: disable all mitigations against Straight Line Speculation.
retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
blr: enable the mitigation against Straight Line Speculation for BLR instructions.
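
For a concrete picture of what the flag does, this is roughly the code
the compiler generates (an illustrative sketch based on the LLVM reviews
linked below, not actual output from this patch):

	// retbr: a speculation barrier is placed after returns and
	// indirect branches, so nothing can be speculatively
	// executed past them
	ret
	dsb	nsh		// a single "sb" where the target has it
	isb

	// blr: an indirect call is split into BL plus BR through a
	// compiler-generated thunk; these thunks are what populate the
	// .text.__llvm_slsblr_thunk_* sections collected by the linker
	// script changes in this patch
	bl	__llvm_slsblr_thunk_x0
	...
__llvm_slsblr_thunk_x0:
	br	x0
	dsb	nsh
	isb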

Links:
https://reviews.llvm.org/D93221
https://reviews.llvm.org/D81404
https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2

Suggested-by: Manoj Gupta <manojgupta@google.com>
Suggested-by: Nick Desaulniers <ndesaulniers@google.com>
Suggested-by: Nathan Chancellor <nathan@kernel.org>
Suggested-by: David Laight <David.Laight@aculab.com>
Suggested-by: Will Deacon <will@kernel.org>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Jian Cai <jiancai@google.com>
---

Changes v3 -> v4:
  Address Nathan's comment and replace def_bool with depends on in
  HARDEN_SLS_ALL.

 arch/arm/Makefile                  |  4 ++++
 arch/arm/include/asm/vmlinux.lds.h |  4 ++++
 arch/arm/kernel/vmlinux.lds.S      |  1 +
 arch/arm64/Makefile                |  4 ++++
 arch/arm64/kernel/vmlinux.lds.S    |  5 +++++
 security/Kconfig.hardening         | 10 ++++++++++
 6 files changed, 28 insertions(+)

Comments

Russell King (Oracle) Feb. 21, 2021, 10:13 a.m. UTC | #1
On Fri, Feb 19, 2021 at 03:08:13PM -0800, Jian Cai wrote:
> diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
> index 269967c4fc1b..146b75a79d9e 100644
> --- a/security/Kconfig.hardening
> +++ b/security/Kconfig.hardening
> @@ -121,6 +121,16 @@ choice
>  
>  endchoice
>  
> +config HARDEN_SLS_ALL
> +	bool "enable SLS vulnerability hardening"
> +	default n

Please get rid of this useless "default n"

> +	depends on $(cc-option,-mharden-sls=all)
> +	help
> +	  Enables straight-line speculation vulnerability hardening on ARM and ARM64
> +	  architectures. It inserts speculation barrier sequences (SB or DSB+ISB
> +	  depending on the target architecture) after RET and BR, and replaces
> +	  BLR with a BL+BR sequence.

Given that this is in an architecture-independent Kconfig file, and it
detects support in CC for this feature, why should this help text be
written to be specific to a couple of architectures? Will this feature
only ever be available on these two architectures? What if someone adds
support for another architecture?
Will Deacon Feb. 22, 2021, 11:58 a.m. UTC | #2
On Fri, Feb 19, 2021 at 03:08:13PM -0800, Jian Cai wrote:
> This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
> -mharden-sls=all, which mitigates the straight-line speculation
> vulnerability, speculative execution of the instruction following some
> unconditional jumps. Notice -mharden-sls= has other options as below,
> and this config turns on the strongest option.
> 
> all: enable all mitigations against Straight Line Speculation that are implemented.
> none: disable all mitigations against Straight Line Speculation.
> retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> blr: enable the mitigation against Straight Line Speculation for BLR instructions.
> 
> Links:
> https://reviews.llvm.org/D93221
> https://reviews.llvm.org/D81404
> https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
> https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2
> 
> Suggested-by: Manoj Gupta <manojgupta@google.com>
> Suggested-by: Nick Desaulniers <ndesaulniers@google.com>
> Suggested-by: Nathan Chancellor  <nathan@kernel.org>
> Suggested-by: David Laight <David.Laight@aculab.com>
> Suggested-by: Will Deacon <will@kernel.org>
> Reviewed-by: Nathan Chancellor <nathan@kernel.org>
> Signed-off-by: Jian Cai <jiancai@google.com>
> ---

Please can you reply to my previous questions?

https://lore.kernel.org/linux-arm-kernel/20210217094859.GA3706@willie-the-truck/

(apologies if you did, but I don't see them in the archive or my inbox)

Will
Jian Cai Feb. 22, 2021, 9:50 p.m. UTC | #3
Please see my comments inlined below.

Thanks,
Jian

On Mon, Feb 22, 2021 at 3:58 AM Will Deacon <will@kernel.org> wrote:
>
> On Fri, Feb 19, 2021 at 03:08:13PM -0800, Jian Cai wrote:
> > This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
> > -mharden-sls=all, which mitigates the straight-line speculation
> > vulnerability, speculative execution of the instruction following some
> > unconditional jumps. Notice -mharden-sls= has other options as below,
> > and this config turns on the strongest option.
> >
> > all: enable all mitigations against Straight Line Speculation that are implemented.
> > none: disable all mitigations against Straight Line Speculation.
> > retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> > blr: enable the mitigation against Straight Line Speculation for BLR instructions.
> >
> > Links:
> > https://reviews.llvm.org/D93221
> > https://reviews.llvm.org/D81404
> > https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
> > https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2
> >
> > Suggested-by: Manoj Gupta <manojgupta@google.com>
> > Suggested-by: Nick Desaulniers <ndesaulniers@google.com>
> > Suggested-by: Nathan Chancellor  <nathan@kernel.org>
> > Suggested-by: David Laight <David.Laight@aculab.com>
> > Suggested-by: Will Deacon <will@kernel.org>
> > Reviewed-by: Nathan Chancellor <nathan@kernel.org>
> > Signed-off-by: Jian Cai <jiancai@google.com>
> > ---
>
> Please can you reply to my previous questions?
>
> https://lore.kernel.org/linux-arm-kernel/20210217094859.GA3706@willie-the-truck/
>
> (apologies if you did, but I don't see them in the archive or my inbox)

I should have clarified the suggested-by tag was in regard to the
Kconfig text change. Regarding your earlier questions, please see my
comments below.

> So I think that either we enable this unconditionally, or we don't enable it
> at all (and people can hack their CFLAGS themselves if they want to).

Not sure if this answers your question, but this config should provide
a way for people to turn on the mitigation at their own risk.

> It would be helpful for one of the Arm folks to chime in, as I'm yet to see any
> evidence that this is actually exploitable. Is it any worse than Spectre-v1,
> where we _don't_ have a compiler mitigation?

> Finally, do we have to worry about our assembly code?

I am not sure if there are any plans to protect assembly code and I
will leave it to the Arm folks since they know a whole lot better. But
even without that part, we should still have better protection,
especially when overhead does not look too bad: I did some preliminary
experiments on ChromeOS, code size of vmlinux increased 3%, and there
were no noticeable changes to run-time performance of the benchmarks I
used.
Will Deacon Feb. 23, 2021, 10:04 a.m. UTC | #4
On Mon, Feb 22, 2021 at 01:50:06PM -0800, Jian Cai wrote:
> Please see my comments inlined below.
> 
> Thanks,
> Jian
> 
> On Mon, Feb 22, 2021 at 3:58 AM Will Deacon <will@kernel.org> wrote:
> >
> > On Fri, Feb 19, 2021 at 03:08:13PM -0800, Jian Cai wrote:
> > > This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
> > > -mharden-sls=all, which mitigates the straight-line speculation
> > > vulnerability, speculative execution of the instruction following some
> > > unconditional jumps. Notice -mharden-sls= has other options as below,
> > > and this config turns on the strongest option.
> > >
> > > all: enable all mitigations against Straight Line Speculation that are implemented.
> > > none: disable all mitigations against Straight Line Speculation.
> > > retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> > > blr: enable the mitigation against Straight Line Speculation for BLR instructions.
> > >
> > > Links:
> > > https://reviews.llvm.org/D93221
> > > https://reviews.llvm.org/D81404
> > > https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
> > > https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2
> > >
> > > Suggested-by: Manoj Gupta <manojgupta@google.com>
> > > Suggested-by: Nick Desaulniers <ndesaulniers@google.com>
> > > Suggested-by: Nathan Chancellor  <nathan@kernel.org>
> > > Suggested-by: David Laight <David.Laight@aculab.com>
> > > Suggested-by: Will Deacon <will@kernel.org>
> > > Reviewed-by: Nathan Chancellor <nathan@kernel.org>
> > > Signed-off-by: Jian Cai <jiancai@google.com>
> > > ---
> >
> > Please can you reply to my previous questions?
> >
> > https://lore.kernel.org/linux-arm-kernel/20210217094859.GA3706@willie-the-truck/
> >
> > (apologies if you did, but I don't see them in the archive or my inbox)
> 
> I should have clarified the suggested-by tag was in regard to the
> Kconfig text change. Regarding your earlier questions, please see my
> comments below.
> 
> > So I think that either we enable this unconditionally, or we don't enable it
> > at all (and people can hack their CFLAGS themselves if they want to).
> 
> Not sure if this answers your question, but this config should provide
> a way for people to turn on the mitigation at their own risk.

I'm not sure I see the point; either it's needed or it's not. I wonder if
there's a plan to fix this in future CPUs (another question for the Arm
folks).

> > It would be helpful for one of the Arm folks to chime in, as I'm yet to see any
> > evidence that this is actually exploitable. Is it any worse than Spectre-v1,
> > where we _don't_ have a compiler mitigation?
> 
> > Finally, do we have to worry about our assembly code?
> 
> I am not sure if there are any plans to protect assembly code and I
> will leave it to the Arm folks since they know a whole lot better. But
> even without that part, we should still have better protection,
> especially when overhead does not look too bad: I did some preliminary
> experiments on ChromeOS, code size of vmlinux increased 3%, and there
> were no noticeable changes to run-time performance of the benchmarks I
> used.

If the mitigation is required, I'm not sure I see a lot of point in only
doing a half-baked job of it. It feels a bit like a box-ticking exercise,
in which case any overhead is too much.

Will
Linus Walleij March 3, 2021, 3:18 p.m. UTC | #5
On Tue, Feb 23, 2021 at 11:05 AM Will Deacon <will@kernel.org> wrote:
> On Mon, Feb 22, 2021 at 01:50:06PM -0800, Jian Cai wrote:
> > I am not sure if there are any plans to protect assembly code and I
> > will leave it to the Arm folks since they know a whole lot better. But
> > even without that part, we should still have better protection,
> > especially when overhead does not look too bad: I did some preliminary
> > experiments on ChromeOS, code size of vmlinux increased 3%, and there
> > were no noticeable changes to run-time performance of the benchmarks I
> > used.
>
> If the mitigation is required, I'm not sure I see a lot of point in only
> doing a half-baked job of it. It feels a bit like a box-ticking exercise,
> in which case any overhead is too much.

I wrote some suggestions on follow-ups in my reply, and I can
help out doing some of the patches, I think.

Since ARM32 RET is mov pc, <>
git grep 'mov.*pc,' | wc -l gives 93 sites in arch/arm.
I suppose these need to come out:

mov pc, lr
dsb(nsh);
isb();

As ARM32 doesn't have sb, my idea is to make a macro
"sb" that resolves to dsb/isb when this is enabled, and then
we could start patching all the assembly users with that as
well. I need the Kconfig symbol from this patch though.
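
A minimal sketch of such a macro (illustrative; the file placement
and exact form are assumptions, still to be worked out):

	/* e.g. in arch/arm/include/asm/assembler.h */
#ifdef CONFIG_HARDEN_SLS_ALL
	.macro	sb
	dsb	nsh
	isb
	.endm
#else
	.macro	sb	@ no-op when the mitigation is off
	.endm
#endif

A return site would then end with "mov pc, lr" followed by "sb",
and the barriers disappear entirely in unhardened builds.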

I also suggest selecting this mitigation as part of
HARDEN_BRANCH_PREDICTOR, on the grounds that either
you want all of these mitigations or none of them.

Yours,
Linus Walleij
David Laight March 3, 2021, 3:29 p.m. UTC | #6
From: Linus Walleij
> Sent: 03 March 2021 15:19
> 
> On Tue, Feb 23, 2021 at 11:05 AM Will Deacon <will@kernel.org> wrote:
> > On Mon, Feb 22, 2021 at 01:50:06PM -0800, Jian Cai wrote:
> > > I am not sure if there are any plans to protect assembly code and I
> > > will leave it to the Arm folks since they know a whole lot better. But
> > > even without that part, we should still have better protection,
> > > especially when overhead does not look too bad: I did some preliminary
> > > experiments on ChromeOS, code size of vmlinux increased 3%, and there
> > > were no noticeable changes to run-time performance of the benchmarks I
> > > used.
> >
> > If the mitigation is required, I'm not sure I see a lot of point in only
> > doing a half-baked job of it. It feels a bit like a box-ticking exercise,
> > in which case any overhead is too much.
> 
> I wrote some suggestions on follow-ups in my reply, and I can
> help out doing some of the patches, I think.
> 
> Since ARM32 RET is mov pc, <>
> git grep 'mov.*pc,' | wc -l gives 93 sites in arch/arm.
> I suppose these need to come out:
> 
> mov pc, lr
> dsb(nsh);
> isb();

Won't that go horribly wrong for conditional returns?

	David

Linus Walleij March 3, 2021, 3:31 p.m. UTC | #7
On Wed, Mar 3, 2021 at 4:29 PM David Laight <David.Laight@aculab.com> wrote:
> > On Tue, Feb 23, 2021 at 11:05 AM Will Deacon <will@kernel.org> wrote:

> > I wrote some suggestions on follow-ups in my reply, and I can
> > help out doing some of the patches, I think.
> >
> > Since ARM32 RET is mov pc, <>
> > git grep 'mov.*pc,' | wc -l gives 93 sites in arch/arm.
> > I suppose these need to come out:
> >
> > mov pc, lr
> > dsb(nsh);
> > isb();
>
> Won't that go horribly wrong for conditional returns?

It will, so I would not insert it after those. It has to be
on a case-by-case basis; I am not planning any
search-and-replace operations.
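
For example (an illustrative sketch, not a real site from arch/arm):

	cmp	r0, #0
	movne	pc, lr		@ conditional return, taken only if r0 != 0
	@ a dsb/isb pair blindly appended here would also run on the
	@ fall-through (r0 == 0) path, hence the case-by-case review
	...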

Yours,
Linus Walleij

Patch

diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index 4aaec9599e8a..11d89ef32da9 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -48,6 +48,10 @@  CHECKFLAGS	+= -D__ARMEL__
 KBUILD_LDFLAGS	+= -EL
 endif
 
+ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
+KBUILD_CFLAGS  += -mharden-sls=all
+endif
+
 #
 # The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and
 # later may result in code being generated that handles signed short and signed
diff --git a/arch/arm/include/asm/vmlinux.lds.h b/arch/arm/include/asm/vmlinux.lds.h
index 4a91428c324d..c7f9717511ca 100644
--- a/arch/arm/include/asm/vmlinux.lds.h
+++ b/arch/arm/include/asm/vmlinux.lds.h
@@ -145,3 +145,7 @@ 
 		__edtcm_data = .;					\
 	}								\
 	. = __dtcm_start + SIZEOF(.data_dtcm);
+
+#define SLS_TEXT							\
+		ALIGN_FUNCTION();					\
+		*(.text.__llvm_slsblr_thunk_*)
diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index f7f4620d59c3..e71f2bc97bae 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -63,6 +63,7 @@  SECTIONS
 	.text : {			/* Real text segment		*/
 		_stext = .;		/* Text and read-only data	*/
 		ARM_TEXT
+		SLS_TEXT
 	}
 
 #ifdef CONFIG_DEBUG_ALIGN_RODATA
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 90309208bb28..ca7299b356a9 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -34,6 +34,10 @@  $(warning LSE atomics not supported by binutils)
   endif
 endif
 
+ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
+KBUILD_CFLAGS  += -mharden-sls=all
+endif
+
 cc_has_k_constraint := $(call try-run,echo				\
 	'int main(void) {						\
 		asm volatile("and w0, w0, %w0" :: "K" (4294967295));	\
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 4c0b0c89ad59..f8912e42ffcd 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -93,6 +93,10 @@  jiffies = jiffies_64;
 #define TRAMP_TEXT
 #endif
 
+#define SLS_TEXT					\
+	ALIGN_FUNCTION();				\
+	*(.text.__llvm_slsblr_thunk_*)
+
 /*
  * The size of the PE/COFF section that covers the kernel image, which
  * runs from _stext to _edata, must be a round multiple of the PE/COFF
@@ -144,6 +148,7 @@  SECTIONS
 			HIBERNATE_TEXT
 			TRAMP_TEXT
 			*(.fixup)
+			SLS_TEXT
 			*(.gnu.warning)
 		. = ALIGN(16);
 		*(.got)			/* Global offset table		*/
diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
index 269967c4fc1b..146b75a79d9e 100644
--- a/security/Kconfig.hardening
+++ b/security/Kconfig.hardening
@@ -121,6 +121,16 @@  choice
 
 endchoice
 
+config HARDEN_SLS_ALL
+	bool "enable SLS vulnerability hardening"
+	default n
+	depends on $(cc-option,-mharden-sls=all)
+	help
+	  Enables straight-line speculation vulnerability hardening on ARM and ARM64
+	  architectures. It inserts speculation barrier sequences (SB or DSB+ISB
	  depending on the target architecture) after RET and BR, and replaces
	  BLR with a BL+BR sequence.
+
 config GCC_PLUGIN_STRUCTLEAK_VERBOSE
 	bool "Report forcefully initialized variables"
 	depends on GCC_PLUGIN_STRUCTLEAK